Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.

The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.

While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.

  • Deestan@lemmy.world · 3 months ago

    Tech guy here.

    This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.

    • Johnmannesca@lemmy.world · 3 months ago

      By definition, everyone has the potential for criminality, especially those applying and enforcing the law; as a matter of fact, not even the AI is above the law, unless that’s somehow changing. We need a lot of things on Earth first, like an IoT consortium for example, but an AI bill of rights in the US or EU should hopefully set a precedent for the rest of the world.

      • Deestan@lemmy.world · 3 months ago

        The AI is a pile of applied statistical models. The humans in charge of training it, testing it, and acting on its output have full control of and responsibility for anything that comes out of it. Personifying an AI system, or otherwise separating it from the will of its controllers, is dangerous because it erodes responsibility.

        Racist cops have used “I go where the crime is” as an excuse to basically hunt minorities for sport. Do not allow them to say “the AI model said this was efficient” and pretend it is not their own full and knowing bias directing them.

      • theneverfox@pawb.social · 3 months ago

        That’s not even the problem here… AI, big data, a consultant — it’s all just an excuse to point to when they do what they wanted to do anyway: profile “criminals” and harass them.