Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalations, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • Feathercrown@lemmy.world · 9 months ago

    “Some say they should disarm them, others like to posture. We have it! Let’s use it!”

    That’s an amazing quote.

    As someone who spends a decent amount of time explaining how AI is not like the movies, this study(?)/news sounds an awful lot like the movies lol

    • Meowoem@sh.itjust.works · 9 months ago

      Because it is a movie: they’re purposely using it in a way it wasn’t intended to work. Try it yourself and see how often it couches its replies until you convince it to pretend to be a general or to play the part of a character.

      They’ve asked it to generate fiction, it’s given them fiction, and now they’re click-baiting a pointless story with a dumb headline.