Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic and several other AI companies were placed in a war simulator and tasked with finding a path to world peace. Almost all of them suggested actions that led to sudden escalation, up to and including nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • theodewere@kbin.social · 9 months ago

    it comprehends context incredibly well… this one played through scenarios and saw that both China and Russia are on a path to all-out war…

    • Jack Riddle@sh.itjust.works · 9 months ago

      It produces the statistically most likely token based on previous data. It doesn’t “comprehend” anything, and it can’t “play through scenarios”. It is just a more advanced form of autocomplete.
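      A minimal toy sketch of what “statistically most likely token” means, using a bigram word model (purely illustrative; real LLMs are neural networks over subword tokens, not lookup tables of counts):

      ```python
      # Toy "autocomplete": count which word most often follows each word
      # in some training text, then always emit the most frequent follower.
      from collections import Counter, defaultdict

      training_text = "we want peace we want peace we use it".split()

      # Count how often each word follows each preceding word.
      follows = defaultdict(Counter)
      for prev, nxt in zip(training_text, training_text[1:]):
          follows[prev][nxt] += 1

      def most_likely_next(word):
          # Greedily pick the single most frequent continuation.
          return follows[word].most_common(1)[0][0]

      def autocomplete(word, steps):
          out = [word]
          for _ in range(steps):
              out.append(most_likely_next(out[-1]))
          return " ".join(out)

      print(autocomplete("we", 2))  # → "we want peace"
      ```

      The model has no notion of what “peace” means; it only reproduces the statistics of its training data, one token at a time.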