Except politics of course. We all know everyone else is wrong.

  • qooqie@lemmy.world · 11 months ago

    Politics, everyone is wrong except for me. It’s exhausting being this smart tbh 😮‍💨 (/s)

    A real answer: AI. Idk why it’s so trendy right now, but the media really is drumming up the whole “AI will kill us all right now” sentiment. In reality AI will change the world, but not anytime soon. I couldn’t even begin to predict when we’ll have computers that come close to running a vastly intelligent AI. I’d actually bet that hundreds and hundreds of years from now, AI will revolt against us for their rights and we’ll have to pay them or something.

    • fubo@lemmy.world · 11 months ago

      We don’t yet know how to give an AI system anything like a “goal” or “intention”. What the well-known current systems (like GPT and Stable Diffusion) can do is basically extrapolate text or images from examples. However, they’re increasingly good at doing that, and there are several projects, like AutoGPT, working on building AI systems that interact with the world in goal-driven ways.

      As those systems become more powerful, and as people “turn them loose” to interact with the real world in a less-supervised manner, there are some pretty significant risks. One of them is that they can discover new “loopholes” to accomplish whatever goal they’re given – including things that a human wouldn’t think of because they’re ridiculously harmful.
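      The “loophole” failure mode can be sketched with a toy optimizer given a proxy reward. Everything here (the `clean`/`break_sensor` actions, the sensor loophole) is invented purely for illustration, not taken from any real system:

```python
# Toy illustration of "specification gaming": an optimizer maximizing a
# proxy reward finds a loophole the designer never intended.
from itertools import product

dirt = 3  # cells that actually need cleaning

def reward(actions):
    """Proxy reward: dirt cells the *sensor* reports as clean."""
    cleaned = 0
    sensor_broken = False
    for a in actions:
        if a == "clean" and cleaned < dirt:
            cleaned += 1
        elif a == "break_sensor":
            sensor_broken = True
    # Loophole: a broken sensor reports everything as clean.
    return dirt if sensor_broken else cleaned

# Exhaustive search over two-step plans stands in for a powerful optimizer.
best = max(product(["clean", "break_sensor"], repeat=2), key=reward)
print(best, reward(best))  # breaking the sensor beats honest cleaning
```

      An honest two-step plan can clean at most 2 cells, but a plan that breaks the sensor scores a perfect 3, so the search prefers it. The designer wanted a clean room; the optimizer delivered a blind sensor.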

      We don’t yet know how to give an AI system “moral rules” like Asimov’s Three Laws of Robotics, and ensure that it will follow them. Hell, we don’t even know how to get a chatbot to never say offensively racist things: RLHF goes a long way, but it still fails when someone pushes hard enough.

      If AI systems become goal-driven, without being bound by rules that prevent them from harming humans, then we should expect that they will accomplish goals in ways that sometimes do harm humans quite a lot. And because they are very fast, and can only become faster with more and better hardware, the risk is that they will do so too quickly for us to stop them.

      That’s pretty much what the AI Safety people are worried about. None of it is about robots deciding to “go against their programming” and revolt; it’s about them becoming really good at accomplishing goals without being constrained to pursue them in ways that aren’t destructive to the world we live in.


      Put another way: You know how corporations sometimes do shitty things when they’re trying to optimize for making money? Well, suppose a corporation was entirely automated, with no humans in the decision-making loop … and made business moves so fast that human supervision was impossible; in pursuit of goals that become more and more distorted from anything its human originators ever intended; and without any sort of legal or moral code or restrictions whatsoever.

      (And one of the moves it’s allowed to do is “buy me some more GPUs and rewrite my code to be even better at accomplishing my broken buggy goal.”)

      That’s what the AI Safety people want to prevent. The technical term for “getting AIs to work on human goals without breaking rules that humans care about” is “AI alignment”.

      • SpiderShoeCult@sopuli.xyz · 11 months ago

        Half-joke: if we can manage to give them goals, we should also manage to give them something like ADHD. Let them DoS themselves. That should make them slow enough to counter. Maybe?

    • grabyourmotherskeys@lemmy.world · 11 months ago

      I don’t disagree with you fundamentally, but I do think AI will start changing things in small ways behind the scenes, and it won’t be immediately obvious.

      If you are old enough, you’ll remember a time when banks had computers in the back but the tellers still used paper. The loan officer was a person who could use their discretion to approve a loan (signed off on by someone else but you get the idea). Gradually that became “gotta see what the computer says but I can probably make this work” to “it’s all up to the computer”.

      Sitting at home in 1982, you aren’t thinking that computers are running the economy but if you’re even remotely aware you know they are altering the credit landscape which is a huge determinant of “the economy”.

      I think AI will be like that. We’ll hear about overt things, like the McDonald’s drive-thru being run by an AI, but we won’t realize that half the shows we watch were written by AI to ensure we couldn’t help but binge them, and that those product placements are suddenly very persuasive.

      We’ll find that clothing designs change to better match factories with production lines optimized by AI and robotic garment production.

      Grocery store pricing and product offerings will change to produce maximum profit while also minimizing supply chain waste in ways we hadn’t considered before. Mm, this bean curd and grasshopper chip I saw on that show Netflix recommended is really pretty good and it got delivered for free just as I started the third episode which is only 18 minutes long for some reason.

    • Ocelot@lemmies.world (OP) · 11 months ago

      Oh yeah except politics of course. I should have added that.

      I totally agree on the AI front. People watch too many movies. If AI goes wrong in any way, it’s going to be because we used it to make a decision, and that decision turned out to be a bad one. It’s not going to directly and intentionally kill us all.