Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better::The billionaire philanthropist, in an interview with the German newspaper Handelsblatt, shared his thoughts on artificial general intelligence, climate change, and the future scope of AI.

  • astronaut_sloth@mander.xyz

    Cool, Bill Gates has opinions. I think he’s being hasty, speaking out of turn, and only partially correct. From my understanding, the “big innovation” of GPT-4 was adding more parameters and scaling up compute. The core algorithms are generally agreed to be mostly the same as in earlier versions (not that we know for sure, since OpenAI has only released a technical report). Based on that, the real limit on this technology is compute and number of parameters (as boring as that is), and so he’s right that the algorithm design may have plateaued. However, we really don’t know what will happen if truly monster rigs with tens of trillions of parameters are trained on the entirety of human written knowledge (morality of that notwithstanding), and that’s where he’s wrong.
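    As a back-of-the-envelope illustration of why compute becomes the bottleneck, the commonly cited heuristic of training FLOPs ≈ 6 × parameters × tokens can be sketched in Python (the model and dataset sizes below are hypothetical, chosen just to show the scale):

```python
# Rough training-compute estimate using the common heuristic
# FLOPs ~= 6 * parameters * training tokens (an approximation, not exact).
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical scales: a ~1-trillion-parameter model vs. a
# "monster rig" 10-trillion-parameter model, both trained on 10T tokens.
for n_params in (1e12, 1e13):
    flops = training_flops(n_params, 1e13)
    print(f"{n_params:.0e} params -> {flops:.1e} training FLOPs")
```

    Even the smaller hypothetical run lands around 6e25 FLOPs, which is why “just add parameters” quickly turns into a hardware and cost problem rather than an algorithm problem.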

    • Vlyn@lemmy.zip

      You got it the wrong way around. We already have a ton of compute and what this kind of AI can do is pretty cool.

      But adding more compute power and parameters won’t solve the inherent problems.

      No matter what you do, it’s still just a text generator guessing the next best word. It doesn’t do real math or logic, it gets basic things wrong and hallucinates new fake facts.

      Sure, it will get slightly better still, but not much. You can throw a million times the power at it and it will still fuck up in just the same ways.
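      The “just guessing the next best word” point can be sketched with a toy bigram model (a deliberately crude stand-in for an LLM; the tiny corpus here is made up):

```python
import random

# Toy next-word generator: a bigram table built from a tiny corpus.
# It only picks a plausible next word -- there is no math or logic
# engine, which is why fluent output can still be factually wrong.
corpus = "the capital of canada is ottawa . the capital of france is paris .".split()

bigrams: dict = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(word: str, length: int = 5, seed: int = 0) -> str:
    random.seed(seed)
    out = [word]
    for _ in range(length):
        nxt = bigrams.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))
```

      Starting from “the”, it always produces something shaped like “the capital of … is …”, but whether the continuation is the correct pairing depends entirely on sampling, not on any check against facts.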

      • astronaut_sloth@mander.xyz

        I mean, that’s more-or-less what I said. We don’t know the theoretical limits of how good that text generation is when throwing more compute at it and adding parameters for the context window. Can it generate a whole book that is fairly convincing, write legal briefs off of the sum of human legal knowledge, etc.? Ultimately, the algorithm is the same, so like you said, the same problems persist, and the definition of “better” is wishy-washy.

        • Vlyn@lemmy.zip

          It will obviously get even better, but you’ll never be able to rely on it. Sure, 99.9% of that generated legal document will look perfect, till you overlook one sentence where the AI hallucinated. There is no fact checking in there, that’s the issue.

      • archomrade [he/him]@midwest.social

        This is short-sighted.

        The jump to GPT-3.5 was preceded by the same general misunderstanding (we’ve reached the limit of what generative pre-trained transformers can do, we’ve reached diminishing returns, etc.), and then a relatively small change (AFAIK it was a couple of additional transformer layers and a refinement of the training protocol) suddenly had it displaying behaviors none of the experts expected.

        Small changes will compound when factored over billions of nodes, that’s just how it goes. It’s just that nobody knows which changes will have that scale of impact, and what emergent qualities happen as a result.

        It’s OK to say “we don’t know why this works” and also “there’s no reason to expect anything more from this methodology”. But I wouldn’t write off further improvement as impossible.

        • grabyourmotherskeys@lemmy.world

          Another way to think of this is feedback from humans will refine results. If enough people tell it that Toronto is not the capital of Canada it will start biasing toward Ottawa, for example. I have a feeling this is behind the search engine roll out.
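          A minimal sketch of that frequency-based correction idea (purely illustrative — real systems fold feedback in via fine-tuning, and the `feedback` helper here is hypothetical):

```python
from collections import Counter

# Hypothetical feedback loop: tally user corrections so the most
# frequently confirmed answer wins. Illustrative only -- this is a
# counter, not how an LLM actually incorporates feedback.
votes = Counter()

def feedback(answer: str) -> None:
    votes[answer] += 1

def best_answer() -> str:
    return votes.most_common(1)[0][0]

feedback("Toronto")        # one early wrong guess
for _ in range(10):
    feedback("Ottawa")     # many users correct it

print(best_answer())       # the corrected answer now dominates
```

          The point is just that enough consistent corrections shift the bias toward the right answer; in a real model that shift would happen through additional training, not a lookup table.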

          • raptir@lemdro.id

            ChatGPT doesn’t learn like that though, does it? I thought it was “static” with its training data.

            • grabyourmotherskeys@lemmy.world

              I was speculating about how you can overcome hallucinations, etc., by supplying additional training data. Not specific to ChatGPT or even LLMs…

          • Toes♀@ani.social

            Toronto is Canadian New York. It wants to be the capital and probably should be but it doesn’t speak enough French.

    • OldWoodFrame@lemm.ee

      Yeah, and I think he may be referring to true AGI. It’s very possible LLMs just don’t become AGI; you need some extra juice we haven’t come up with yet, in addition to computational power no one can afford yet.

      • astronaut_sloth@mander.xyz

        Except that scaling alone won’t lead to AGI. It may generate better, more convincing text, but the core algorithm is the same. That “special juice” is almost certainly going to come from algorithmic development rather than just throwing more compute at the problem.

        • 0ops@lemm.ee

          See my reply to the person you replied to. I think you’re right that there will need to be more algorithmic development (like some awareness of its own confidence, so that the network can say “IDK” instead of hallucinating its best guess). Fundamentally, though, LLMs don’t have the same dimensions of awareness that a person does, and I think that’s the main bottleneck to human-like understanding.
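          That “say IDK” idea can be sketched as a confidence threshold over softmax probabilities (an illustrative toy; real abstention and calibration are much harder problems):

```python
import math

# Sketch of "say IDK instead of guessing": softmax the model's raw
# scores and abstain when the top probability is below a threshold.
# The scores and threshold below are made up for illustration.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def answer_or_idk(options, logits, threshold=0.6):
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return options[top] if probs[top] >= threshold else "IDK"

print(answer_or_idk(["Ottawa", "Toronto"], [3.0, 0.5]))  # confident -> answers
print(answer_or_idk(["Ottawa", "Toronto"], [1.0, 0.9]))  # near-tie -> IDK
```

          Even this toy shows the catch: a model can be highly confident and still wrong, so thresholding alone doesn’t solve hallucination.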

      • 0ops@lemm.ee

        My hypothesis is that that “extra juice” is going to be some kind of body: more senses than text input, and more ways to manipulate itself and the environment than text output. Basically, right now LLMs can kind of understand things in terms of text descriptions, but they will never be able to understand the way a human can until they have all of the senses (and arguably the physical capabilities) that a human does. Thought experiment: presumably you “understand” your dog — can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow, and time is a sense too. EDIT: before someone says it, as for feelings I’m not really sure, I’m not a biology guy. But my guess is we sense our own hormones as well.

  • ShittyBeatlesFCPres@lemmy.world

    I’m not sure I’d say it’s plateaued today but I definitely think machine learning is going to hit a wall soon. Some tech keeps improving until physical limits stop progress but I see generative AI as being more like self-driving cars where the “easy” parts end up solved but the last 10% is insanely hard.

    There’s also the economic reality of scaling. Maybe the “hard” problems could, in theory, be easily solved with enough compute power. We’ll eventually solve those problems but it’s going to be on Nvidia’s timeline, not OpenAI’s.

    • nossaquesapao@lemmy.eco.br

      Generative AI is a bit different from self-driving cars in the sense that it’s tolerant of failures. This may give it more room for improvement compared to other applications.

  • Pxtl@lemmy.ca

    I hope so. They’ve already got scary implications for the creative parts of the economy.

    That said, we’re in the Cambrian explosion of the tech. As it plateaus, the next step will be enhanced tooling and convenience around it: better inputs than just text, more applications in new spaces, etc.

  • The Menemen!@lemmy.world

    Maybe, but I am sure the tools the AIs can use will improve, making the AI’s job easier and thus the AI more efficient. I hope he is right, tbh.

    Eww, as a long-time Linux user I need to take a shower now. I feel dirty.

    • fruitycoder@sh.itjust.works

      The next big steps coming right now are AI trained on generated data; agents that act more autonomously (rather than waiting for a prompt — for example, searching the web and acting on the results to better complete a goal); and better-indexed data, so generated output can be informed by and cite sources in the moment.

  • Red_October@lemmy.world

    And the Wright Brothers said heavier than air flight would only ever be an amusement for the rich, and never commercially viable.

    Even taking Gates’ qualifications at face value doesn’t mean he’s actually right.

    • dustyData@lemmy.world

      Source on that quote from the Wright brothers? Because they never said that as far as I’m aware.

      • RememberTheApollo@lemmy.world

        They didn’t, AFAIK. It was a NYT article that quoted someone who made a similar prediction:

        Once the Wright Brothers proved flight was possible, some assumed it was just a pointless rich play thing. Famed astronomer William H. Pickering said, “The expense would be prohibitive to any but the capitalist who could use his own yacht.”

    • Imgonnatrythis@sh.itjust.works

      You didn’t read about the Wright brothers or this article, did you? Gates isn’t at all damning AI tech. All he said is that GPT-5 is unlikely to be very different from GPT-4. He’s probably correct. A next-best-word algorithm can only go so far; that’s only part of how language and cognition work. Until some sort of adjunct algorithm gets tacked on, I don’t think we will see big leaps either.

    • Red0ctober@lemmy.world

      He just has money, which gives him and too many others the idea that he has expertise.

      • ColeSloth@discuss.tchncs.de

        He has money because he was a damn fine computer programmer, had some great ideas, and got pretty good at selling and monopolizing his product. He doesn’t “just have money”. He was skilled and intelligent. He may or may not be wrong about GPT, but he has a hell of a lot more knowledge of the subject and insight into GPT’s inner workings than probably anyone else on Lemmy.

  • Tranquilizer@lemmy.world

    If it comes from Billyboy, he could be right in the sense that it won’t get any better for the filthy commoners