• OutlierBlue@lemmy.ca · 4 months ago

    > The AI revolution already happened. We’ve seen what it can do, and it won’t expand much more.

    That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more. Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.

    • HackyHorse3000@lemmy.world · 4 months ago

      That’s the thing though: that’s not comparable, and it misses the point entirely. “AI” in this context, and in current conversations about it, specifically means LLMs. They will not improve to the point of general intelligence, because that is not how they work. Hallucinations are inevitable with the current architectures and methods, and the models lack an inherent understanding of concepts in general. It’s the same reason they can’t do math or logic problems that aren’t common in the training set. It’s not intelligence. Modern computers are built on the same principles and architectures as those calculators were, just iterated upon extensively. No such leap is possible with large language models. They are entirely reliant on a finite pool of data that they try to mimic as effectively as possible; they are not learning or understanding concepts the way “Full-AI” would need to in order to be reliable or able to generate new ideas.
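      (As a toy illustration of that “finite pool of data” point: the sketch below is a bigram Markov chain, vastly simpler than an LLM and with an invented two-sentence corpus, but it shows how pure next-token mimicry produces fluent-looking text with nothing understood or checked behind it.)

      ```python
      import random
      from collections import defaultdict

      # "Train" on a tiny, finite pool of text: for each word,
      # record every word that ever followed it in the corpus.
      corpus = "the cat sat on the mat the dog sat on the rug".split()
      table = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          table[prev].append(nxt)

      # "Generate" by repeatedly sampling a plausible next word. The output
      # can only recombine what was seen; nothing is understood or verified.
      word, output = "the", ["the"]
      for _ in range(8):
          choices = table.get(word)
          if not choices:          # dead end in the finite pool
              break
          word = random.choice(choices)
          output.append(word)
      print(" ".join(output))      # e.g. "the cat sat on the rug"
      ```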

      • chrash0@lemmy.world · 4 months ago

        it’s super weird that people think LLMs are so fundamentally different from neural networks, the underlying technology. neural network architectures are constantly improving, and LLMs are just the product of a ton of research that emerged once the transformer architecture was discovered. what LLMs have shown us is that we’re definitely on the right track using neural networks to solve a wide range of problems classified as “AI”
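        (To make the “LLMs are just neural networks” point concrete: below is a minimal single-head self-attention layer, the core operation of the transformer architecture. It is a simplified sketch with arbitrary dimensions and no masking, but everything in it is ordinary neural-network machinery: linear layers and a softmax.)

        ```python
        import torch
        import torch.nn as nn

        class SelfAttention(nn.Module):
            """Single-head self-attention: nothing but linear layers and a softmax."""
            def __init__(self, dim: int):
                super().__init__()
                self.q = nn.Linear(dim, dim)  # query projection
                self.k = nn.Linear(dim, dim)  # key projection
                self.v = nn.Linear(dim, dim)  # value projection

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                q, k, v = self.q(x), self.k(x), self.v(x)
                # attention weights: how much each token attends to every other token
                scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
                return scores.softmax(dim=-1) @ v  # weighted mix of value vectors

        x = torch.randn(1, 8, 64)           # a batch of 8 tokens, 64-dim embeddings
        print(SelfAttention(64)(x).shape)   # torch.Size([1, 8, 64])
        ```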

        • HackyHorse3000@lemmy.world · 4 months ago

          I think the main problem is applying LLMs outside the domain of “complete this sentence”. It’s fine for what it is, and trained on huge datasets it obviously appears impressive, but it doesn’t know whether it’s right or wrong, and the evaluation metrics are different. In most traditional applications of neural networks, you have datasets with right and wrong answers; that’s not how these are trained, because there is no “right” answer to “tell me a joke.” So the training has to be based on what would most likely fill in the blank. That could be an actual joke, a bad joke, or a completely different topic; the training data makes no distinction. The biases, the incorrect answers, all the faults of this massive dataset are inherent in the model, and there’s no fixing that. LLMs are fundamentally different in their application, evaluation, and training methods from other neural networks that are actually effective at what they do, like image processing and identification. The scope of what they’re trying to do with a finite dataset is unrealistic and entirely unconstrained, compared with more “traditional” neural networks, which are very narrow in scope exactly because of this issue.
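          (The training difference described above, sketched in PyTorch with made-up shapes: in supervised learning the loss compares outputs against known-correct labels, while in language modeling the “label” at each position is just whatever token came next in the training text, a likely continuation rather than a verified answer.)

          ```python
          import torch
          import torch.nn.functional as F

          # Supervised setup: each input has one known-correct label,
          # so the loss directly measures right vs. wrong.
          logits = torch.randn(4, 10)            # 4 inputs, 10 classes
          labels = torch.tensor([3, 1, 0, 7])    # ground-truth answers
          supervised_loss = F.cross_entropy(logits, labels)

          # Language-model setup: the "label" at each position is simply the
          # token that came next in the training text. A good joke, a bad joke,
          # or a topic change are all equally "correct" if they appeared in the
          # data; there is no notion of a factually right answer, only a
          # statistically likely continuation.
          vocab = 50_000
          token_logits = torch.randn(4, 16, vocab)        # 4 sequences, 16 positions
          next_tokens = torch.randint(0, vocab, (4, 16))  # observed continuations
          lm_loss = F.cross_entropy(token_logits.reshape(-1, vocab),
                                    next_tokens.reshape(-1))
          print(supervised_loss.item(), lm_loss.item())
          ```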

    • gedaliyah@lemmy.world · 4 months ago

      Oh, I’m not saying that there won’t one day come a better technology that can do a lot more. What I’m saying is that the present technology will never do much more than it is already doing. This is not an issue of refining the technology for more applications. It’s a matter of developing a completely new type of technology.

      In generating text, summarizing articles and books, writing short portions of code to assist humans, creating simple fan art and throwaway images like avatars and the stock photos at the top of articles, perhaps creating short animations, and improving pattern recognition for things like speech and facial recognition: in all of these areas, AI was very rapidly revolutionary.

      Generative AI will not become capable of doing things that it’s not already doing. Most of what it’s replacing are just worse computer programs. Some new technology will undoubtedly be as revolutionary as computers were on top of basic function calculators. People are developing quantum computers and mapping the precise functions of brain cells. If you want, you can download a completely mapped actual nematode brain right now. You can buy brain cells online, even human brain cells, and put them into computers. Maybe they can even run Doom. I have no idea what the next computing revolution will be capable of, but this one has mostly run its course. It has given us some very incredible tools in a very narrow scope, and those tools will continue to improve incrementally, but there will be no additional revolution.

    • turmacar@lemmy.world · 4 months ago (edited)

      Sure.

      GPT-4 is not that. Neither will GPT-5 be that. They are language models that marketing is calling AI. They have a very specific use case, and it’s not something that can replace any work or workers that require any level of traceability or accountability. It’s just “the thing the machine said”.

      Marketing latched onto “AI” because “blockchain” and “cloud” and “algorithmic” had gotten stale, and media and CEOs went nuts. Samsung is now producing an “AI” vacuum that adjusts suction between hardwood and carpet. That’s not new technology. It’s not even a new way of doing that technology. It’s just jumping on the bandwagon.

      • aesthelete@lemmy.world · 4 months ago

        > Marketing latched onto “AI” because “blockchain” and “cloud” and “algorithmic” had gotten stale, and media and CEOs went nuts.

        Notably, this also coincided with the first higher interest rate environment in the broader economy in over a decade.

    • ChickenLadyLovesLife@lemmy.world · 4 months ago

      > That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more.

      “Who would ever need more than 640K of RAM?” -Bill Gates

    • raspberriesareyummy@lemmy.world · 4 months ago

      > That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more. Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.

      Go back to school; hopefully your next statement won’t sound as dumb.