• meseek #2982@lemmy.ca
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    1 month ago

    As comical and memey as this is, it does illustrate the massive flaw in AI today: it doesn’t actually understand context or what it’s talking about outside of a folder of info on the topic. It doesn’t know what a guitar is, so anything it recommends suffers from being sourced in a void, devoid of true meaning.

    • Annoyed_🦀 🏅@monyet.cc

      It also doesn’t know what is true and what is BS unless it learns from curated sources. Truth needs to be verified and backed by fact; if an AI learns from unverified or unverifiable sources, it will confidently repeat whatever it learned from them, just like an average redditor. That’s what makes it dangerous, as all these millionaires/billionaires keep hyping up the tech as something it isn’t.

    • Carighan Maconar@lemmy.world

      It’s called the Chinese Room, and it’s exactly what “AI” is. It recombines pieces of data into “answers” to a “question”, despite not understanding the question, the answer it gives, or the pieces it uses.

      It has a very, very complex chart of which elements, in what combinations, need to be in an answer to a question containing which elements in what combinations, but that’s all it does. It just sticks word barf together based on learned patterns, with no understanding of words, language, context, or meaning.
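      The pattern-matching described above can be sketched as a toy bigram table (my own illustration with an invented corpus; real LLMs are transformers over tokens, not lookup tables, but the “predict the next word from learned co-occurrence” framing is the same):

```python
from collections import defaultdict, Counter

# Toy "training data" (invented for illustration).
corpus = "the guitar has six strings and the guitar has a wooden body".split()

# The whole "model" is a table of which word follows which, and how often.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, n=5):
    """Emit the most common continuation, word by word, with no
    notion of what any of the words mean."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the guitar has six strings and"
```

      The output is fluent-looking precisely because it mirrors the statistics of the input, not because anything in the table knows what a guitar is.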

      • Valmond@lemmy.world

        Yeah, but the argument was about consciousness, and a really bad one IMO.

        I mean, we are probably not more advanced than computers ourselves, so the claim that consciousness is needed to understand context seems very shaky.

        • kibiz0r@midwest.social

          I think it’s kind of strange.

          Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.

          Why don’t we dismiss quantification because it can’t explain consciousness?

    • pelespirit@sh.itjust.works

      anything it recommends suffers from being sourced in a void, devoid of true meaning.

      You just described most of reddit, anything Meta, and what most reviews are like.

    • milicent_bystandr@lemm.ee

      The other massive flaw it demonstrates in AI today is that it’s popular to dunk on it, so people make up lies like this meme and the internet laps them up.

      Not saying AI search isn’t rubbish, but I understand this one is faked, and the tweeter who shared it issued an apology. And perhaps the glue one too.

      • dependencyinjection@discuss.tchncs.de

        There are cases of AI using NotTheOnion as a source for its answer.

        It doesn’t understand context. That’s not to say it’s completely useless; hell, I’m a software developer and our company uses Copilot in Visual Studio Professional, and it’s amazing.

        People can criticise its flaws without doing so just because it’s popular to dunk on it. Don’t shill for AI; actually take a critical approach to its pros and cons.

        • milicent_bystandr@lemm.ee

          I think people do love to dunk on it. It’s the fashion, and it’s normal human behaviour to take something popular, especially something popular with people you don’t like (e.g. in this case tech companies), and call it stupid. It makes you feel superior and better.

          There are definitely documented cases of LLM stupidity: I enjoyed one linked from a comment, where Meta’s(?) LLM trained specifically off academic papers was happy to report on the largest nuclear reactor made of cheese.

          But any ‘news’ dumping on AI is popular at the moment, and fake criticism not only makes it harder to see a true picture of how good/bad the technology is doing now, but also muddies the water for people believing criticism later - maybe even helping the shills.

    • kromem@lemmy.world

      This image was faked. Check the post update.

      Turns out that even for humans, knowing what’s true or not on the Internet isn’t so simple.

      • meseek #2982@lemmy.ca

        Yes, we know. We aren’t talking about the authenticity of the meme; we are talking about the fundamental problem with “AI”.

        • kromem@lemmy.world

          You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

          Much like how people were so sure that getting theory-of-mind variations with transparent boxes wrong was an ‘AI’ problem, until researchers finally gave those problems to humans and half of them got them wrong too.

          We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

          One of the issues with AI is regression to the mean from the training data, plus the limited effectiveness of fine-tuning to bias it away, so whenever you see a behavior in AI that’s also present in the training set, it becomes murky just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

          There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

          While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.

    • CanadaPlus@lemmy.sdf.org

      Does anyone really know what a guitar is, completely? Like, I don’t know how they’re made, in detail, or what makes them sound good. I know saws and wide-bandwidth harmonics are respectively involved, but ChatGPT does too.

      When it comes to AI, bold philosophical claims about knowledge stated as fact are kind of a pet peeve of mine.

      • FruitLips@lemmy.ml

        Feels reminiscent of stealing an Aboriginal, dressing them in formal attire then laughing derisively when the ‘savage’ can’t gracefully handle a fork. What is a brain, if not a computer?

        • CanadaPlus@lemmy.sdf.org

          Yeah, that’s spicier wording than I’d prefer, but there is a sense in which they’d never apply these high bars for understanding to another biological creature.

          I wouldn’t mind considering the viewpoint on its own, but they put it like it’s an empirical fact rather than a (very controversial) interpretation.

      • Zron@lemmy.world

        You’re the one who made this philosophical.

        I don’t need to know the details of engine timing, displacement, and mechanical linkages to look at a Honda civic and say “that’s a car, people use them to get from one place to another. They can be expensive to maintain and fuel, but in my country are basically required due to poor urban planning and no public transportation”

        ChatGPT doesn’t know any of that about the car. All it “knows” is that when humans talked about cars, they brought up things like wheels, motors or engines, and transporting people. So when it generates its reply, those words are picked because they strongly associate with the word car in its training data.

        All ChatGPT is, is really fancy predictive text. You feed it an input and it generates an output that sounds like something a human would write based on the prompt. It has no awareness of the topics it’s talking about. It has no capacity to think or ponder the questions you ask it. It’s a fancy lightbulb: instead of light, it outputs words. You flick the switch, words come out, you walk away, and it just sits there waiting for the next person to flick the switch.

        • CanadaPlus@lemmy.sdf.org

          No man, what you’re saying is fundamentally philosophical. You didn’t say anything about the Chinese room or epistemology, but those are the things you’re implicitly talking about.

          You might as well say humans are fancy predictive muscle movement. Sight, sound and touch come in, movement comes out, tuned by natural selection. You’d have about as much of a scientific leg to stand on. I mean, it’s not wrong, but it is one opinion on the nature of knowledge and consciousness among many.

          • Zron@lemmy.world

            I didn’t bring up Chinese rooms because it doesn’t matter.

            We know how chatGPT works on the inside. It’s not a Chinese room. Attributing intent or understanding is anthropomorphizing a machine.

            You can make a basic robot that turns on its wheels when a light sensor detects a certain amount of light. The robot will look like it flees when you shine a light at it. But it does not have any capacity to know what light is or why it should flee light. It will have behavior nearly identical to a cockroach, but have no reason for acting like a cockroach.

            A cockroach can adapt its behavior based on its environment, the hypothetical robot can not.

            ChatGPT is much like this robot, it has no capacity to adapt in real time or learn.
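            The hard-wired robot described above fits in a few lines (the threshold and action names are invented for illustration). The point is that the mapping is fixed: nothing in it ever updates from experience, which is the sense in which it differs from the cockroach:

```python
LIGHT_THRESHOLD = 500  # arbitrary sensor value, purely illustrative

def robot_step(light_reading: int) -> str:
    """One hard-wired stimulus-response rule: the same input always
    produces the same output, and nothing here learns or adapts."""
    return "drive_away" if light_reading > LIGHT_THRESHOLD else "idle"

print(robot_step(900))  # -> drive_away
print(robot_step(100))  # -> idle
```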

      • CasualPenguin@reddthat.com

        It sounds like you could do with reading up on LLMs in order to know the difference between what it does and what you’re discussing.

  • thezeesystem@lemmy.world

    Idk, seems more helpful than the suicide hotline number. I called them many times only for them to tell me the same generic information, and they often hung up on me if I started to cry.

    • CanadaPlus@lemmy.sdf.org

      Yeah, when Google starts trying to manipulate the meaning of results in its favour, instead of just traffic, things will be at a whole other level of scary.

  • lolola@lemmy.blahaj.zone

    The thing these AI goons need to realize is that we don’t need a robot that can magically summarize everything it reads. We need a robot that can magically read everything, sort out the garbage, and summarize the good parts.

  • Caspase8@aussie.zone

    How is everyone getting this AI overview? All I get when I Google something is the usual stuff.

    • justme@lemmy.dbzer0.com

      I haven’t used Google for ages, but I remember that big changes on, e.g., Facebook roll out gradually, so not all users get them at the same time.

      • Klear@lemmy.world

        You can also get it by clicking inspect element and writing whatever ragebait you can think of in there.

    • The Octonaut@mander.xyz

      They don’t need to scrape Lemmy. They just need a federated instance and then they have literally everything you post delivered to them as part of the way Lemmy is designed.

      Please understand literally nothing on Lemmy is private.

        • borari@lemmy.dbzer0.com

          Put a public PGP key in your profile bio; then you can actually send truly end-to-end encrypted messages over insecure public channels.

          A very similar conversation led to a joke chain of PGP-encrypted replies between me and some other rando on Reddit a few years ago. We were both banned.
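          The flow described above can be sketched with GnuPG. Everything here is illustrative: the keyring is throwaway, the address is an example, and the passphrase is deliberately empty for the demo; a real key would be protected:

```shell
# Throwaway keyring and example address; a sketch of the flow, not a vetted recipe.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Stand in for the other rando: generate their keypair (no passphrase, demo only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'rando@example.org' default default never

# The *public* half is what would go in a profile bio:
gpg --armor --export rando@example.org > bio_pubkey.asc

# Anyone who grabs that key can post a reply only the rando can read:
echo 'meet at the usual place' \
  | gpg --quiet --batch --encrypt --armor --trust-model always -r rando@example.org \
  > reply.asc

# Only the private-key holder recovers the plaintext:
gpg --quiet --batch --pinentry-mode loopback --passphrase '' --decrypt reply.asc
```

          The `reply.asc` ciphertext is ASCII-armored, so it can be pasted straight into a public comment thread without revealing anything to the instance admins.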