• YurkshireLad@lemmy.ca
    6 months ago

    At what point will companies quietly and secretly start removing LLMs from their apps because they finally admit they suck? 😁

    • FaceDeer@fedia.io
      6 months ago

      But it doesn’t suck. The AI is summarizing the search results it’s getting. If the search results say things that are wrong, the summary will also be wrong. Do you want the AI to somehow magically be the arbiter of objective reality? How would it do that?

      • Carnelian@lemmy.world
        6 months ago

        Personally I want the AI to simply not be there lol. What is even the point of it? You have to completely fact check it anyway by using the exact same search techniques as before.

        It’s a solution that doesn’t work, put in place to solve a problem that nobody has. So yes it does suck lol

        • FaceDeer@fedia.io
          6 months ago

          It’s a solution that doesn’t work, put in place to solve a problem that nobody has.

          If that’s really true then it’ll go away.

          Have you considered that maybe not everyone has the same problems you do, and some people actually find this sort of thing handy?

          • gwindli@lemy.lol
            6 months ago

            the problem is that the AI misrepresents the results it's summarizing. it presents things that were jokes as fact, without showing that information in context. i guess if you don't think critically about the information you consume, this would be handy. i feel like AI is abstracting both good and bad info in a way that makes discerning which is which more difficult, and whether you find that convenient or not, it's just bad for society.

            • Instigate@aussie.zone
              6 months ago

              Therein lies the issue with using LLMs to answer broad or vague questions: they're not capable of assessing the quality or value of the information they hold, let alone whether it is objectively true or false, and that's before getting into issues relating to hallucination. For extremely specific questions, where they have fewer but likely more accurate data to work with, they tend to perform better. Training LLMs on data whose value and quality haven't been independently tested will always lead to the results we're seeing now.

          • Maddier1993@programming.dev
            6 months ago

            Whether it goes away depends on a lot more things happening in the background, like VCs cutting off AI funding. Your assumption that supply will simply follow demand lacks nuance; for one thing, humans are not rational consumers.

    • Balinares@pawb.social
      6 months ago

      When investors shut off the AI money faucet. No sooner, no later.

      By god, may that happen soon.