I see Google’s deal with Reddit is going just great…

  • DarkThoughts@fedia.io
    6 months ago

    Honestly, no. What “AI” needs is for people to better understand how it actually works. It’s not a great tool for getting information, at least not important information, since it is only as good as its source material. But even if you only fed it scientific studies, you’d still end up with an LLM that might quote an outdated study, or a study done by some nefarious lobbying group to twist the results. And even if you somehow had 100% accurate material, there’s always the risk that it would hallucinate something based on those results: think of the training data as the ingredients in a recipe, with the LLM’s made-up response being the dish. The way LLMs work makes them basically impossible to rely on, and people need to finally understand that. If you want to use one for serious work, you always have to fact-check it.

    • Aux@lemmy.world
      6 months ago

      People need to realise what LLMs actually are. This is not AI, this is a user interface to a database. Instead of writing SQL queries and then parsing object output, you ask questions in your native language, they get converted into queries and then results from the database are converted back into human speech. That’s it, there’s no AI, there’s no magic.

      • Deborah@hachyderm.io
        6 months ago

        Sure, if by “database” you mean “tool that takes every cell in every table, calculates the likelihood of those cells appearing near each other, and then discards the data”. Which is a definition of “database” that stretches the word beyond meaning.

        Natural language inputs for data retrieval have existed for a very long time. They used to involve retrieving actual data, though.
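        Deborah’s description can be illustrated with a toy sketch (hypothetical, and vastly simpler than any real LLM, which uses learned neural weights rather than raw counts): a bigram “language model” keeps only next-word statistics derived from the text, discards the text itself, and then generates from those statistics; nothing is ever retrieved.

        ```python
        from collections import Counter, defaultdict

        # Toy bigram "model": keep only co-occurrence statistics, not the data.
        # This is an illustrative sketch, not how production LLMs work.
        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count how often each word follows each other word.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        del corpus  # the original "database" is gone; only statistics remain

        def most_likely_next(word):
            """Return the statistically most likely next word, or None."""
            if word not in follows:
                return None
            return follows[word].most_common(1)[0][0]

        print(most_likely_next("the"))  # "cat" (follows "the" twice in the corpus)
        ```

        The point of the `del corpus` line is the contrast with a database: once training is done, only the statistics survive, so the output is generated from likelihoods rather than looked up from stored records.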