• gwindli@lemy.lol · 6 months ago

    The problem is that the AI misrepresents the results it's summarizing. It presents things that were jokes as fact, without showing that information in context. I guess if you don't think critically about the information you consume this would be handy. I feel like AI abstracts both good and bad info in a way that makes discerning which is which more difficult, and whether you find that convenient or not, that's just bad for society.

    • Instigate@aussie.zone · 6 months ago

      Therein lies the issue with using LLMs to answer broad or vague questions: they're not capable of assessing the quality or value of the information they hold, let alone whether it is objectively true or false, and that's before getting into issues of hallucination. For extremely specific questions, where they have fewer but likely more accurate data to work with, they tend to perform better. Training LLMs on data whose value and quality haven't been independently verified will always lead to the results we're seeing now.