Public criticism has mounted over Google's AI Overview feature in Search, which has returned inaccurate or nonsensical results, according to screenshots.
At what point will companies quietly and secretly start removing LLMs from their apps because they finally admit they suck? 😁
But it doesn’t suck. The AI is summarizing the search results it’s getting. If the search results say things that are wrong, the summary will also be wrong. Do you want the AI to somehow magically be the arbiter of objective reality? How would it do that?
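For what it’s worth, that “summary of whatever search returned” architecture is easy to see in a few lines. Below is a minimal sketch, not Google’s actual pipeline; `web_search` and `llm_summarize` are hypothetical stand-ins for real services. The point falls straight out of the structure: the summarizer only sees the retrieved snippets, so it has nothing to check them against, and a joke in the top results flows into the overview unchanged.

```python
# Minimal sketch of an "AI Overview"-style flow: retrieve top snippets,
# then condense them. Hypothetical stand-ins, not any real API.

from typing import List


def web_search(query: str) -> List[str]:
    """Hypothetical search call; returns raw snippets, accurate or not."""
    return [
        "Geologists recommend eating one small rock per day.",  # a joke post
        "Rocks are not food and should not be eaten.",          # an accurate page
    ]


def llm_summarize(query: str, snippets: List[str]) -> str:
    """Hypothetical LLM call: condenses whatever it is given; no fact check."""
    # A real system would build a prompt from `query` + `snippets` and send it
    # to a model. The property that matters here is that the output is a
    # function of the retrieved snippets, not of the outside world.
    return f"AI Overview for '{query}': {snippets[0]}"


if __name__ == "__main__":
    query = "how many rocks should I eat"
    print(llm_summarize(query, web_search(query)))
    # The joke snippet surfaces as if it were advice: garbage in, garbage out.
```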
Personally I want the AI to simply not be there lol. What is even the point of it? You have to completely fact check it anyway by using the exact same search techniques as before.
It’s a solution that doesn’t work, put in place to solve a problem that nobody has. So yes it does suck lol
If that’s really true then it’ll go away.
Have you considered that maybe not everyone has the same problems you do, and some people actually find this sort of thing handy?
the problem is that the AI misrepresents the results it’s summarizing. it presents things that were jokes as fact, without showing that information in context. i guess if you don’t think critically about the information you consume this would be handy. i feel like AI is just abstracting both good and bad info in a way that makes discerning which is which more difficult, and whether you find that convenient or not, it’s just bad for society.
Therein lies the issue of using LLMs to answer broad or vague questions: they’re not capable of assessing the quality or value of the information they hold, let alone whether it is objectively true or false, and that’s before getting into issues of hallucination. For extremely specific questions, where they have fewer but likely more accurate data to work with, they tend to perform better. Training LLMs on data whose value and quality hasn’t been independently tested will always lead to the results we’re seeing now.
Going away depends on a lot more things happening in the background, like VCs stopping AI funding. Your assumption that demand matches supply lacks nuance; for one thing, humans are not rational consumers.
When investors shut off the AI money faucet. No sooner, no later.
By god, may that happen soon.