This is something I already mentioned previously. LLMs have no way of fact checking, no measure of truth or falsity built in. During training, the model probably accepts every piece of text as true. This is very different from how our minds work. When faced with a piece of text we have many ways of dealing with it, ranging from accepting it as is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So, yeah, what ChatGPT outputs is probably bullshit.
The obvious solution is to train ChatGPT on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive, and suddenly the fast-moving world of AI would screech to almost a halt, which would be unacceptable to the investors.
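Just to make that idea concrete, here is a minimal sketch of what truth-labelled training data could look like; the `truth_score` field and the weighting scheme are purely hypothetical, not something any existing LLM training pipeline actually uses.

```python
import json

# Hypothetical truth-labelled training records (illustrative only).
records = [
    {"text": "Water boils at 100 °C at sea level.", "truth_score": 0.98},
    {"text": "The moon is made of green cheese.", "truth_score": 0.01},
]

def example_weight(record, min_score=0.5):
    """Toy scheme: down-weight or drop low-truth text before training."""
    return record["truth_score"] if record["truth_score"] >= min_score else 0.0

for r in records:
    print(json.dumps(r), "-> training weight:", example_weight(r))
```

The point is only that every piece of text would need a label like this, which is exactly the slow and expensive part.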
It’s even more than just “accepting everything as true”: the machines have no concept of truth. The machine doesn’t think. It’s a combination of three processes: an algorithm that predicts the next word, an algorithm that checks grammar and sentence structure, and at least one algorithm that helps police the other two for problematic statements.
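As a rough illustration of the first of those processes, next-word prediction, here is a minimal sketch using a small open model via the Hugging Face `transformers` library; the model choice and the greedy decoding loop are just an example, not a description of how ChatGPT is actually built.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open model, used purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy next-word prediction: repeatedly pick the single most likely token.
for _ in range(5):
    logits = model(input_ids).logits      # scores for every vocabulary token
    next_id = logits[0, -1].argmax()      # most likely continuation
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The model picks whatever is statistically likely; nothing checks that it is true.
```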
Clearly the problem is with that last step, but the solution would be a human or a general intelligence, meaning the current models in use will never progress beyond this point.
Children’s minds work similarly.
Why do you even think that? Children don’t ask questions? Don’t try to find answers?
Sure they do. But they also trust adults a lot. Children try to find answers only because they have stimuli other than humans telling them things; if that stimulus is missing, they will believe the adult. The environments that AIs “grow up” in are different, but from a mental perspective the two are very similar.
How many times have you heard the story of someone hearing something false from a family member and holding it close to their heart for years?
Your statement that there is no way of fact checking is not 100% correct, as developers have found ways to ground LLMs, e.g., by prepending context pulled from “real time” sources of truth (e.g., search engines). This data is then incorporated into the prompt as context. Obviously this is kind of cheating and not baked into the LLM itself, but it can be pretty accurate for a lot of use cases.
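A minimal sketch of that kind of grounding, assuming a hypothetical `web_search()` helper and an OpenAI-style chat endpoint; the point is only that retrieved text gets pasted into the prompt, not that any particular product works exactly this way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def web_search(query: str) -> list[str]:
    """Hypothetical helper: call a search API and return text snippets."""
    raise NotImplementedError("plug in your search engine of choice")

def grounded_answer(question: str) -> str:
    # 1. Pull "real time" context from an external source of truth.
    snippets = web_search(question)
    context = "\n".join(snippets)

    # 2. Prepend the retrieved context to the prompt; the LLM itself is unchanged.
    prompt = (
        "Answer using ONLY the context below. If the context does not contain "
        f"the answer, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is just an example
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The “cheating” is visible here: the model never verifies anything, it just sees the retrieved snippets as part of its input.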
Is using authoritative sources foolproof, though? For example, is everything written on Wikipedia factually correct? I don’t believe so unless I actually check it. Also, what about Reddit or Stack Overflow? Can they be considered factually correct? To some extent, yes, but not completely. That is why most of these LLMs give such arbitrary answers: they extrapolate from information they have no way of knowing or understanding.
I don’t quite understand what you mean by “extrapolate from information”. LLMs have no model of what information or truth is. However, factual information can be passed into the context, the way Bing does it.