- cross-posted to:
- technology@lemmy.world
Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models.
Anyone who knows what LLMs are knew this from the start. They are good at creating text, but if you need the text to be true, they’re probably not the best tool for the job.
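This kind of instability is easy to check yourself. A minimal sketch of a paraphrase-sensitivity test, assuming the `openai` Python package with an `OPENAI_API_KEY` set in the environment; the model name and the example prompts are placeholders, not anything from the article:

```python
# Minimal paraphrase-sensitivity check: ask the same factual question
# two ways and see whether the answers agree.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two wordings of the same question (illustrative examples only).
paraphrases = [
    "What year did the Berlin Wall fall?",
    "In which year was the Berlin Wall brought down?",
]

answers = []
for prompt in paraphrases:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # suppress sampling noise so differences come from wording alone
    )
    answers.append(resp.choices[0].message.content.strip())

for prompt, answer in zip(paraphrases, answers):
    print(f"{prompt!r} -> {answer!r}")

# A reliable model would answer semantically identical prompts consistently;
# divergence here is exactly the instability the testers describe.
print("consistent:", len(set(answers)) == 1)
```

With temperature pinned to 0, any disagreement between the two outputs comes from the wording of the prompt rather than random sampling, which is what makes this a fair (if tiny) reliability probe.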