• eating3645@lemmy.world · 1 year ago

    Very difficult; it’s one of those “it’s a feature, not a bug” things.

    By design, our current LLMs hallucinate everything. The secret sauce these big companies add is getting them to hallucinate correct information.

    When the models get it right, we call it intelligence; when they get it wrong, we call it a hallucination.
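
    To make that concrete, here’s roughly what the generation loop looks like under the hood (a minimal sketch using an off-the-shelf GPT-2 via Hugging Face transformers, not any particular vendor’s stack). The same sampling step produces every token; nothing in the loop checks whether the completion is true:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok.encode("The capital of Australia is", return_tensors="pt")
    for _ in range(10):
        logits = model(ids).logits[:, -1, :]               # scores for every possible next token
        probs = torch.softmax(logits / 0.8, dim=-1)        # temperature just reshapes the distribution
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token -- no truth check anywhere
        ids = torch.cat([ids, next_id], dim=-1)
    print(tok.decode(ids[0]))
    ```

    Whether it ends up printing Canberra or Sydney, the code path is identical; “hallucination” is a label we apply to the output afterwards, not a different mode the model switches into.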

    To actually fix the problem, someone needs to discover a new architecture altogether. That’s entirely conceivable, but the timing is unpredictable, because it requires a fundamentally different approach.