Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

  • webghost0101@sopuli.xyz
    2 months ago

    This is true if you're describing a pure LLM, like GPT-3.

    However, systems like Claude, GPT-4o, and o1 are far from a single LLM; they are a blend of LLMs, other machine-learning components (like image recognition), and some old-fashioned code.

    OP does ask about "modern LLMs", so technically you are right, but I believe they meant the more advanced "products".

    • fartsparkles@sh.itjust.works
      2 months ago

      None of which is intelligence, and all of which are geared toward predicting the next token.

      All these models rely entirely on data and structure for inference and prediction. They appear intelligent, but they are not.
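      For anyone unfamiliar with what "predicting the next token" actually means mechanically, here's a toy sketch. This is a deliberately tiny bigram counter, not how a real transformer works, but the core loop (given context, emit the statistically likely continuation) is the same idea:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """For each token, count which token follows it and how often."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Greedily pick the most frequent follower seen in training."""
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice, "mat" once)
```

      A real LLM replaces the count table with a learned neural network over huge contexts, but the output is still a distribution over next tokens.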

      • webghost0101@sopuli.xyz
        2 months ago

        How is good old-fashioned code that compares outputs against a database of factual knowledge "predicting the next token" to you? Or reinforcement learning and token rewards baked into models?
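        To make that concrete: a verification step like this is just lookup-and-compare, no token prediction involved. This is a hypothetical sketch (the fact store, keys, and function name are made up for illustration), assuming the "database" is a simple mapping from claim to known value:

```python
# Hypothetical post-processing step: check a generated claim against
# a store of known facts before surfacing it to the user.
FACTS = {"capital_of_france": "Paris", "boiling_point_c": "100"}

def verify_claim(key, generated_value):
    """Plain old code: look the claim up and compare. No prediction."""
    known = FACTS.get(key)
    if known is None:
        return "unverified"  # nothing on record to check against
    return "pass" if generated_value == known else "fail"

print(verify_claim("capital_of_france", "Paris"))  # pass
print(verify_claim("capital_of_france", "Lyon"))   # fail
```

        Real systems use retrieval over much larger stores, but the point stands: the checking layer is deterministic code wrapped around the model, not next-token prediction.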

        I can tell you have not actually worked with professional AI or looked at the research papers.

        Yes, none of it is "intelligent", but I would counter that neither are human beings; we don't even know how to define intelligence.

    • justOnePersistentKbinPlease@fedia.io
      2 months ago

      No, unfortunately you are wrong.

      GPT-4 is a better version of GPT-3.

      The brand-new one that is allegedly "unhackable" just has a role hierarchy providing rules, and that hasn't been fully tested in the wild yet.