Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM’s aren’t mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a “longitude neuron”

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly, deliberate simplification of what it means to have a world model, our skepticism towards LLMs has been dealt a mortal blow; surely we have no choice but to convert!

(*) Asterisk:
Not an actual literal map. What they really mean is that they’ve trained “linear probes” (each a little model of its own) on the activation layers, over a bunch of place-name inputs, minimizing loss against latitude and longitude (and/or time, blah blah).
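For anyone wondering what “training a linear probe” actually involves, here’s a minimal sketch. It assumes you’ve already dumped per-place activation vectors from one hidden layer into activations.npy and have ground-truth coordinates in lat_lon.npy; the file names and the choice of ridge regression are my own assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# X: (n_places, d_model) activations from one hidden layer of the LLM,
#    e.g. the residual stream at the last token of each place name.
# Y: (n_places, 2) ground-truth (latitude, longitude) for each place.
# Both files are hypothetical dumps, named here for illustration only.
X = np.load("activations.npy")
Y = np.load("lat_lon.npy")

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# The "probe" is just a regularized linear regression from activations to coordinates.
probe = Ridge(alpha=1.0)
probe.fit(X_train, Y_train)

# Held-out R^2: how linearly decodable lat/long are from the activations.
print("held-out R^2:", probe.score(X_test, Y_test))
```

That’s the whole trick: if a linear map can decode (lat, long) from the activations with decent held-out R², you get to draw colorful dots on a map.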

And yes, from the activations you can recover a fuzzy distribution of (lat, long) points on a map, and yes, they’ve been able to isolate individual “neurons” whose activations seem to correlate with latitude and longitude. (Frankly, not being able to find one would have been the surprising result; it doesn’t mean LLMs aren’t just big statistical machines, especially when the training data contains literal (lat, long) tuples for cities.)
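The “longitude neuron” hunt is even less mysterious: conceptually it amounts to ranking individual activation dimensions by how strongly they correlate with longitude and pointing at the winner. A sketch under the same assumed arrays as above; the names are mine and the paper may slice this differently:

```python
import numpy as np

X = np.load("activations.npy")   # (n_places, d_model), as above
Y = np.load("lat_lon.npy")       # (n_places, 2): latitude, longitude

longitude = Y[:, 1]

# Pearson correlation between every single activation dimension and longitude.
X_c = X - X.mean(axis=0)
lon_c = longitude - longitude.mean()
corr = (X_c * lon_c[:, None]).sum(axis=0) / (
    np.linalg.norm(X_c, axis=0) * np.linalg.norm(lon_c) + 1e-12
)

# The dimension with the largest |correlation| is your "longitude neuron".
best = int(np.argmax(np.abs(corr)))
print(f"neuron {best} correlates with longitude at r = {corr[best]:.2f}")
```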

It’s a neat visualization and result, but it is sort of comically missing the point.


Bonus sneers from @emilymbender:

  • You know what’s most striking about this graphic? It’s not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It’s just how sparse the data from the Global South are. – Also, no, that’s not what “world model” means if you’re talking about the relevance of world models to language understanding. (source)
  • “We can overlay it on a map” != “world model” (source)
  • froztbyte@awful.systems

    some of these replies (those are diff links) are staggeringly awful

    and this one is a piece of art:

    chatGPT just learns, it doesn’t reason, it doesn’t use imagination, it doesn’t plan. It learns at a scale that’s so far beyond what any human can, that it can use “pure” learning to do complex behaviors.

    • zogwarg@awful.systems (OP)

      ^^ Quietly progressing from “humans are not the only ones able to do true learning” to “machines are the only ones capable of true learning.”

      Poetic.

      PS: Eek at the *cough* extrapolation rules lawyering 😬.

    • swlabr@awful.systems

      Oof, they got so close on that last one, yet so far away. Truly a masterpiece in misunderstanding.