The problem is that today’s state of the art is far too good for low-hanging fruit. There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail, so you’re often left with weird ad hominems (“Forget what it can do and the results you see. It’s ‘just’ predicting the next token, so it means nothing”) or imaginary distinctions built on vague and ill-defined assertions (“It sure looks like reasoning but I swear it isn’t real reasoning. What does ‘real reasoning’ even mean? Well idk but just trust me bro”)

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, the excuse is that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many (many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn’t have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliché by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There’s actually a cargo cult around downplaying AI.

The high-level characteristics of this AI are something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

  • gerikson@awful.systems · 1 year ago

    I’ve got the ACM piece in a tab, staring at me, challenging me not to nope out with a TL;DR. Is it worth getting into it? I’d love to have some ammo against promptfans of all stripes.

    • corbin@awful.systems · 1 year ago

      Yeah, it’s worth examining. I didn’t find any good takeaways, but I feel that they stated their case in a citation-supported manner; it looks like a decent article to throw at folks who claim that LLMs are intelligent.

    • raktheundead@fedia.io · 1 year ago

      Basically: AI is (potentially?) useful, but LLMs require substantially more data than a human brain to do what they do, and what they do is limited at best, often worse at generalised cases than a well-defined physics model. The ideas aren’t even new, having their roots in theoretical approaches from the 1940s and applied approaches from the 1980s, but there’s a lot more training data and processing power now, which makes them seem more impressive. Even if all of the data in the universe were present, this would not lead to AGI, because LLMs can’t figure out the “why”.

      But I don’t think there’s anything new asserted in that article if you’re familiar with the space, and the promptfans will dismiss it anyway.

    • self@awful.systems (OP) · 1 year ago

      to be honest I’m in the same boat. it’s tempting but I don’t know if I have the fortitude this week to actually engage with it