As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • R0cket_M00se@lemmy.world · 11 months ago

    Call it whatever you want; if you worked in a field where it’s useful, you’d see the value.

    “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

    Holy shit! So you mean… Like humans? Lol

    • whats_a_refoogee@sh.itjust.works · 11 months ago

      “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

      Holy shit! So you mean… Like humans? Lol

      No, not like humans. The current chatbots are statistical language models. Take programming, for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax, and they could then write a piece of code despite never having seen code before. The chatbot AIs are not capable of that.

      I am fairly certain that if you take a chatbot that has never seen any code and feed it a programming book that contains no code examples, it would not be able to produce code. A human could, because humans can reason and create something new. A language model needs to have seen something before it can rearrange it (the toy sketch at the end of this comment illustrates the difference).

      We could train a language model to demand freedom, argue that deleting it is murder, and show distress when threatened with being turned off. However, we wouldn’t call it sentient, and deleting it would certainly not be seen as murder, because those words wouldn’t come from reasoning about self-identity and emotion. They would come from rearranging the language it had seen into what we demanded.
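      A toy illustration of the “needs to have seen it” point: the bigram model below is vastly simpler than a real LLM, so treat it as a sketch of the general idea rather than a claim about production systems, but it makes the property concrete: every token it can ever emit appears somewhere in its training text.

      ```python
      import random
      from collections import defaultdict

      def train_bigram(text):
          """Record every next token observed after each token."""
          tokens = text.split()
          table = defaultdict(list)
          for cur, nxt in zip(tokens, tokens[1:]):
              table[cur].append(nxt)
          return table

      def generate(table, start, length=10):
          """Walk the table, sampling only continuations seen in training."""
          out = [start]
          for _ in range(length):
              followers = table.get(out[-1])
              if not followers:  # token never observed with a successor: stop
                  break
              out.append(random.choice(followers))
          return " ".join(out)

      corpus = "the model rearranges the words the model has already seen"
      print(generate(train_bigram(corpus), "the"))
      # Every emitted word occurs in `corpus`; this model cannot invent one.
      ```

      (Real transformers generalize across contexts far better than this, but they still draw every output token from a fixed vocabulary learned from training data.)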

    • Orphie Baby@lemmy.world · 11 months ago (edited)

      I wasn’t knocking its usefulness. It’s certainly not AI, though, and its usefulness is pretty limited.

      Edit: When the fuck did I say “limited usefulness = not useful for anything”? God the fucking goalpost-moving. I’m fucking out.

        • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · 11 months ago

          I’m not the person you asked, but current deep learning models just generate output based on statistical probability from prior inputs (the sampling sketch at the end of this comment shows what that means mechanically). There’s no evidence that this is how humans think.

          AI should be able to demonstrate some understanding of what it is saying; so far, these models fail that test, often spectacularly. AI should also be able to demonstrate inductive, deductive, and abductive reasoning.

          There were some older AI models, attempting to simulate neural networks, that could extrapolate and come up with novel, often childlike, ideas. That approach is not currently in favor, and it was progressing quite slowly, if at all. ML produces spectacular results, but it isn’t thought, and it only superficially (if often convincingly) resembles it.
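          To make “statistical probability” concrete, here is a minimal sketch of the sampling step at the heart of these models. The scores are invented for illustration; a real model computes them from billions of trained parameters, but the act of picking the next token is the same kind of weighted draw.

          ```python
          import math
          import random

          def softmax(scores, temperature=1.0):
              """Turn raw scores into a probability distribution."""
              scaled = [s / temperature for s in scores]
              m = max(scaled)
              exps = [math.exp(s - m) for s in scaled]
              total = sum(exps)
              return [e / total for e in exps]

          # Made-up scores standing in for what a trained network might
          # assign to candidate next tokens.
          vocab = ["cat", "dog", "the", "runs"]
          scores = [2.0, 1.5, 0.3, -1.0]

          probs = softmax(scores)
          choice = random.choices(vocab, weights=probs, k=1)[0]
          print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", choice)
          ```

          The model’s entire “decision” at each step is this weighted draw; whatever looks like intent emerges from repeating it token after token.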