ChatGPT generates cancer treatment plans that are full of errors: researchers at Brigham and Women’s Hospital found that OpenAI’s chatbot provided false information when asked to design cancer treatment plans.

  • eggymachus@sh.itjust.works · 1 year ago

    And this tech community is being weirdly luddite over it as well, saying stuff like “it’s only a bunch of statistics predicting what’s best to say next”. Guess what, so are you, sunshine.
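
    For what it’s worth, the “bunch of statistics predicting what’s best to say next” part is easy to sketch. Below is a toy next-word sampler built on bigram counts; it’s purely illustrative (the corpus is made up, and a real LLM is a neural network over subword tokens, not a count table), but the core idea of sampling from a learned conditional distribution is the same:

    ```python
    import random
    from collections import defaultdict

    # Toy "language model": count which word follows which in a tiny corpus,
    # then sample the next word from that conditional distribution.
    corpus = "the model predicts the next word and the next word follows the model".split()

    # bigram_counts[prev][nxt] = how often `nxt` appeared right after `prev`.
    bigram_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def next_word(prev: str) -> str:
        """Sample the next word in proportion to how often it followed `prev`."""
        followers = bigram_counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    # Generate a short continuation, one statistically plausible word at a time.
    words = ["the"]
    for _ in range(8):
        words.append(next_word(words[-1]))
    print(" ".join(words))
    ```

    Swap the count table for billions of parameters fit by gradient descent and you’re roughly in LLM territory; the generation loop stays this simple.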

    • PreviouslyAmused@lemmy.ml · 1 year ago

      I mean, people are slightly more complicated than that. But sure, at their most basic, people do communicate using something like statistical models.

    • amki@feddit.de · 1 year ago

      Might be true for you, but most people do have a concept of true and false and don’t just dream up stuff to say.

    • dukk@programming.dev · 1 year ago

      IMO for AI to reach a useful point it needs to be able to learn. Now I’m no expert on neural networks, but if it can’t learn anything new once it’s been trained, it’s never really going to reach its true potential. It can imitate a human, but that’s about it. Once AI can really learn, it’ll become an order of magnitude more useful. Don’t get me wrong: all this AI work is a step in the right direction, but we’ll only be able to go so far with pre-trained models.
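
      To make the pre-trained-versus-learning distinction concrete, here’s a minimal count-based sketch (the ToyModel class and the example sentences are invented for illustration; a real neural network “learns” by gradient descent, and updating one safely after deployment is genuinely hard, partly because naive updates cause catastrophic forgetting):

      ```python
      from collections import defaultdict

      class ToyModel:
          """Count-based stand-in for a language model: 'training' accumulates statistics."""

          def __init__(self):
              self.counts = defaultdict(lambda: defaultdict(int))

          def train(self, text: str) -> None:
              """Learning: update the model's statistics from new data."""
              words = text.split()
              for prev, nxt in zip(words, words[1:]):
                  self.counts[prev][nxt] += 1

          def predict(self, prev: str) -> str:
              """Inference: read the statistics, change nothing."""
              followers = self.counts[prev]
              return max(followers, key=followers.get) if followers else "<unknown>"

      model = ToyModel()
      model.train("the patient needs rest")           # "pre-training"

      print(model.predict("patient"))   # -> needs
      print(model.predict("dose"))      # -> <unknown>: the frozen model has never seen it

      # A deployed pre-trained model stops here: predict() is all it ever does.
      # "Really learning" would mean the deployed system keeps calling train():
      model.train("the dose depends on the patient")  # continual-learning step
      print(model.predict("dose"))      # -> depends
      ```

      That’s the whole point in miniature: today’s deployed models only ever run predict(); letting them keep running train() on new data is what “really learning” would mean.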