• froztbyte@awful.systems · 5 days ago

    “despite the many people who have shown time and time and time again that it definitely does not do fine detail well and will often present shit that just 10000% was not in the source material, I still believe that it is right all the time and gives me perfectly clean code. it is them, not I, that are the rubes”

    • Soyweiser@awful.systems · 5 days ago

      The problem with stuff like this is not knowing when you don’t know. People who hadn’t read the books SSC Scott was reviewing didn’t know he had missed the point (or hadn’t read the book at all) until people pointed it out in the comments. But the reviews stay up.

      Anyway, this stuff always feels like a huge motte-and-bailey, where we go from ‘it has some uses’ to ‘it has some uses if you are a domain expert who checks the output diligently’ and back to ‘some general use’.

      • V0ldek@awful.systems · 1 day ago

        A lot of the “I’m a senior engineer and it’s useful” people seem to assume that they’re just so fucking good that they’ll obviously know when the machine lies to them, so it’s fine. Which is, one, hubris; two, why the fuck are you even using it if you already have to be omniscient to verify the output??

        • blakestacey@awful.systems · 14 hours ago

          “If you don’t know the subject, you can’t tell if the summary is good” is a basic lesson that so many people refuse to learn.

    • pipes@sh.itjust.works · 5 days ago

      Ahah, I’m totally with you. I just personally know people who love it because they never learned how to use a search engine. And these generalist generative AIs are basically trained on the gobbled-up internet, while also generating so many dangerous mistakes; I’ve read enough horror stories.

      I’m in science and I’m not interested in ChatGPT; I wouldn’t trust it with a pancake recipe. Even if it was useful to me, I wouldn’t trust the vendor lock-in or enshittification that’s gonna come after I get dependent on a tool in the cloud.

      A local LLM on cheap or widely available hardware with reproducible input / output? Then I’m interested.
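
      For reference, a minimal sketch of what reproducible local inference could look like, assuming Python with the Hugging Face transformers library (the model name is just an example checkpoint, not a recommendation; greedy decoding plus a fixed seed gives the same output for the same input on a given hardware/software stack):

      # Hypothetical sketch: deterministic generation from a small local model.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example model; swap for any local checkpoint

      tokenizer = AutoTokenizer.from_pretrained(MODEL)
      model = AutoModelForCausalLM.from_pretrained(MODEL)
      model.eval()

      torch.manual_seed(0)  # fixed seed; redundant once sampling is off, but explicit

      inputs = tokenizer("Write a pancake recipe.", return_tensors="pt")
      with torch.no_grad():
          out = model.generate(
              **inputs,
              max_new_tokens=128,
              do_sample=False,  # greedy decoding: same input -> same output on this setup
          )
      print(tokenizer.decode(out[0], skip_special_tokens=True))

      The caveat is that bit-exact reproducibility across different GPUs or library versions is not guaranteed; this only pins down determinism for one machine and one set of versions.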