• 0laura@lemmy.world · 2 months ago

    So rude, you didn’t answer my question at all. Nowhere does it say that the AI speaking gibberish would cause a crash. I read the article. It seems to just kinda vaguely imply that something bad might happen. I don’t really consider tricking the LLM into saying naughty things to be a security issue. If I’m missing something obvious I’d love it if you told me.

    • self@awful.systems · 2 months ago

      > Genuine question.

      > So rude, you didn’t answer my question at all.

      yeah find me one single instance of someone doing this “genuine question” shit that doesn’t result in the most bad faith interpretation possible of the answers they get

      > If I’m missing something obvious I’d love it if you told me.

      • most security vulnerabilities look like they cause the targeted program to spew gibberish, until they’re crafted into a more targeted attack
      • it’s likely that the gibberish is regurgitated from the LLM’s training data, into which companies are increasingly being encouraged to feed sensitive data
      • there’s also a trivial resource exhaustion attack where you have one or more LLMs spew garbage until they’ve either exhausted their paid-for allocation of tokens or cost their hosting organization a relative fuckload of cash
      • either you knew all of the above already and just came here to be a shithead, or you’re the type of shithead who doesn’t know fuck about computer security but still likes to argue about it
      • fuck off
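      (the resource exhaustion bullet above is just arithmetic. a back-of-envelope sketch — the price, token cap, and request rate below are made-up illustrative numbers, not any real provider’s figures:)

```python
# Back-of-envelope cost of forcing an LLM to spew garbage at its token cap.
# All three constants are assumptions for illustration, not real pricing.
PRICE_PER_1K_OUTPUT_TOKENS = 0.06  # assumed dollars per 1k generated tokens
MAX_TOKENS_PER_RESPONSE = 4096     # assumed per-response output cap
REQUESTS_PER_MINUTE = 60           # assumed sustained rate of one client

def hourly_cost(minutes: int = 60) -> float:
    """Dollars burned if every response is driven to its token cap."""
    tokens = MAX_TOKENS_PER_RESPONSE * REQUESTS_PER_MINUTE * minutes
    return tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

print(f"${hourly_cost():.2f} per hour, per client")  # → $884.74 per hour, per client
```

      scale that by however many clients the attacker runs, and it’s the hosting org’s bill either way.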
      • froztbyte@awful.systems · 2 months ago

        the amount of times I’ve had to clean shit up after someone like this “didn’t think $x would matter”…

    • froztbyte@awful.systems · 2 months ago

      so you start by claiming that you don’t think there’s any problematic security potential, follow it up by clarifying that you actually have no fucking understanding of how any of it could work and might matter, and then you get annoyed at the response? so rude, indeed!

        • froztbyte@awful.systems · 2 months ago

          you know what

          I’ll do you the courtesy of an even mildly thorough response, despite the fact that this is not the place and that it’s not my fucking job

          one of the literal pillars of security intrusions/research/breakthroughs is in the field of exploiting side effects. as recently as 3 days ago there was some new stuff published about a fun and ridiculous way to do such things. and that kind of thing can be done in far more types of environments than you’d guess. people have managed large-scale intrusions/events by the simple matter of getting their hands on a teensy little fucking bit of string.
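          (to make the side-effect idea concrete — a toy sketch, not any specific published attack: a string comparison that returns early leaks, through timing alone, how many leading characters of a guess were right.)

```python
import time

def insecure_compare(secret: str, guess: str) -> bool:
    # Returns at the first mismatch, so runtime depends on
    # how many leading characters of the guess were correct.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
        time.sleep(0.001)  # exaggerate per-character work for the demo
    return True

def timed(secret: str, guess: str) -> float:
    start = time.perf_counter()
    insecure_compare(secret, guess)
    return time.perf_counter() - start

# A guess sharing a longer prefix with the secret takes measurably longer,
# even though both comparisons return False.
assert timed("hunter2", "hunterX") > timed("hunter2", "Xunter2")
```

          the program never tells you the secret; its runtime does. that’s the shape of a lot of side-channel work.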

          there are many ways this shit can be abused. and now I’m going to stop replying to this section, on which I’ve already said more than enough.