• Pilk@aussie.zone · 9 days ago

    People need to realise how easy it is for a human to spot synthetic content, at least with the current state of AI text generation.

    I’m not saying it shouldn’t be used; I do think it should be clearly labelled as synthetic.

    Reddit is a wasteland for this shit already, though. Probably too late.

    • melbaboutown@aussie.zone · edited · 9 days ago

      The problem is not only that it makes poor recommendations that affect the legal outcome and safety of the child.

      The LLM has also now been handed sensitive and potentially identifiable personal information, which is then subject to the company’s own rules about how that information is handled and disclosed.

      Edit: So I don’t think it should be used for this purpose.

      I’ve also refused to allow my GP to use AI to take notes during the consultation, because I don’t think the owner of that technology should have access to my medical information.

      PS: In the infancy of AI I used to participate in citizen science projects as a volunteer, training models to recognise slides with cancer cells. I also watched with interest as it was used to generate simple forms challenging parking fines (?) for those who couldn’t afford legal assistance.

      So it’s not like I’m screaming about progress being bad and Thomas Edison being a witch. I simply think a lot of corner cutting and misuse is happening without regulation, and it’s leading to real harm.