• Gigan@lemmy.world · 11 months ago

    GRRM is worried AI will finish writing his books before him

    • Daqu@feddit.de · 11 months ago

      We could teach ducks to write and they’d finish before him.

        • realharo@lemm.ee · 11 months ago

        Another moment in A Dream of Spring involved Bran receiving a vision that The Wall was not just a physical barrier, but a mystical shield holding back the Night King’s power. “This twist fits well within the universe and raises tension for the remainder of the story,” Swayne remarks.

        That’s just a popular fan theory that has been discussed countless times on various forums.

        I guess we can conclude that ChatGPT has been reading a lot of reddit.

          • Madrigal@lemmy.world · 11 months ago

          Well, assuming it wasn’t all nuked over the past few months, the fan theory stuff on Reddit was probably a pretty good training dataset.

          • foggenbooty@lemmy.world · 11 months ago

          Absolutely it has been. That’s what sparked the whole Reddit API debacle. Reddit wants that sweet cash stream from machine learning trawling its data.

    • StarkillerX42@lemmy.ml · 11 months ago

      Actually getting a good ending to ASOIAF after GRRM dies is gonna be one of the big turning points that transforms everyone’s opinion on AI.

      It’s gonna be like fan edits for movies. People will debate which is the better version of the story. The only person hurt by this is George, who will be dead and was never going to finish the books anyways.

    • ripcord@kbin.social · 11 months ago

      Since he will never finish the next book, then that’s very likely given infinite time :)

  • HipPriest@kbin.social · 11 months ago

    I mean this isn’t miles away from what the writers’ strike is about. Certainly I think the technology is great, but after the last round of tech companies turning out to be fuckwits (Facebook, Google etc.) it’s only natural that people are going to want to make sure this stuff is properly regulated and run fairly (not at the expense of human creatives).

    • archomrade [he/him]@midwest.social · 11 months ago

      As it stands now, I actually think it is miles away.

      Studios were raking in huge profits from digital residuals that weren’t being passed to creatives, but AI models aren’t currently paying copyright holders anything. If they suddenly did start paying publishers for use, it would almost certainly exclude payment to the actual creatives.

      I’d also point out that LLMs aren’t like digital distribution models, because LLMs aren’t distributing copyrighted works; at best you can say they’re distributing a very lossy (to use someone else’s term) compressed alternative that would have to be pieced back together manually if you really wanted to extract it.

      No argument that AI should be properly regulated, but I don’t think copyright is the right framework to be doing it.

  • FluffyPotato@lemm.ee · 11 months ago

    If the models trained on pirated works were available as a non-profit sort of setup, with any commercial application banned, I think that would be fine.

    Business owners salivating over the idea that they can just pocket the money writers and artists would make is not exactly a good use of tech.

  • Bruncvik@lemmy.world · 11 months ago

    To be the devil’s advocate (or GRRM’s attorney), I see the merit of his and other authors’ concerns. ChatGPT makes it feasible to generate short stories in their worlds and with their characters, which can easily replace their licensed products. This affects not just their main work, but also other products that generate revenue for them.

    Example: A friend of mine is using ChatGPT to generate short bedtime stories for his daughters. A typical request is something like this: “Please write a five paragraph story where Elsa from Frozen meets Santa Claus. Together, they fly in Santa’s sleigh over the world, and Elsa is magicking snow on all Christmas trees.” Normally, you’d buy a Disney-licensed book of short Christmas stories (I have one for my kids), but ChatGPT is more flexible and free.

    Same goes for GRRM. He doesn’t write children’s stories, but one can still prompt ChatGPT to produce stories from his universe, which scratch the ASOIAF itch. This substitutes for the officially licensed products and deprives the author of an additional revenue stream. Just for the fun of it, I prompted ChatGPT: “Hello GPT-3.5. Please write a four paragraph story set in the Game of Thrones universe. In this story, Jon Snow and Tyrion Lannister go fishing and catch a monster alligator, which then attacks them.” It produced a surprisingly readable story, and if I were a fan of this universe, I can imagine myself spending a lot of time with different prompts and then reading the results.

    (On a side note, AI-generated content already has at least one group of victims: the authors of short fiction. Magazines like Clarkesworld were forced to close submissions of new stories, as they became overwhelmed with AI-generated content.)

    • archomrade [he/him]@midwest.social · 11 months ago

      Couple things:

      • I don’t see why ChatGPT would be at fault itself here. Taking the rest of your argument as granted, ChatGPT is more like a tool or service that provides “snippets” or previews, such as a Google image search or YouTube clips or summaries. The items being produced are of a fundamentally different quality and quantity and cannot really be used to copy a work wholesale. If someone is dedicated enough to piece together a complete story, I would think their role in producing it is more important than the use of ChatGPT.

      • Copyright law itself is broken and too broad as it is; I don’t think we should be stretching it even further to protect against personal use of AI tools. An argument can be made if an individual uses ChatGPT to produce a work which is then commercialized (just like any other derivative work), but the use of the tool by itself seems like a ridiculously low bar that benefits basically no one.

      • Bruncvik@lemmy.world · 11 months ago

        You are right, especially regarding the copyright law. My argument here, however, was the same argument companies are using against non-genuine spare parts or 3D printing (even though the latter seems to be a lost battle): people who are able to generate substitutes based on a company’s designs (you could say its IP) are eating into its aftermarket profits. That’s not even taking into account planned obsolescence (my kids’ toys are prime examples) or add-ons to products (I printed my own bracket for my Ring doorbell). With AI, I don’t need to buy short story books for my kids to read; I’ll generate my own until they are old enough to use ChatGPT themselves.

        • archomrade [he/him]@midwest.social · 11 months ago

          Yea, I mean I get why automated tools are bad for companies, I just don’t have any sympathy for them, nor do I think we should be stretching our laws beyond their intent to protect them from competition. I think the fair-use exceptions to the DMCA (such as for breaking digital copy protection for personal use) are comparable here. Under those exceptions, for example, it’s considered fair use to rip a DVD into a digital file as long as it’s for personal use. An IP holder could argue that practice “eats into their potential future profits” from individuals who may want a digital version of a media product, but it’s still protected. In that case, the value to the consumer is prioritized over a company’s dubious copyright claim.

          In my mind, a ChatGPT short story is not a true alternative to an original creative work (an individual can’t use GPT to read ASOIAF, only derivative short stories), and the works that GPT CAN produce are somewhat valueless to an individual who hasn’t already read the original. Only if they were to take those short stories and distribute them (i.e. someone ripping a DVD and sharing that file with friends and family) could ‘damages’ really be assumed.

          I think the outcome of these lawsuits can help inform what we should do, also: LLMs as a tool will not go away at this point, so the biggest outcome of this kind of litigation would be inflation of the cost of producing an LLM and of the value of the “data” necessary to train it. This locks out future competitors and preemptively consolidates the market into established hands (Twitter, Reddit, Facebook, and Google already “own” the data their users have signed over to them in their TOS). Now is the time to rethink copyright and creative compensation models, not double down on our current system.

          I really hope the judges overseeing these cases can see the implications here.

  • Margot Robbie@lemmy.world · 11 months ago

    I’ve expressed my opinions on this before, which weren’t popular, but I think this case is going to get thrown out. Authors Guild, Inc. v. Google, Inc. established the precedent that digitization of copyrighted work is considered fair use, and finetuning an LLM even more so, because LLMs can ultimately be thought of as storing text data with a very, very lossy compression algorithm, and you can’t exactly copyright JPEG noise.

    And I don’t think many of the writers or studio people have actually tried to use ChatGPT for creative writing, so they think it magically outputs perfect scripts just by telling it to write a script. The reality is that if you give it a simple prompt, it generates the blandest, most uninspired, badly paced textual garbage imaginable (LLMs are also really terrible at jokes), and you have to spend so much time prompt engineering just to get it to write something passable that it’s often easier just to write it yourself.

    So I think the current law on this is fine: purely AI-generated content is uncopyrightable, and NOBODY can make money off it without significant human contribution.

    • torpak@discuss.tchncs.de · 11 months ago

      the reality is if you give it a simple prompt, it generates the blandest, most uninspired, badly paced textual garbage imaginable

      Which is not too far from the typical sequel quality coming out of Hollywood at the moment ;-)

    • Lt_Cdr_Data@discuss.tchncs.de · 11 months ago

      This will definitely change though. As LLMs get better and develop new emergent properties, the gap between a human-written story and an AI-generated one will inevitably diminish.

      Of course you will still need to provide detailed and concrete input so that the model can provide you with the most accurate result.

      I feel like many people subscribe to a sort of human superiority complex that is unjustified and will quite certainly get stomped in the coming decades.

      • torpak@discuss.tchncs.de · 11 months ago

        That is definitely not inevitable. It could very well be that we reach a point of diminishing returns soon. I’m not convinced that the simplistic construction of current-generation machine learning can go much further than it already has without significant changes in strategy.

        • Lt_Cdr_Data@discuss.tchncs.de · 11 months ago

          Could be, but the chances of that happening are next to zero, and it’d be foolish to assume this is the peak.

  • Wogi@lemmy.world · 11 months ago

    When the movie about the nerds behind these apps comes out, this will be the part of the movie trailer where Jesse Eisenberg looks nervous and says he’s being sued for over a billion dollars.

    • db2@sopuli.xyz · 11 months ago

      And if AI writes it Walter White will appear and announce the need to cook. Then they’ll all melt for no reason.

    • Echo Dot@feddit.uk · 11 months ago

      I’m still not convinced that Jesse Eisenberg and Sam Altman aren’t actually the same person.

  • Echo Dot@feddit.uk · 11 months ago

    Copyright law in general is out of date and needs updating. It’s not just AI that’s the problem; AI is just the biggest new thing. Content creators of traditional media have been railing against what they perceive as copyright violation for ages.

    Look at Nintendo and Let’s Plays.

    The law is the problem here. Not AI.

    • Ryantific_theory@lemmy.world · 11 months ago

      Copyright law has been such a disaster for so long, while clearly being wielded like a blunt weapon by corporations. I can see the existential threat that generative AI can pose to creators if it becomes good enough. And I also am aware that my dream of asking an AI to make a buddy cop adventure where Batman and Deadpool accidentally bust into the Disney universe, or remake the final season of Game of Thrones, is never gonna be allowed, but there’s honestly a huge amount of potential for people to get the entertainment they want.

      At any rate, it seems likely that they’re going to try and neuter generative AI with restrictions, despite it really not being the issue at hand.

  • Anonymousllama@lemmy.world · 11 months ago

    “LLMs allow anyone to generate — automatically and freely (or very cheaply) — text that they would otherwise pay writers to create” My heart bleeds for them 🙄

    That new technology is going to make it harder for us to earn income. As if automation and other improvements over the years haven’t diminished other positions, and as if those positions should somehow be protected at the cost of improvements for everyone as a whole.

    • thehatfox@lemmy.world · 11 months ago

      Do any of these authors use a word processor? Because that would be displacing the job of a skilled typist.

      Technological progress is disruptive and largely unavoidable. Losing your livelihood to a machine isn’t fun, I don’t dispute that. But that fact didn’t stop the industrial revolution, the automobile, the internet, or many other technological shifts. Those who embraced them reaped a lot of benefits, however.

      Technology is also often unpredictable. The AI hype train should not be taken at face value, and at this point we can’t say if generative AI systems will ever really “replace” human artistry at all, especially at the highest levels. But technologies such as LLMs do not have to reach that level to be useful for other applications, and if the tech is killed over unfounded fear mongering we could lose all of it.

      • Echo Dot@feddit.uk · 11 months ago

        Also they’re not going to lose their livelihoods. They might lose a little bit of money, but honestly even that I doubt.

        We are still going to need humans to create creative works, and as much as Hollywood reckons it’s going to replace actors with AI, it’s still going to need humans to write the scripts, unless it can convince everyone that formulaic, predictable nonsense is the new hotness.

        Creative work is probably the only industry that will ultimately be safe from AI, not because AI can’t be creative, but because humans want humans to be creative. We put special value on human-created works. That’s why people object to AI art so much: not because it isn’t good, but because it lacks, for want of a better word, any soul.

      • Dr. Moose@lemmy.world · 11 months ago

        What’s the alternative? That only mega-billion corporations and pirates should be allowed to train AI? Can’t you see how much worse that is?

      • archomrade [he/him]@midwest.social · 11 months ago

        I fail to see how training an LLM in any way discourages authors from producing or distributing new works, which is ostensibly the intent of copyright law.

    • Honytawk@lemmy.zip · 11 months ago

      “Those fancy robots will allow anyone to create — automatically and freely (or very cheaply) — cars that they would otherwise pay mechanics to create”

      Oh the horror

  • Flying Squid@lemmy.world · 11 months ago

    Julia was twenty-six years old… and she worked, as he had guessed, on the novel-writing machines in the Fiction Department. She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor… She could describe the whole process of composing a novel, from the general directive issued by the Planning Committee down to the final touching-up by the Rewrite Squad. But she was not interested in the final product. She “didn’t much care for reading,” she said. Books were just a commodity that had to be produced, like jam or bootlaces.

  • AutoTL;DR@lemmings.world [bot] · 11 months ago

    This is the best summary I could come up with:


    According to the complaint, OpenAI “copied plaintiffs’ works wholesale, without permission or consideration” and fed the copyrighted materials into large language models.

    The authors added that OpenAI’s LLMs could result in derivative work “that is based on, mimics, summarizes, or paraphrases” their books, which could harm their market.

    OpenAI, the complaint said, could have trained GPT on works in the public domain instead of pulling in copyrighted material without paying a licensing fee.

    This is the latest lawsuit against OpenAI from popular authors — Martin wrote Game of Thrones, Grisham’s many books have been turned into films, and so on — alleging copyright infringement.

    Amazing Adventures of Kavalier and Clay writer Michael Chabon and others sued the company for using their books to train GPT earlier in September.

    Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey also sought legal action against OpenAI and Meta, while Paul Tremblay and Mona Awad filed their complaint in June.


    The original article contains 323 words, the summary contains 157 words. Saved 51%. I’m a bot and I’m open source!