A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.
The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, such as through watermarking. It also directs the agency to create security measures to prevent tampering, requires AI tools for creative or journalistic content to let users attach information about the content's origin, and prohibits that information from being removed. Under the bill, such content also could not be used to train AI models.
Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.
(A copy of the bill is in the article, here is the important part imo:
Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)
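For anyone wondering what "content provenance information" plus tamper detection could even look like, here's a rough sketch. To be clear, this is not the actual scheme NIST (or C2PA) would standardize, just a toy manifest built with Python's standard library; the key, creator, and tool values are made up for illustration, and a real system would use public-key signatures rather than a shared secret.

    import hashlib
    import hmac
    import json

    # Toy provenance manifest: illustrative only, not the NIST/C2PA scheme.
    SIGNING_KEY = b"creator-held-secret"  # hypothetical key held by the creator

    def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
        """Describe the content's origin and add a tag that exposes tampering."""
        manifest = {
            "creator": creator,
            "tool": tool,
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["tag"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_provenance(content: bytes, manifest: dict) -> bool:
        """Return True only if the content and its manifest are unmodified."""
        body = {k: v for k, v in manifest.items() if k != "tag"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (
            hmac.compare_digest(manifest.get("tag", ""), expected)
            and body.get("content_sha256") == hashlib.sha256(content).hexdigest()
        )

    photo = b"...raw image bytes..."
    manifest = attach_provenance(photo, creator="Jane Doe", tool="Nikon D750")
    print(verify_provenance(photo, manifest))             # True
    print(verify_provenance(photo + b"edit", manifest))   # False: tampering detected

The point is just that "provenance" boils down to origin metadata cryptographically bound to a hash of the content, so editing either one without authorization is detectable.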
I posted this in a thread, but I'm gonna make it a parent comment for those who support this bill.
Consider YouTube poop. I'm serious. Every clip in them is sourced from preexisting audio and video, then mixed or distorted in a comedic format. You could make an AI to make YouTube poops using those same clips and other "poops" as training data. What it outputs might be of lower quality (less funny), but in a technical sense it would be made in an identical fashion. And, to the chagrin of Disney, Nintendo, and Viacom, these are considered legally distinct works, because I don't watch Frying Nemo in place of Finding Nemo. So why would it be any different when an AI makes it?
My best guess would be intent, which I think is an important component of fair use. The intent of YouTube poop creators could be considered parody, and while someone could use AI to create parody, the intent behind creating the AI model itself is not parody (at least not for these massive AI models that most people use).
The thing is, transformation is in itself fair use. A YTP doesn't need to be parody or critique or anything else, because it's fundamentally no longer the same product as whatever the source was, as a direct result of the editing.
Still, the AI model itself is not transformative; it is merely incorporating that data into its training set.
But what it outputs IS transformative, which, of course, is the primary use.
If I include an image of Mickey Mouse (ripped straight from Disney) in my application in a proprietary compression format, and the application then decompresses that image and changes the hue (or makes some other modification), those are technically "transformations," but they're not transformative.
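To make that concrete, here's a minimal sketch of that kind of purely mechanical pipeline (assuming the Pillow library; "mickey.png" is a placeholder file name). Every step is deterministic image processing with no creative input, which is exactly why it's a "transformation" in the technical sense without being transformative in the fair-use sense.

    from PIL import Image

    # Decompress a bundled image and rotate its hue: a purely mechanical change.
    img = Image.open("mickey.png").convert("RGB").convert("HSV")
    h, s, v = img.split()
    h = h.point(lambda x: (x + 64) % 256)  # shift hue by roughly 90 degrees
    Image.merge("HSV", (h, s, v)).convert("RGB").save("mickey_hue_shifted.png")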
The law being violated there is trademark, not copyright
No it isn't. The image of Mickey Mouse was literally copied (hence copyright, literally the right to copy). Regardless, that's still IP law being violated, so I don't know how that helps your case.
If you aren't calling it Mickey Mouse, it would actually be fine from a copyright perspective. What you'd get sued for is the character design itself being too similar, which is a trademark/IP issue.
I see this argument a lot as a defense of AI art, and I see a couple of major flaws in this line of thinking.
First, it's treating the AI art as somehow the same as a derivative (or parody) work made by an actual person. These two things are not the same and should not be argued like they are.
AI art isn't just derivative. It's a Frankenstein's monster of a bunch of different pieces of art stitched together in a procedural way that doesn't credit, and in fact obfuscates, the original works. This is problematic at best and flat-out dishonest thievery at worst. A derivative or parody work made by a person, on the other hand, has actual work and thought put into it, and would typically at least credit the original works being riffed on. That involves actual creative thought and a human touch. Even if it is derivative, it's unique in some way simply by virtue of being made by a person.
AI art cannot and will not ever be unique, at least not when it's used to create a work wholesale, because it's not being creative. It's calculating and nothing more. (At least if we're talking about current technology. A possible future general AI could flout this argument, but that would get into an AI personhood conversation not really relevant to our current machine learning tech.)
Secondly, no one is worried that some hypothetical shitty AI video is going to somehow usurp the work that it's stealing from. What people are worried about is that AI art is going to be used in place of hiring actual artists for bigger projects. And the fact that this AI art exists solely because it scraped the internet for art from those same artists now losing their livelihoods makes the tech incredibly fucked up.
Now, don't get me wrong. I do believe machine learning has its place in society, and we've already been using it for a long time to help with large tasks that would be incredibly difficult, if not impossible, for people to do on their own, across a bunch of different industries. Things like drug research in the pharmaceutical sector and fraud monitoring in the banking sector come to mind.
Also, there is an argument to be had that machine learning algorithms could be used as tools in creating art. I don't really have a problem with those use cases. What comes to mind is a bunch of tools that exist in music production right now that, in my opinion, help artists fulfill their vision. Watch some There I Ruined It videos on YouTube to see what I mean. Yeah, that guy is using AI to make himself sound like other musicians, but he also had to be a really solid singer and impressionist in the first place for those songs to be any good at all.
You could say that about literally all art: no artist can name and attribute every single influence that had even the smallest effect on the work they created. Say I commissioned an image of an anime man in a French maid uniform in a four-panel pop art style. In creating it, at some level you are going to draw on every anime image you've seen, every picture of a French maid uniform, and every four-panel pop art image, and create something that's a synthesis of all those things. You can't name and attribute every single example of all of those things you have ever seen, as well as anything else that might have influenced you.
…and this is the crux of it: it's not anything related to the actual content of the image; it's simple protectionism for a class of worker. Basically, creatives are seeing the possibility of some of their jobs being automated away and are freaking out, because losing jobs to automation is something that's only supposed to affect manufacturing workers.
Again, the argument is that it has nothing to do with the actual result, but with it being done by an actual human as opposed to a mere machine. A pixel-for-pixel identical image created by a human would be "art" by virtue of it being a human that put each pixel there?
Except I couldn’t. Because a person being influenced by an artwork and then either intentionally or subconsciously reinterpreting that artwork into a new work of art is a fundamentally different thing from a power hungry machine learning algorithm digesting the near entirety of modern humanity’s art output to churn out an image manufactured to best satisfy some random person’s text prompt.
They’re just not the same thing at all.
The whole purpose of art is to be an outlet for expressing ourselves as human beings. It exists out of this need for expression; part of what makes a work worth appreciating is the person(s) behind the work and the effort and skill they put into making it.
Yes, it has nothing to do with the content of the image. I never claimed otherwise. In fact, AI art sometimes being indistinguishable from human-made art is part of the problem. But we're not just talking about automating someone's job. We're talking about automating someone's passion. Automating someone's dream career. In an ideal world we'd automate all the shitty jobs and pay everyone to play guitar, paint a portrait, write a book, or direct a film. Art being made by AI won't just take away jobs for creatives; it'll sap away the drive we have as humans to create. And when we create less, our existence will be filled with even more bleakness than it already is.
I'm not certain I understand what you're asking. But if the human is the one making the decision on where to put each pixel, then yeah, that would be fine. But at no point am I arguing about whether or not AI art is "art". That would just be a dumb semantic argument that'd go nowhere. I'm merely discussing why I believe AI art to be unethical, and taking work away from creatives is only one facet of why I do.
The big differences there are whether it's a person or a machine, and just how much art one can digest as inspiration. Again, look at my example of a commission above. The main difference between a human and an AI making it is whether they look up a couple dozen examples of each element to get a general idea or 100 million examples of each element to mathematically generalize the idea. And the main reason the number of examples and the power requirements need to be so different is that humans are extremely efficient pattern-developing and pattern-matching machines, so efficient that sometimes the brain just fills in the pattern instead of bothering to fully process sensory input (which is why a lot of optical illusions work).
At some level, "churning out an image to best satisfy some random person's" description is essentially what happens when someone commissions a work or when things are produced to spec as part of a project. They don't generally say "just draw whatever you are inspired to" and hope they like the result. This is the thing AI image generators are specifically good at, and it's why I say it's about protectionism for a class of workers who didn't think their jobs could be automated away in whole or in part.
Except you are; you are just deeming that job "someone's dream career" as though that changes whether or not it's a job being automated in whole or in part. Yes, it's going to hurt the market for commissioned artwork and the like. Again, you're upset because those jobs were supposed to be immune to automation and, whoopsie, they aren't. Join the people in manufacturing, or the makers of buggy whips.
Literally no one is going to ban or forbid anyone from creating art because AI art exists.
With all respect, your argument has a pretty obvious emotional valence. You don’t care if the result is 1:1, you care that it happened in a way that makes you uncomfortable. Art can be an outlet for self expression and no one is taking that away. What’s it to you if I enjoy asking an AI for art?
The fact of the matter is, capitalism has never been a good place for artists who want to follow their dream. If that's something you want, then I'd suggest supporting the end of working for money, which automation makes possible. Then people can truly work on whatever they care about all day and not have to worry about feeding themselves.
This is completely and utterly your own opinion, not a fact. I know several people who can't draw for shit, for various reasons, but AI now allows them to create images they enjoy. One of them has aphantasia (they literally cannot picture images in their mind).
This is basically trying to argue there's only one correct way to make "art," which is complete and utter bullshit. Imagine trying to say that a sculpture isn't art because it was 3D printed instead of chiseled. It makes zero sense for the method of making the art to determine whether or not it is art. "Expression" can take many forms. Why is this form invalid?
Never claimed it wasn't an opinion. And I fully acknowledge that tools can make creating art easier. Hell, I even support the use of machine learning tools when making art; when used as tools and not as a means of creating art wholesale, they can enable creativity. But, I'm sorry, writing a text prompt for an AI to produce an image is not making art (for the person writing the prompt). It's writing a prompt, in the same way that a project manager writing a brief for a contract artist to fulfill is not creating the art. The AI is producing the art (and, by extension, the artists who created the works the AI was trained on). Your friend with aphantasia is not.
Again, I never said any form of art was invalid. Not even AI art. Nor do I think AI art isn't art. AI art is perfectly capable of producing something worthwhile by means of its content; it's basing its output on worthwhile works of art created by people, after all. I'm merely arguing that AI art is unethical. If you made a mural out of the blood of children you murdered, it'd still be art. But it sure as shit wouldn't be ethical.