ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
Yes.
Just because they’re stored in a neural network rather than as ASCII or Unicode doesn’t mean they’re not stored. The storage analogy is even more apt given that those works can apparently be retrieved fairly easily, even if the representations of them inside the model are hard to isolate. It seems ChatGPT is holding eidetic copies of parts of its training data, which would support what other people have said in this thread: that it is overfitting to the data rather than learning truly generalisable language.
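To make the "eidetic copies" point concrete, here's a rough sketch of the kind of memorization probe the extraction work relies on: feed the model the start of a known document and measure how much of its continuation matches the rest verbatim. The `generate` function is a hypothetical stand-in for whatever model API you're testing, not a real library call; the scoring logic is just standard-library string matching.

```python
from difflib import SequenceMatcher

def verbatim_overlap(continuation: str, reference: str) -> int:
    """Length (in characters) of the longest run reproduced verbatim."""
    match = SequenceMatcher(None, continuation, reference).find_longest_match(
        0, len(continuation), 0, len(reference)
    )
    return match.size

def memorization_probe(generate, document: str, prefix_len: int = 200) -> float:
    """Prompt with the start of a known document and score how much of the
    rest comes back word for word. `generate` is a placeholder for the model
    under test (hypothetical; swap in a real API call)."""
    prefix, rest = document[:prefix_len], document[prefix_len:]
    continuation = generate(prefix)
    longest = verbatim_overlap(continuation, rest)
    return longest / max(len(continuation), 1)  # fraction of output copied verbatim

if __name__ == "__main__":
    # Toy demo: a "model" that just copies its training text scores ~1.0.
    sample_doc = "Call me Ishmael. Some years ago, never mind how long precisely... " * 5
    fake_model = lambda p: sample_doc[len(p):len(p) + 120]
    print(f"copied fraction: {memorization_probe(fake_model, sample_doc):.2f}")
```

A score near 1.0 means the continuation is essentially an eidetic copy; a model that had actually generalised would paraphrase instead.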
The claim is that it contains entire copies of the book. It does not. AI memory is like our memory: we don't remember books word for word.
They are spitting out "verbatim text", as the quote above puts it, meaning word for word. That is copyrightable.
And that's not what you said. You said it has no memory. That's clearly wrong.
It’s only under copyright if it’s a significant portion of the work. Single sentences are not enough, unless it’s a short poem.