Something something Stallman was right (about this specific thing, anyway).
Well, it’s a “problem” for philosophers. I don’t think it’s a “problem” for neurology or hard science; that’s the only point I was trying to make.
Right now our understanding of derivative works is mostly subjective. We look at the famous Obama “HOPE” image, and the connection to the original news photograph from which it was derived seems quite clear. We know it’s derivative because it looks derivative. And we know it’s a violation because the person who took the news photograph says that they never cleared the photo for re-use by the artist (and indeed, demanded and won compensation for that reason).
Should AI training be required to work from legally acquired data, and what level of abstraction from the source data constitutes freedom from derivative work? Is it purely a matter of the output being “different enough” from the input, or do we need to draw a line in the training data, or…?
All good questions.
And yeah, all the extra data that we humans acquire through life does change everything we make.
I’d argue that it’s the crucial difference. People on this thread are arguing like humans never make original observations, or observe anything new, or draw new conclusions or interpretations of new phenomena, so everything humans make must be derived from past creations.
Not only is that clearly wrong, but it also collapses into infinite regress. If humans can only create from the work of other humans, how was anything ever created in the first place? It’s a risible suggestion.
But we make the laws, and have the privilege of making them pro-human. It may be important in the larger philosophical sense to meditate on the difference between AIs and human intelligence, but in the immediate term we have the problem that some people want AIs to be able to freely ingest and repeat what humans spent a lot of time collecting and authoring in copyrighted books. Often, without even paying for a copy of the book that was used to train the AI.
As humans, we can write the law to be pro-human and facilitate human creativity.
To be clear, I don’t think the fundamental issue is whether humans have a training dataset. We do. And it includes copyrighted work. It also includes our unique sensory perceptions and lots of stuff that is definitely NOT the result of someone else’s work. I don’t think anyone would dispute that copyrighted text, pictures, sounds are integrated into human consciousness.
The question is whether it is ethical, and should it be legal, to feed copyrighted works into an AI training dataset and use that AI to produce material that replaces, displaces, or competes with the copyrighted work used to train it. Should it be legal to distribute or publish that AI-produced material at all if the copyright holder objects to the use of their work in an AI training dataset? (I concede that these may be two separate, but closely related, questions.)
I hesitate to call it a problem because, by the way it’s defined, subjective experience is innately personal.
I’ve gotten into this question with others, and when I began to propose thought problems (like, what if we could replicate sensory inputs? If you saw/heard/felt everything the same as someone else, would you have the same subjective conscious experience?), I’d get pushback: “that’s not subjective experience, subjective experience is part of the MIND, you can’t create it or observe it or measure it…”.
When push comes to shove, people define consciousness or subjective experience as that aspect of experience that CANNOT be shown or demonstrated to others. It’s baked into the definition. As soon as you venture into what can be shown or demonstrated, you’re out of bounds.
So it’s not a “problem”, as such. It’s a limitation of our ability to self-observe the operating state of our own minds. An interesting question, perhaps, but not a problem. Just a feature of the system.
There is a so-called “hard problem of consciousness”, although I take exception with calling it a problem.
The general problem is that you can’t really prove that you have subjective experience to others, and neither can you determine if others have it, or whether they merely act like they have it.
But a somewhat obvious difference between AIs and humans is that AIs will never give you an answer that is not statistically derivable from their training dataset. You can give a human a book on a topic, ask them about the topic, and they can give you answers that seem to be “their own conclusions”, conclusions not explicitly found in the book. Whether this is because humans have randomness injected into their reasoning, or they have imperfect reasoning, or some genuine animating spirit of “free will” and consciousness, we cannot rightly say. But it is a consistent difference between humans and AIs.
The Monty Hall problem discussed in the article – in which AIs are asked to answer the Monty Hall problem, but are given explicit information that violates its assumptions – is a good example of something where a human will tend to get it right, through creativity, while an AI will tend to get it wrong, due to statistical regression to the mean.
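For anyone who wants to see why the statistical prior is so strong here, the *standard* Monty Hall setup can be verified with a quick simulation (a sketch of the classic problem only, nothing to do with any particular AI): switching wins about 2/3 of the time, which is exactly the canonical answer a model will regress to even when a reworded question quietly breaks the setup’s assumptions.

```python
import random

def monty_hall(trials: int = 100_000) -> tuple[float, float]:
    """Simulate the classic Monty Hall game; return (stay win rate, switch win rate)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # player's initial choice
        # Host opens a door that is neither the player's pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        # The only remaining door the player could switch to
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay: {stay:.3f}, switch: {switch:.3f}")  # roughly 0.333 vs 0.667
```

The point of the article’s example is that once the host’s behavior changes, this 2/3 answer is simply wrong, and the human notices while the AI repeats it.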
beehaw.org – a Lemmy instance specifically geared toward quality discussion and keeping everybody nice to each other – has basically been told by the Lemmy devs that the moderation tools they want and need just aren’t on the roadmap, and they’ll need to fork and develop their own version.
That’s an incredibly disheartening attitude.
I think the big challenge right now is sustaining growth. I don’t think many reddit refugees are paying for their fediverse services.
I support dessalines on Patreon, but I don’t really know what else I should be doing. I think that folks who want to run these services need to figure out how to charge money for it, or they won’t be able to buy infrastructure or network bandwidth.
Yeah, I use the regular YouTube client more frequently, because my senses aren’t assaulted by a crapass load of thumbnails of dumb stuff.
He’s rich. First rule of being rich is never pay when you can borrow.
Oh goody. There’s a RickRussell_CA@lemm.ee and it’s not me. And it’s using one of my older profile pictures.
EDIT: 2023/8/29 update – I posted to the lemm.ee support community and the admins decided to disable the account. Well done!
I like pixelfed.social, but I’m an admittedly “lite” user of Instagram, don’t really post my own stuff, just use it to find interesting photos. It’s been good for that.
Walk it off, Snowflaknov!
US Virgin Islands: nah we’re good
As someone who has geeked out on fonts since we were trading bitmap fonts for System 5 on the Mac, I can say that article is fine. Believe it or not, all of that actually matters to graphic design/text design people.
Did you enjoy your core download journey?
I don’t quite get the complaints. Sync (non-Pro) was always ad supported. Nothing has changed.
LJ Dawson is charging more for “Ultra” features (understandable, since this whole reddit kerfuffle has upset his business model), but you don’t need Ultra to enjoy the Sync client, and you don’t even need to pay that higher price to disable ads.