People confabulate all the time.
False memories can also be deliberately created. Here’s a classic: https://www.washington.edu/news/2001/06/11/i-tawt-i-taw-a-bunny-wabbit-at-disneyland-new-evidence-shows-false-memories-can-be-created/
Neural nets, including LLMs, have almost nothing to do with statistics. There are many different methods in machine learning; many of them are applied statistics, but neural nets are not. If you have ideas about how statistics sits at the bottom of LLMs, you are probably thinking of some other ML technique. One that has nothing to do with LLMs.
The noun doesn’t matter after an adjective like ‘multiple.’ Nothing good ever follows ‘multiple.’
-Terry Pratchett, Guards! Guards!
I thought she made some very good points, but the quote in the title makes no sense to me.
Ahh. TV shows before everything became political. Just two guys hating each other for very silly reasons completely unconnected to anything on earth.
But it’s not “from each according to his ability”. FOSS is what people feel like contributing. And it’s not “to each according to their need”. It’s take it or leave it, unless someone feels like fulfilling requests.
Traditionally, the slogan meant a duty to work. Contributing what you feel like is just charity.
Capitalism, at its core, is private control of capital. Copyright law turns code into intellectual property/capital. I’ve read the argument that copyleft requires strong copyrights. That argument implicitly makes copyleft a feature of capitalism. You know how rich people or corporations sometimes donate large sums to get their name on something, e.g. a hospital wing? That’s not so different from a FOSS license that requires attribution.
I feel this deserves more attention. Not only is the Milky Way named for literal milk; it is named specifically for human milk.
Makes you wonder what they are up to now.
The bug is called Leroy.
Upvoted. Then saw that that put the count at 422. So I had to downvote instead.
The article alleges, though without evidence, that the tracking is just an excuse to raise rates.
A quick search didn’t turn up quite the right statistics, but traffic fatalities have been seriously on the rise in the US. That probably implies higher payouts. (WP)
But also, when trackable unsafe drivers have to pay more (and trackable safe drivers less), then the unsafe drivers will prefer to be untrackable. You may be on the receiving end of the recalculated actuarial tables.
that will ultimately be used to create huge amounts of wealth for very few,
But… That is what these poisoning attacks are fighting for. They are attacking open image generators that can be used by anyone. You can use them for fun or for business, without having to pay rent to some owner who is not lifting a finger. What do you think will happen if you knock that out?
This attack doesn’t target Big Tech, at all. The model has to be open to pull off an attack like that.
This doesn’t have anything to do with tracking. This is supposed to sabotage free and open image generators (i.e. Stable Diffusion). It’s unlikely to do anything, though.
Hard to say what the makers want to achieve with this. Even if it did work, it would help artists just as much as better DRM would help programmers. On its face, this is just about enforcing some ultra-capitalist ideology that wants information to be owned.
Text explaining why the neural network representation of common features (typically with weighted proportionality to their occurrence) does not meet the definition of a mathematical average. Does it not favor common response patterns?
Hmm. I’m not really sure why anyone would write such a text. There is no “weighted proportionality” (or pathways). Is this a common conception?
You don’t need it to be an average of the real world to be an average. I can calculate as many average values as I want from entirely fictional worlds. It’s still a type of model that favors what it sees often over what it sees rarely. That embeds a form of probability, corresponding to a form of average.
I guess you picked up on the fact that transformers output a probability distribution. I don’t think anyone calls those an average, though you could have an average distribution. Come to think of it, before you use that to pick the next token, you usually mess with it a little to make it more or less “creative”. That’s certainly no longer an average.
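To make the “mess with it a little” part concrete: here is a minimal sketch (Python/NumPy, with made-up logit values) of temperature scaling, the usual knob for making the output distribution more or less “creative” before sampling:

```python
import numpy as np

# Hypothetical logits for a 4-token vocabulary (illustrative values only).
logits = np.array([2.0, 1.0, 0.5, -1.0])

def softmax_with_temperature(logits, temperature=1.0):
    """Turn logits into a probability distribution.
    temperature < 1 sharpens it (less 'creative'),
    temperature > 1 flattens it (more 'creative')."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()   # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

p_sharp = softmax_with_temperature(logits, temperature=0.5)
p_flat = softmax_with_temperature(logits, temperature=2.0)
```

After scaling, the result is still a valid probability distribution, but its shape has changed; whatever “average-like” property the raw distribution had is deliberately distorted.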
You can see a neural net as a kind of regression analysis. I don’t think I have ever heard someone calling that a kind of average, though. I’m also skeptical that you can see a transformer as a regression, but I don’t know this stuff well enough. When you train on some data more often than on other data, that is not how you would do a regression. Certainly, once you start RLHF training, you have left regression territory for good.
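To illustrate the regression view with a sketch (synthetic data, nothing LLM-specific): a single linear “neuron” trained by gradient descent on mean squared error converges to the same coefficients as ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # noisy line

# Closed-form ordinary least squares, for comparison
A = np.column_stack([x, np.ones_like(x)])
w_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# One linear neuron, trained by gradient descent on MSE
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)  # dMSE/dw
    b -= lr * 2 * np.mean(pred - y)        # dMSE/db
```

The equivalence holds for this toy case, but it breaks down exactly where the comment says: repeat some samples more often than others, or switch to RLHF-style objectives, and you are no longer doing anything a regression textbook would recognize.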
The GPTisms might be because they are overrepresented in the finetuning data. It might also be from the RLHF and/or brought out by the system prompt.
I accidentally clicked reply, sorry.
B) you do know there’s a lot of different definitions of average, right?
I don’t think any definition applies here. But I’m no expert on averages. In any case, the training data is not representative of the internet or anything else. The model is also not trained equally on all of that data, nor only on such text. What you get out is not representative of anything.
A) I’ve not yet seen evidence to the contrary
You should worry more about whether you have seen evidence that supports what you are saying. So, what kind of evidence do you want? A tutorial on coding neural nets? The math? Video or text?
That’s a) not how it works and b) not averaging.
Who exactly creates the image is not the only issue and maybe I gave it too much prominence. Another factor is that the use of copyrighted training data is still being negotiated/litigated in the US. It will help if they tread lightly.
My opinion is that it has to be legal on First Amendment grounds, or more generally freedom of expression. Fair use (a US doctrine) derives from the First Amendment, though not exclusively. If AI services can’t be used for creating protected speech, like parody, then that severely limits what the average person can express.
What worries me is that the major lawsuits involve Big Tech companies. They have an interest in far-reaching IP laws; just not quite far-reaching enough to cut off their R&D.
That’s where the “almost” comes in. Unfortunately, there are many traps for the unwary stochastic parrot.
Training a neural net can be seen as a generalized regression analysis, but that’s not where it comes from. The inspiration comes mainly from biology, and also from physics; it’s not the result of developing better statistics. Training algorithms like backprop were developed specifically for the purpose. They are not something the pioneers could look up in a stats textbook. This is why the terminology is different, and where the same terms are used, they unfortunately don’t mean quite the same thing.
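As an illustration that backprop is its own algorithm rather than a stats-textbook recipe, here is the chain rule applied by hand to a tiny two-layer net (a sketch with made-up sizes), checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny net: 3 inputs -> 4 hidden units (tanh) -> 1 output
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=3)
target = 0.5

def loss(W1, W2):
    h = np.tanh(W1 @ x)
    out = W2 @ h
    return float((out[0] - target) ** 2)

# Backprop: chain rule applied layer by layer, output to input
h = np.tanh(W1 @ x)
out = W2 @ h
d_out = 2 * (out - target)        # dL/d_out
grad_W2 = np.outer(d_out, h)      # dL/dW2
d_h = W2.T @ d_out                # dL/dh
d_pre = d_h * (1 - h ** 2)        # through tanh'
grad_W1 = np.outer(d_pre, x)      # dL/dW1

# Sanity check one entry against a central finite difference
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
W1m = W1.copy(); W1m[0, 0] -= eps
fd = (loss(W1p, W2) - loss(W1m, W2)) / (2 * eps)
```

The finite-difference check is the standard way to convince yourself the hand-derived gradients are right; none of this machinery is a regression estimator.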
Many developments crucial for LLMs have no counterpart in statistics, like fine-tuning, RLHF, or self-attention. Conversely, what you typically want from a regression - such as neatly interpretable parameters with error bars - is conspicuously absent in ANNs.
Any ideas you have formed about LLMs, based on the understanding that they are just statistics, are very likely wrong.
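Since self-attention came up as an example: a minimal single-head version is only a few lines. This is a sketch (random weights, no masking, no multiple heads), just to show the operation has no obvious counterpart in classical statistics:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X is (seq_len, d_model); returns (output, attention_weights)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 tokens, model dim 8 (made-up sizes)
Wq = rng.normal(size=(8, 4))
Wk = rng.normal(size=(8, 4))
Wv = rng.normal(size=(8, 4))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of the attention matrix is a data-dependent mixing of the value vectors; the projections are learned end-to-end by backprop, not fitted as interpretable parameters with error bars.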