A breakdown in the weighting system is the most probable cause. Don’t get me wrong, I am not an AI engineer or scientist, just a regular CS bachelor, so my reply probably won’t be as detailed or low-level as yours. But I would look at what is going on with whatever algorithm determines the weighting. I don’t know if LLMs restructure the weighting for the next most probable word, or if it’s like a weighted drop table.
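(For what it’s worth, standard sampling is closer to the “weighted drop table” picture than to any restructuring of the weights: at each step the model scores every candidate token, a softmax turns the scores into probabilities, and the next token is drawn from that distribution. A minimal sketch with made-up numbers:)

```python
import numpy as np

# Hypothetical raw scores (logits) for a few candidate next tokens.
# The numbers are invented purely for illustration.
tokens = ["cat", "dog", "pizza"]
logits = np.array([2.0, 1.5, 0.2])

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Drawing from that distribution is effectively a weighted drop table.
rng = np.random.default_rng()
print(rng.choice(tokens, p=probs))  # usually "cat", rarely "pizza"
```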
My fun guesswork here is that I don’t think the neural net weights change during querying, only during training. Otherwise the models could be permanently damaged by users.
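(That guesswork matches how these models are normally served: inference is a read-only forward pass, and only an explicit training step rewrites the parameters. A toy PyTorch sketch of the split, with a tiny linear layer standing in for the model:)

```python
import torch

net = torch.nn.Linear(4, 2)  # toy stand-in for a model's weights
before = net.weight.detach().clone()

# "Querying": a forward pass under no_grad reads the weights but never writes them.
with torch.no_grad():
    _ = net(torch.randn(1, 4))
print(torch.equal(before, net.weight))  # True: unchanged

# "Training": only an explicit gradient step changes the weights.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
net(torch.randn(1, 4)).sum().backward()
opt.step()
print(torch.equal(before, net.weight))  # False: updated
```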
> A breakdown in the weighting system is the most probable cause. Don’t get me wrong, I am not an AI engineer or scientist, just a regular CS bachelor, so my reply probably won’t be as detailed or low-level as yours. But I would look at what is going on with whatever algorithm determines the weighting. I don’t know if LLMs restructure the weighting for the next most probable word, or if it’s like a weighted drop table.
Hate to break it to you, but you’re more qualified than me!
I only did a Coursera cert in machine learning.
> My fun guesswork here is that I don’t think the neural net weights change during querying, only during training. Otherwise the models could be permanently damaged by users.
The neural net doesn’t change, of course, but the previous text is used as context for the next generation.
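(In other words, generation is a loop: the prompt plus everything produced so far is fed back in as input at every step, while the weights stay frozen. A rough sketch, with `fake_model` as a hypothetical stand-in for the frozen net:)

```python
def fake_model(context):
    # Stand-in for a frozen network: given the context so far, return
    # made-up probabilities for candidate next tokens. The rule here is
    # arbitrary; only the loop structure matters.
    return {"end": 0.6, "the": 0.4} if len(context) > 6 else {"the": 0.7, "end": 0.3}

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = fake_model(tokens)                # all previous text is the input
        tokens.append(max(probs, key=probs.get))  # greedy pick, for simplicity
    return tokens

print(generate(["once", "upon", "a", "time"]))
# ['once', 'upon', 'a', 'time', 'the', 'the', 'the', 'end', 'end']
```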