They’ve surely started working on it already. Current “AI” (LLMs) aren’t perfect. They require constant human adjustments.
I’m an auditor for a “machine learning” algorithm’s work, and it develops new incorrect processes faster than it corrects them. This is because corrections require intervention, which involves a whole chain of humans, whereas learning new mistakes can happen seemingly spontaneously. The premise of machine learning is that it changes over time, but it has no idea which changes were good until it gets feedback.
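The imbalance described above (mistakes learned continuously, corrections applied only when a human intervenes) can be sketched as a toy simulation. All the numbers here are made up for illustration, not taken from any real system:

```python
import random

random.seed(0)

# Toy model (hypothetical rates): the system may adopt a new incorrect
# "process" at every step, but human audits happen only periodically
# and can revert at most one bad change each time.
AUDIT_INTERVAL = 5   # steps between human reviews (assumption)
P_BAD_CHANGE = 0.4   # chance a spontaneous change is a mistake (assumption)

bad_processes = 0
for step in range(1, 101):
    if random.random() < P_BAD_CHANGE:
        bad_processes += 1   # a new mistake is learned "for free"
    if step % AUDIT_INTERVAL == 0 and bad_processes > 0:
        bad_processes -= 1   # one correction per human audit

print(f"uncorrected bad processes after 100 steps: {bad_processes}")
```

With these invented rates, mistakes arrive at roughly 0.4 per step while corrections land at only 0.2 per step, so the backlog of uncorrected behavior grows over time — which is the dynamic being described.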
So, to answer your question, I’m sure they’re throwing a ton of money at that. But when will it be viable, if ever?