OpenAI spends about $700,000 a day just to keep ChatGPT running. That figure does not include its other AI products, like GPT-4 and DALL-E 2. Right now, it is pulling through only because of Microsoft's $10 billion investment.
I asked Bard how ChatGPT can fix itself; here's what it said:
ChatGPT can fix itself in a number of ways, including:
- Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.
- Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.
- Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.
- Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.
- Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT's performance.
“Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”
So basically create its own Fox News and see how that goes.
This is widely known to destroy your model very quickly.
The feedback loop is already happening, and it's called model collapse.
It’s not a good thing.
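You can watch a cartoon version of this in a few lines of numpy. To be clear, this is a toy sketch under a big simplifying assumption: a one-dimensional Gaussian stands in for the "model", and each generation is fitted purely to samples drawn from the previous one. Nothing here is anyone's actual training pipeline.

```python
# Toy model-collapse demo: each "model" is a Gaussian fitted only to
# samples generated by the previous model. Purely illustrative; this
# is not a real LLM training loop.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 is fitted to real data drawn from N(0, 1).
real = rng.normal(loc=0.0, scale=1.0, size=100)
mu, sigma = real.mean(), real.std()

for generation in range(1, 2001):
    # Train generation N only on generation N-1's output.
    synthetic = rng.normal(loc=mu, scale=sigma, size=100)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 500 == 0:
        print(f"gen {generation:4d}: mu={mu:+.4f} sigma={sigma:.6f}")

# sigma drifts toward zero: the chain forgets the tails of the original
# distribution and degenerates toward a near-constant output.
```

Each round of fitting loses a little of the tails, and because every generation only ever sees the previous generation's output, those losses compound instead of averaging out. That's roughly the dynamic the model-collapse papers describe for LLMs, just in vastly more dimensions.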
The full suggestion includes "This would allow ChatGPT to learn from its own mistakes", which implies that the text it generated would be evaluated and curated before being sent back in for training. That, as well as including non-AI-generated text along with the AI-generated stuff, should stop model collapse.
Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad; in fact, it's used deliberately when you're trying to breed an organism to have specific desirable characteristics.
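Continuing the same toy Gaussian sketch from above (still an illustration, not anyone's real training recipe): if each generation's training batch keeps a fraction of genuinely real data, the estimate stays anchored and the collapse stops. The `real_fraction` knob and all the sizes here are made up for the demo.

```python
# Toy follow-up: the same Gaussian chain, but each generation's training
# batch mixes fresh real data in with the model's own output. Purely
# illustrative; real_fraction and all sizes are made-up demo parameters.
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the non-AI-generated corpus

def final_sigma(real_fraction, batch=100, gens=2000):
    """Run the self-training chain and return the fitted sigma at the end."""
    mu, sigma = real[:batch].mean(), real[:batch].std()
    for _ in range(gens):
        n_real = int(real_fraction * batch)
        mixed = np.concatenate([
            rng.choice(real, size=n_real),               # curated real text
            rng.normal(mu, sigma, size=batch - n_real),  # AI-generated text
        ])
        mu, sigma = mixed.mean(), mixed.std()
    return sigma

for frac in (0.0, 0.1, 0.5):
    print(f"real fraction {frac:.1f}: sigma after 2000 generations = {final_sigma(frac):.4f}")
```

With a real fraction of 0.0 the chain collapses as before, while even 10% real data per batch keeps sigma in the neighborhood of the true 1.0. That re-anchoring is the outbreeding half of the analogy.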
If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.