It’s not that I hate it, but like, ChatGPT sucks.
There was this huge hype around it; then we started using it … and it just makes so many errors that it’s literally generating more work. We scrapped it after less than a week. It’s modern snake oil.
Bard is the same. I asked it questions about two of my favourite bands, which I know a lot about. It omitted facts and invented things that simply aren’t true!
We used it for code generation, but we ended up spending more time fixing and debugging the generated code than it would have taken us to just write it ourselves. It also introduces the most annoying type of bugs. For example, it once misspelled a property name, but only at a single point in the code; it got it right everywhere else.
That’s why, in the case of a GPT model, you would feed it your own custom data using something like LlamaIndex. I don’t know if there’s an API available for Bard, though.
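Roughly, that looks like the sketch below. It’s a minimal example, not a drop-in solution: it assumes the OPENAI_API_KEY environment variable is set, that your files live in a local ./docs folder, and that you’re on a recent LlamaIndex release (exact import paths differ between versions).

```python
# Minimal LlamaIndex sketch: index your own documents and query them
# with an OpenAI model. Import paths vary by LlamaIndex version; this
# follows the llama_index.core layout of recent releases.
import os

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

assert os.environ.get("OPENAI_API_KEY"), "export OPENAI_API_KEY first"

# Load every file from a local folder (the path is just an example).
documents = SimpleDirectoryReader("./docs").load_data()

# Build an in-memory vector index over the documents; embeddings are
# created through the OpenAI embeddings API by default.
index = VectorStoreIndex.from_documents(documents)

# Ask a question that gets answered from *your* data, not just from
# whatever was in the model's training set.
query_engine = index.as_query_engine()
response = query_engine.query("What does our style guide say about naming?")
print(response)
```

The point is that the model answers from documents you control, which cuts down on the made-up facts people complain about above.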
You’re wrong to assume that the free models we have at our disposal are the only, or the best, implementations of these LLMs.
What! I have the opposite experience.
I’m a tabletop roleplaying game master, and it has helped me immensely with translations, formatting text, compiling and keeping track of my players’ character backgrounds, and even coming up with plots and scenes suited to each player.
I have a feeling this one’s mostly operator error.
Or you vastly overestimated what it could do.
Once we found the issues, it was actually quite easy to tell the AI to fix them. But at that point you’re debugging generated code just to improve your input to the code generator … and it was simply faster to write the code by hand.
And yes, there was a vast overestimation of what it can do, especially by some managers who used to be coders and thought it would compensate for their lack of recent practical experience. It didn’t … I had to fix it.
My point is that it’s not just for coding. If you think that’s the only use case, then sure, I get why you’d think it was shitty.
I’ve used it a bit for general knowledge things and fun facts, and on more than a couple of occasions it just made shit up.
I’m sure it has some uses; I see a lot of AI-generated porn in my “all” feed … I just haven’t found one for myself or my work.
What did you use it for? It helps me a lot with coding, scripting, translations, terminology… Sometimes it makes mistakes, but other times it produces working code that does what I asked for.
In any case, ChatGPT is just a demo that uses the GPT-3.5 Turbo model. Many people are being misled into assuming that the ChatGPT research preview is all the model has to offer. You can also try the improved GPT-4 model, but it’s not free.
If you really want to get its full potential, you need a custom implementation in Python that works against the API, where you can do things like fine-tune the model, use embeddings, feed it custom data, or give it access to tools with LangChain.
Of course, that’s not easy to do, but don’t assume the ChatGPT web/app represents the GPT models’ full potential.
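To make “works against the API” concrete, here’s a minimal sketch of a direct call to the chat completions endpoint with the official Python SDK (openai >= 1.0; older SDK versions used openai.ChatCompletion.create instead). The model name, system prompt, and user prompt are just placeholders.

```python
# Minimal sketch of calling the OpenAI chat API directly instead of
# going through the ChatGPT web app. Requires the openai package
# (>= 1.0) and the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",      # or "gpt-3.5-turbo" if you don't have GPT-4 access
    temperature=0.2,    # lower temperature = fewer creative liberties
    messages=[
        # A system prompt lets you constrain behaviour far more than the
        # web app does; this one is just an example.
        {
            "role": "system",
            "content": "You are a careful senior developer. "
                       "If you are unsure, say so instead of guessing.",
        },
        {
            "role": "user",
            "content": "Write a Python function that parses an ISO 8601 date string.",
        },
    ],
)

print(response.choices[0].message.content)
```

From there you can layer on retrieval (LlamaIndex, as mentioned above) or LangChain tools; fine-tuning and embeddings go through separate endpoints of the same API.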