ChatGPT consistently makes up shit. It's difficult to tell when something is made up because, as a language model, it's built to sound confident, like a person stating a fact they actually know.
It knows how to talk like a subject matter expert because that's the kind of writing that gets published most, and therefore what it's trained on, but it doesn't always know the facts needed to answer a question. It makes shit up to fill the gap and then presents it intelligently, but it's wrong.
Most of the time I use my assistant either to perform home automation tasks or to look stuff up online. The first already works fine, and for the second I won't trust a glorified autocomplete.
Good point, hallucinations only add to the fake news and artificial content problems.
I’ll counter with this: how do you know the stuff you look up online is legit? Should we go back to encyclopedias? Who writes those?
Edit: in case anyone isn't aware, GPT "hallucinates" made-up information in specific cases, particularly when the temperature and top_p sampling settings aren't well tuned. I wasn't saying anyone's opinion was a hallucination, of course.
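For anyone curious what those settings actually are, here's a minimal sketch of passing them through the OpenAI Python SDK; the model name and parameter values are just illustrative examples, and lowering them makes the sampling more conservative rather than hallucination-proof:

```python
# Minimal sketch using the OpenAI Python SDK (v1+).
# Model name and parameter values are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, swap for whatever you use
    messages=[
        {"role": "user", "content": "Who wrote the Encyclopédie?"}
    ],
    temperature=0.2,  # lower values make sampling less random
    top_p=0.9,        # nucleus sampling cutoff
)

print(response.choices[0].message.content)
```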
Some generative chatbots will say something and then link to where the info comes from. That's good because I can follow up.
Some will just say something. That's bad, and I'll have to search for it myself afterwards.
It's the equivalent of a book with no cover, or a webpage where I can't see what site it's on. Maybe it's reputable, maybe it's not. Without a source I can't really decide.
No thanks.
Care to elaborate on why not? I'm interested in your viewpoint.
Because ChatGPT isn't reliable for actual information, and I don't want any "assistant" at all.