As I’m sure you’ve noticed by now, I’m using ChatGPT to generate some posts defining computing terms. I’m labeling them all clearly as generated text, but I’m also checking them carefully before posting.
Checking ChatGPT’s output is extremely important, because one can be lulled into believing that the program knows what it’s talking about. The text it generates always sounds authoritative, even when it’s horribly wrong. And it’s wrong more often than you might imagine.
Note: I’m not going to call ChatGPT an Artificial Intelligence (AI), because it’s nothing of the kind. It’s just a program that can generate believable-sounding text, regardless of its accuracy, based on analysis of a crapload of content written by humans. True AI will include this particular skill, just as it will include the ability to play chess, generate art, drive cars, and make quick decisions in unpredictable circumstances, along with the many thousands of other specific skills that humans possess.
A recent post at Futurism does a nice job of providing real-world examples of a mainstream news site (CNET) using ChatGPT to generate articles that sound fine but include serious errors. In this case, the publisher did not initially identify the generated articles as being from ChatGPT, and while I can understand their reluctance, I think that was a bad call.
The risks of depending on ChatGPT are enumerated in this post from KnowBe4:
- A big downside to ChatGPT is that its sources are unknown, meaning even its creators may not know where the answers it produces come from. This often leads to incorrect information.
- The concerning thing about this flaw is that the inaccuracies are tough to notice, especially because the output sounds confident even when it’s utter nonsense.
- You cannot use ChatGPT to find out if text has been written by ChatGPT. There is no antidote.
- It cannot detect scams. ChatGPT is a natural language processing tool that answers questions and generates text based on the input it receives, but it has no ability to detect scams or other fraudulent activity.
Meanwhile, when I asked ChatGPT “Can ChatGPT’s writing be relied upon as accurate?” it responded as follows:
ChatGPT is a machine learning model and its accuracy is based on the data it was trained on. It can provide accurate information and responses to a wide range of questions and prompts, but it may not always be completely accurate or up-to-date. It’s always a good idea to verify any information provided by ChatGPT with additional sources.
Update 2024Jul01: We should think of ChatGPT as a bullshit machine. To find out why, read the detailed analysis by Michael Townsen Hicks, James Humphries, and Joe Slater over at Springer.com. Excerpt:
Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.