ChatGPT: experiments in writing

As I’m sure you’ve noticed by now, I’m using ChatGPT to generate some posts defining computing terms. I’m labeling them all clearly as generated text, but I’m also checking them carefully before posting.

Checking ChatGPT’s output is extremely important, because one can be lulled into believing that the program knows what it’s talking about. The text it generates always sounds authoritative, even when it’s horribly wrong. And it’s wrong more often than you might imagine.

Note: I’m not going to call ChatGPT an Artificial Intelligence (AI), because it’s nothing of the kind. It’s just a program that is able to generate believable-sounding text, regardless of its accuracy, based on analysis of a crapload of content written by humans. True AI will include this particular skill, just as it will include the ability to play chess, to generate art, to drive cars, to make quick decisions based on unpredictable circumstances, and many thousands more specific skills that humans possess.

A recent post at Futurism does a nice job of providing real-world examples of a mainstream news site (CNET) using ChatGPT to generate articles that sound fine, but include serious errors. In this case, the publisher did not initially identify the generated articles as being from ChatGPT, and while I can understand their reluctance, I think that was a bad call.

The risks of depending on ChatGPT are enumerated in this post from KnowBe4:

  • A big downside to ChatGPT is that its sources are unknown, meaning the creators themselves may not even know where the answers it produces come from. This often leads to incorrect information being given.
  • The concerning thing about this flaw is that the inaccuracies are tough to notice, especially because the output sounds confident even when it’s utter nonsense.
  • You cannot use ChatGPT to find out if text has been written by ChatGPT. There is no antidote.
  • It cannot detect scams. ChatGPT is a natural language processing tool that can answer questions and generate text based on the input it receives, but it does not have the ability to detect scams or other fraudulent activities.

Meanwhile, when I asked ChatGPT “Can ChatGPT’s writing be relied upon as accurate?” it responded as follows:

ChatGPT is a machine learning model and its accuracy is based on the data it was trained on. It can provide accurate information and responses to a wide range of questions and prompts, but it may not always be completely accurate or up-to-date. It’s always a good idea to verify any information provided by ChatGPT with additional sources.

About jrivett

Jeff Rivett has worked with and written about computers since the early 1980s. His first computer was an Apple II+, built by his father and heavily customized. Jeff's writing appeared in Computist Magazine in the 1980s, and he created and sold a game utility (Ultimaker 2, reviewed in the December 1983 Washington Apple Pi Journal) to international markets during the same period. Proceeds from writing, software sales, and contract programming gigs paid his way through university, earning him a Bachelor of Science (Computer Science) degree at UWO. Jeff went on to work as a programmer, sysadmin, and manager in various industries. There's more on the About page, and on the Jeff Rivett Consulting site.
