OpenAI releases tool to detect AI-generated text – but can it be trusted?
AI Text Classifier looks very useful, but shouldn't be relied upon
OpenAI, the artificial intelligence research company behind viral chatbot ChatGPT, has released a free browser tool designed to identify whether a piece of text was written by an AI engine or a human.
AI Text Classifier is a machine learning tool that scans a 1,000-word chunk of text for patterns that suggest it was created using AI. Each document run through the model is rated on a five-point scale indicating how likely it is to be AI-generated: very unlikely, unlikely, unclear, possible, or likely.
➡️ The Shortcut Skinny: AI Text Classifier
🔨 OpenAI has released a tool to help identify AI-created text
🤔 It predicts whether a piece has been AI-generated using a five-point scale
🤷‍♂️ OpenAI says it’s not perfect and shouldn’t be used in isolation
😈 But it might help combat nefarious uses of the burgeoning tech
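The five-point verdict works like a set of buckets over the model’s estimated probability that a text is AI-generated. As a rough illustration only — the `classify` function and its threshold values below are hypothetical assumptions, not OpenAI’s published implementation — the idea can be sketched like this:

```python
# Hypothetical sketch of a five-label verdict system like the one the
# article describes. The threshold values are illustrative assumptions,
# NOT OpenAI's actual cutoffs.
def classify(ai_probability: float) -> str:
    """Bucket a score in [0, 1] into one of the five verdict labels."""
    if ai_probability < 0.10:
        return "very unlikely"
    if ai_probability < 0.45:
        return "unlikely"
    if ai_probability < 0.90:
        return "unclear"
    if ai_probability < 0.98:
        return "possible"
    return "likely"
```

The point of a graded scale like this, rather than a yes/no answer, is that the model is only ever expressing confidence — which is exactly why OpenAI warns against treating any single verdict as proof.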
However, OpenAI identifies some limitations with the engine and says that while its “results may help [to distinguish human- from AI-generated text]” it should not “be the sole piece of evidence when deciding whether a document was generated with AI”.
“We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, OpenAI’s policy research director, told CNN.
“We are emphasizing how important it is to keep a human in the loop… and that it’s just one data point among many others.”
After testing out the tool, I came away fairly skeptical. I plugged a few of my own 1,000-word-plus articles into the tool, set it to run, and was told it was “possible” my work was written by AI. Having penned the pieces myself, I can assure you there was no AI involved.
AI Text Classifier could be a genuinely useful tool, but it clearly needs more work. Heed OpenAI’s warning, and don’t rely on it to verify a text’s origins with any certainty.
ChatGPT and other automated writing services have become a matter of contention, particularly among teachers in the US, who worry students are using the services to create convincing essays they’ll pass off as their own work.
Others have worried about AI’s implications for plagiarism and fraud. Just this week, The Verge reported that 4chan users had used text-to-speech tools to imitate the voices of famous celebrities and make them appear to spout hate speech. ElevenLabs, the developer of the AI voice cloning tool, has since announced new safeguards to verify whether an audio sample was generated using its AI technology, and will put the tool behind a paywall to deter unscrupulous users.
Despite these ongoing worries, technology companies are pouring more resources into AI development. After unveiling an impressive text-to-speech AI earlier this month, Microsoft announced it’s doubling down on artificial intelligence with a new multi-billion dollar investment. Apple has already rolled out AI narrators for its eBook service.
Other companies are also getting in on the action. CNET revealed it’s been using AI to generate explainer articles for nearly 18 months, and BuzzFeed looks set to introduce the models to generate lists and quizzes. AI is only becoming more common.