Yes, by companies like Pangram Labs. However, most AI detectors too often fail: they either miss AI-generated essays or misidentify human-written essays as AI-generated.
Cheating on college essays and papers is nothing new. Plagiarizing from articles or books dates back hundreds of years. As Wikipedia grew in popularity and quality, cheaters turned to it. But now, Large Language Models (LLMs) like ChatGPT, Grok, Gemini, and others have quickly taken over as the favored tool for cheating. LLM-generated content eludes plagiarism detectors, because the output is not identical to the material the model has ingested.
Many educators have thrown up their hands, believing that AI-generated content can't be detected. This fatalistic view is often accompanied by the refrain, "we need to teach our kids to be AI proficient." Those of us familiar with AI understand that it doesn't take much to be "proficient." I recently spoke to one university professor who believes the above, and he teaches how to write!
More troubling is research from MIT showing that the use of AI negatively impacts our brains. One doesn't "strain" one's brain when the LLM does all of the work. It's a bit like trying to become stronger by watching others at the gym. Like muscles, our brains need to be exercised.
This AI controversy reminds me of the advent of calculators (yes, I was around back then). Why bother to learn mathematics when the calculator can do it all for you? Schools eventually settled on a compromise: once a student masters a mathematical concept, they may use a calculator to solve more complex problems more quickly. I suspect writing will evolve in a similar way. We still need to teach students how to write, but we will allow AI for ideation or content generation where it makes sense.
If your child is struggling with writing, high-dosage tutoring can be highly impactful. Studies cited by HeyTutor have shown that high-dosage tutoring can improve reading and writing skills by over 50%.
So, can AI-generated writing be detected? Absolutely.