OpenAI has developed a tool that could expose students who use ChatGPT to write their papers. However, according to The Wall Street Journal, the company is still debating whether to release it.
In a statement to TechCrunch, an OpenAI spokesperson confirmed that the company is exploring a method of watermarking texts. However, it is taking a “deliberate approach” due to “the complexities involved and its likely impact on the broader ecosystem outside of OpenAI.”
“The text watermarking method we’re developing is technically promising, but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers,” the company explained.
The approach is expected to differ from most previous attempts to detect AI-generated text, which have proven largely ineffective.
At the same time, the tool would focus exclusively on detecting text written with ChatGPT, not with other companies’ models.
To do this, ChatGPT would make small changes to the way it selects words, effectively embedding an invisible watermark in the text that a separate tool could later detect.
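OpenAI has not disclosed how its watermark works, but the idea of nudging word selection can be illustrated with a published technique from outside OpenAI: "green list" watermarking, in which each previous token deterministically splits the vocabulary into a favored ("green") half and a disfavored ("red") half, and generation is steered toward green tokens. A detector that knows the seeding scheme simply counts how often tokens land in the green list. The sketch below is a toy illustration of that scheme, not OpenAI's method; the tiny vocabulary and hard restriction to green tokens are simplifications (a real system would only bias the model's probabilities).

```python
import hashlib
import random

# Toy vocabulary standing in for a language model's full token set.
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]


def green_list(prev_token: str, vocab=VOCAB):
    """Deterministically split the vocabulary in half, seeded by the
    previous token, and return the 'green' (favored) half."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(vocab) // 2])


def watermarked_step(prev_token: str) -> str:
    """Toy 'generation' step: always pick a green token. A real system
    would merely bias the sampling, not hard-restrict it."""
    return sorted(green_list(prev_token))[0]


def green_fraction(tokens) -> float:
    """Detector: the fraction of tokens that fall in the green list seeded
    by their predecessor. Watermarked text scores near 1.0; unwatermarked
    text scores near 0.5 by chance."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / max(len(pairs), 1)
```

Because both sides derive the green list from the same hash, the detector needs no access to the original model's probabilities, only to the seeding scheme. That also hints at the circumvention risk OpenAI cites: paraphrasing or reordering the text replaces the tokens and washes the signal out.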
Following the Journal’s report, OpenAI also updated its May blog post about its research into detecting AI-generated content.