On July 20, OpenAI discontinued its AI classifier, a tool meant to distinguish human-written text from AI-generated text. The company cited the classifier's low accuracy as the reason for the decision, The Verge reports.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company said.
At the same time, OpenAI said it plans to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” What those mechanisms will look like has not yet been disclosed.
Separately, researchers from Stanford University and the University of California, Berkeley recently published a paper documenting changes in GPT-4’s performance over time, questioning the stability of OpenAI’s large language models (LLMs), including GPT-3.5 and GPT-4.
OpenAI, however, denies any claims that GPT-4’s capabilities have diminished, maintaining that each new version is smarter than the previous one.