At the end of December, The New York Times filed a lawsuit against OpenAI, accusing the company of copyright infringement for using Times material to train its artificial intelligence models. OpenAI has published a blog post responding to the accusations.

In the post, the company calls The New York Times’ lawsuit groundless and gives four main arguments:

  1. They cooperate with news organizations and create new opportunities 
  2. Training is fair use, but the company provides an opt-out because it is the “right thing to do”
  3. “Regurgitation” is a rare bug that they are working to drive to zero
  4. The New York Times doesn’t tell the whole story

First, the blog post says that the company cooperates with news organizations and always tries to engage in dialogue and find mutually acceptable solutions.

“Our goals are to support a healthy news ecosystem, be a good partner, and create mutually beneficial opportunities,” the company wrote in the post.

According to the company, ChatGPT helps news sites reach new readers by providing links to the sources its answers draw on. OpenAI also believes the chatbot helps journalists do their jobs.

The company maintains that training on news organizations’ materials is fair use, but says that because it is primarily trying to be a good partner, it lets websites and media outlets block its crawlers from accessing their content.
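As a concrete illustration (not taken from the post itself): the opt-out OpenAI documents is a standard robots.txt rule targeting its GPTBot web crawler. A publisher wanting to keep its pages out of training data could add something like the minimal snippet below to the robots.txt file at its site root:

```
# robots.txt: block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Crawlers that honor the Robots Exclusion Protocol check this file before fetching pages, and OpenAI states that GPTBot respects such rules.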

“Regurgitation,” the phenomenon in which a model memorizes training content and then reproduces or closely imitates it, is described in the post as a rare error that OpenAI is working to minimize in every way possible.

The company also says it works to prevent users from deliberately manipulating ChatGPT into violating its rules by reproducing other people’s content.

The last, and probably most important, point in the post is OpenAI’s claim that The New York Times is not telling the whole story. According to OpenAI, the Times refused to share examples of the behavior it accuses ChatGPT of, and the newspaper obtained its exhibits by manipulating the chatbot.

“It seems they [The New York Times] intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate. Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” they wrote in the post.

The company adds that The New York Times’ content, like that of any single source, did not contribute significantly to the training of its existing models, nor would it be influential enough to matter for future training.