The company OpenAI lobbied in the European Union for the relaxation of regulation in the field of artificial intelligence. This is reported by Time with reference to documents on the company’s interaction with European officials.

At issue is the EU AI Act, which was approved by the European Parliament on June 14 and will now move to the final stage of negotiations ahead of final adoption in January.

OpenAI lobbied for changes to significant elements of this document in order to reduce the regulatory burden on the company. In several cases, it proposed amendments that eventually made it into the final text of the law.

For example, last year OpenAI argued to EU officials that general-purpose systems should not be classified as “high-risk” under the AI Act. That designation would subject them to strict legal requirements, including transparency, traceability, and human oversight.

In September 2022, the company sent a document titled “OpenAI White Paper on the European Union’s Artificial Intelligence Act” to officials of the European Commission and Council. In it, the company argued that GPT-3 is not a high-risk system.

“But [it] possesses capabilities that can potentially be employed in high risk use cases,” the document says.

OpenAI’s lobbying efforts appear to have paid off. The final draft of the law dropped the wording from earlier drafts stating that general-purpose AI systems should automatically be considered “high-risk.”

Instead, the agreed text requires providers of so-called “foundation models,” meaning powerful AI systems trained on large amounts of data, to comply with a narrower set of requirements, including preventing the generation of illegal content, disclosing whether the system was trained on copyrighted material, and carrying out risk assessments.

As a reminder, before the European Parliament adopted the document, it was backed by key committees: the Internal Market Committee and the Civil Liberties Committee.