The EU has agreed on the terms of the Digital Services Act, a law that will force technology companies to take more responsibility for content hosted on their platforms. Under the new requirements, they will need to remove illegal content and goods faster, explain to users and researchers how their algorithms work, and act more decisively against the spread of misinformation. Failure to comply may result in fines of up to 6% of a company’s annual turnover.
“The act will update the basic rules for all online services in the EU. This gives practical effect to the principle that what is illegal offline should be illegal online. The greater the size, the greater the responsibilities of online platforms,” said Ursula von der Leyen, President of the European Commission.
The final text of the law has not yet been published, but it is already known that it will include the following requirements:
- A ban on targeted advertising based on religion, sexual orientation or ethnicity. Targeting ads at minors is also prohibited.
- A ban on “dark patterns” – confusing or deceptive user interfaces that nudge people toward a particular choice. The EU says cancelling a subscription should be as easy as signing up.
- Large online platforms must make the workings of their recommendation algorithms transparent to users – for example, explaining how Facebook ranks posts in the news feed or how Netflix suggests TV shows. Users must also be offered an option that does not rely on profiling; in the case of Instagram, this could mean a chronological timeline.
- Hosting services and online platforms will need to clearly explain why they removed illegal content and give users an opportunity to challenge the removal. The law does not specify which content is illegal, leaving that definition to individual member states.
- The largest online platforms will need to provide key data to researchers so that they can “have a better understanding of how online risks develop.”
- Marketplaces will need to store basic information about sellers on their platforms so that people selling illegal goods or services can be traced.
- Large platforms will also have to implement new strategies for combating disinformation during crises. This provision was added in particular because of Russia’s invasion of Ukraine.
Although the law applies only to users in the EU, its effects will be felt in other countries. Global technology companies may decide it is more cost-effective to run a single content-moderation strategy worldwide and to adopt the relatively strict EU rules as their baseline.
The law is intended to make platforms accountable for the risks their services may pose to society and citizens. Most of the burden falls on large companies: those with more than 45 million users – such as Google or Meta – will face the closest scrutiny. These technology companies lobbied hard to soften the law, especially its provisions on targeted advertising and algorithms.
The general terms of the law have already been agreed. The act has yet to be formally voted on, but that step is considered a formality. The rules will enter into force 15 months after the vote or on 1 January 2024, whichever is later.