YouTube will require creators to label realistic-looking content made with artificial intelligence. The policy update will take effect next year, Bloomberg reports.

The new rules are expected to apply to content created with AI tools that realistically depicts events that never happened, including videos of people saying or doing things they never actually said or did.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” YouTube explained.

Soon, creators will have to apply such labels to this content, and the labels will be more prominent on videos covering sensitive topics. If creators fail to label AI-generated content, YouTube may penalize them: for example, their content may be removed or their monetization suspended.

YouTube will work with creators before the policy takes effect to make sure they understand the new requirements, and the platform will also develop its own tools to detect violations.

In addition, the company will later let users request the removal of AI-generated content that imitates an identifiable person. This will also apply to the music industry, covering videos that contain synthetic versions of performers' voices.

It was also recently reported that YouTube is testing a new "Play something" button in its mobile app, which takes users to a random video.