OpenAI has introduced a new video generation model, Sora 2, with realistic physics and synchronized sound
OpenAI has announced the launch of Sora 2, an updated artificial intelligence model for video and audio generation that creates even more realistic video clips.
Unlike the previous version, the new system generates longer videos with smooth transitions between scenes and maintains realistic motion physics. Sora 2 eliminates problems with object deformation and unnatural movement, allowing it to reproduce complex actions, including sports tricks, in accordance with the laws of physics.
The model also supports precise synchronization of sound with the video. Dialogue, background noise and sound effects match the visuals, making scenes more realistic. In addition, Sora 2 can integrate real people or other characters into the generated environment: it is enough to record a short video and audio clip of the desired subject, after which they can be inserted into any scene as a cameo, preserving their natural appearance and voice.
Along with the model, the company introduced a mobile application, Sora, for iOS. It allows users to create, edit and share videos. The application includes content moderation tools, restrictions for teenagers and safeguards against violent material.
Sora is currently available only in the US and Canada, and an invitation from the company is required to use it. Once granted access, a user can invite up to four other people. There is no word yet on whether the app will come to Android.
Sora 2 will initially be available for free, with generous usage limits at the start, so that users can freely explore its capabilities. In addition, ChatGPT Pro subscribers will be able to use an experimental, higher-quality Sora 2 Pro model on sora.com (and later in the Sora app). The company also plans to make Sora 2 available through its API.