OpenAI presents Sora, its text-to-video generator

Sora can create videos of detailed scenes up to 60 seconds long.

Artificial intelligence is progressing by leaps and bounds, and OpenAI has now presented Sora, its text-to-video model capable of generating detailed scenes.

As OpenAI details, “Sora can create videos up to 60 seconds long showing highly detailed scenes, complex camera movements, and multiple characters with vibrant emotions.”

According to the company, the model is able to understand not only what the user asked for in the text, but also how those things exist in the physical world.

In this way, Sora works through the same mechanism as text-to-image generators: the user provides an initial instruction, called a prompt, and the artificial intelligence then generates the output from it.
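To make that prompt-driven workflow concrete, here is a minimal, purely illustrative sketch of what submitting a text prompt to a video-generation service could look like. Sora’s API was not publicly available at the time of the announcement, so the endpoint URL, the `create_video` helper, and all parameter names below are hypothetical assumptions, not OpenAI’s actual interface.

```python
import os
import requests

# Hypothetical endpoint and schema: Sora's real API was not public at the time
# of this announcement, so everything below is illustrative only.
API_URL = "https://example.com/v1/video/generations"  # placeholder, not a real OpenAI URL
API_KEY = os.environ.get("VIDEO_API_KEY", "")


def create_video(prompt: str, duration_seconds: int = 60) -> dict:
    """Send a text prompt and request a video of up to 60 seconds (assumed schema)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "duration_seconds": duration_seconds},
        timeout=120,
    )
    response.raise_for_status()
    # Assumed to return metadata such as an ID or URL for the generated video.
    return response.json()


if __name__ == "__main__":
    result = create_video(
        "A street market in Tokyo at night, neon signs reflecting on wet pavement"
    )
    print(result)
```

The point of the sketch is simply that, as with text-to-image tools, the only user-facing input is the prompt; everything else happens inside the model.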

In the case of Sora’s videos, OpenAI presented several examples, which it says have not been edited, showing everything from the streets of Japan to mammoths running through the snow.

OpenAI notes that the model still has weaknesses: it can struggle to simulate the physics of a complex scene and may not understand certain cases of cause and effect. For example, in a video of someone biting a cookie, the cookie might afterward show no bite marks.

Before making Sora available, the company says it is taking a series of important safety measures: “We are working with red teamers – experts in areas such as misinformation, hateful content, and bias – who are testing the model.”

Source: Latercera

