OpenAI has introduced Sora, a generative AI model that could transform multimedia content creation.
Sora converts text prompts into video, marking a shift in how media is conceived, produced, and consumed.
The unveiling showcases rapid advances in artificial intelligence and opens new creative and storytelling possibilities, a milestone in the effort to harness AI to augment human creativity and reshape digital media production.
“We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” said OpenAI in a blog post.
The company added that the text-to-video model can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
Today, Sora is becoming available to red teamers to assess critical areas for harms or risks.
Video Credit: OpenAI
“We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals.
“We’re sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon.”
The company said Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.
“The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style,” said OpenAI.
Video Credit: OpenAI
Safety
“We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model.”
OpenAI said it was also building tools to help detect misleading content, such as a detection classifier that can identify when a video was generated by Sora.
“We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”