OpenAI’s new video model, Sora: A New Era in Video Begins


Out of the blue, OpenAI has announced Sora, an impressive text-to-video diffusion model that sets a new benchmark in artificial intelligence. Sora can convert simple text descriptions into realistic, minute-long videos, with some of the best results seen in the industry. The quality shown by OpenAI could further democratise video production, offering creators across the spectrum, from hobbyists to professionals, the tools to bring their visions to life with unprecedented ease and precision.

Made with OpenAI’s new Sora model

Can I use Sora?

Unfortunately, Sora is not yet available to the public. OpenAI is currently “red-teaming” the model, meaning it is being adversarially tested to make sure it produces content in line with OpenAI’s guidelines. While in principle this sounds sensible, if DALL-E is anything to go by, this incredible technology could be hampered by OpenAI’s overprotective guardrails. We can only hope that OpenAI takes a more permissive approach to what it allows Sora to produce. Otherwise, we will have to rely on the great minds at Stability AI to take SVD to the next level.

OpenAI has also made the model available to a select group of visual artists, designers, and filmmakers to get feedback on how the model can be useful to creative professionals.

Questions about the training data

Some questions have arisen regarding the origin of OpenAI’s training data, once again raising the question of whether the approaches taken in model training and development are ethical. This question becomes more challenging when you understand that an AI model doesn’t actually store the training data itself, but rather encodes connections between ideas, and is simply efficient at traversing those connections to produce the desired results.