Introducing Sora Turbo: The Future of Video Generation

In the early morning of December 10th, on the third day of OpenAI’s 12-day launch event, the long-awaited video generation tool Sora finally made its official debut. Sora was first announced on February 16th, when OpenAI released dozens of demo videos showcasing an astonishing future in which anyone could create high-quality short films simply by typing a text prompt.


However, in the 300 days that followed, Sora remained stuck at the announcement stage and never launched. During that period, major tech companies including Meta, Google, and Amazon demonstrated their own video generation models, while Chinese models such as Kling and Hailuo made a splash abroad, becoming some of the most hotly discussed video generators on foreign networks. With the newly released Sora Turbo, users can generate videos up to 20 seconds long from text, images, or existing video material.


Currently, the tool is available to ChatGPT Plus and Pro users in the United States and some other markets. Once the Sora website went live, users flooded in, and the explosive demand quickly crashed the site. OpenAI CEO Sam Altman stated, “We severely underestimated the demand for Sora, and it will take some time to make it accessible to everyone.” Users who have obtained access have been sharing Sora-generated videos online, and it is evident that Sora still holds plenty of surprises.


However, some users have also reported that Sora’s grasp of physical laws is still imperfect, producing unnatural hand movements, garbled on-screen text, and animals that start out running and then take off flying. Exactly 300 days after that first preview, in the early morning of December 10th, OpenAI officially launched Sora Turbo.


Currently, the Sora website is live, and paying ChatGPT users in the United States and a number of other markets can start using Sora through the site, though it will take some time before it becomes available in most of Europe and the United Kingdom. Compared with the version announced in February, the Sora Turbo model adds features such as text-to-video, image animation, and video remixing. OpenAI stated that ChatGPT Plus subscribers can generate up to 50 videos per month, at a maximum resolution of 720p and a duration of up to 5 seconds.


With the launch of the ‘most expensive ever’ ChatGPT Pro tier last week, priced at $200 per month, subscribers can generate up to 500 videos per month, run 5 generations simultaneously, create clips up to 20 seconds long at a maximum resolution of 1080p, and download videos without watermarks.
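For a quick side-by-side view, here is a minimal sketch in Python that records the plan limits described above as plain data; the structure and field names are purely illustrative and do not correspond to any official OpenAI API.

```python
# Illustrative summary of the Sora plan limits described in this article.
# Plain data for comparison only; this is not an OpenAI API or client.
from dataclasses import dataclass

@dataclass(frozen=True)
class SoraPlanLimits:
    monthly_videos: int          # videos per month
    concurrent_generations: int  # simultaneous generations
    max_duration_s: int          # maximum clip length in seconds
    max_resolution: str
    watermark_free_download: bool

PLANS = {
    "ChatGPT Plus": SoraPlanLimits(
        monthly_videos=50,
        concurrent_generations=1,       # not stated in the article; assumed
        max_duration_s=5,
        max_resolution="720p",
        watermark_free_download=False,  # implied: only Pro downloads without watermark
    ),
    "ChatGPT Pro": SoraPlanLimits(
        monthly_videos=500,
        concurrent_generations=5,
        max_duration_s=20,
        max_resolution="1080p",
        watermark_free_download=True,
    ),
}

if __name__ == "__main__":
    for plan, limits in PLANS.items():
        print(f"{plan}: {limits}")
```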


Altman, along with Sora team leads Bill Peebles and Aditya Ramesh, hosted a roughly 20-minute live stream to introduce Sora. During the broadcast they showed Sora’s new explore page, which surfaces AI-generated videos created by users. OpenAI highlighted a feature called ‘Storyboard’, which lets users generate a video from a sequence of prompts, as well as turn photos into videos.
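As a rough illustration of the idea behind ‘Storyboard’ (a single video assembled from a sequence of timed prompts), here is a minimal sketch; both the data structure and the function below are hypothetical and do not reflect OpenAI’s actual interface.

```python
# Hypothetical sketch of a "storyboard": a timeline of prompt cards.
# Nothing here corresponds to OpenAI's real Sora interface.
from dataclasses import dataclass

@dataclass
class StoryboardCard:
    start_s: float   # when this prompt takes over on the timeline
    prompt: str      # what should happen from this point onward

def describe_storyboard(cards: list[StoryboardCard], total_s: float) -> None:
    """Print how the timeline is divided between prompt cards."""
    cards = sorted(cards, key=lambda c: c.start_s)
    for i, card in enumerate(cards):
        end_s = cards[i + 1].start_s if i + 1 < len(cards) else total_s
        print(f"[{card.start_s:5.1f}s - {end_s:5.1f}s] {card.prompt}")

if __name__ == "__main__":
    storyboard = [
        StoryboardCard(0.0, "A rosebud in a garden at dawn, extreme close-up"),
        StoryboardCard(8.0, "The rose opens in time-lapse"),
        StoryboardCard(15.0, "Full bloom, petals catching the morning light"),
    ]
    describe_storyboard(storyboard, total_s=20.0)
```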


OpenAI also demonstrated a ‘Blend’ tool that lets you adjust Sora’s output with text prompts and blend two scenes into a new one. Commenting on the launch and Sora’s unexpectedly capable editing features, well-known AI commentator Rowan Cheung wrote, ‘Christmas has come early to the AI world.’ In response to earlier safety concerns, OpenAI stated that videos generated with Sora will carry visible watermarks and C2PA metadata to indicate they were created with AI.
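Because C2PA provenance data is embedded in the media file itself, its presence can in principle be inspected after download. The snippet below is only a crude heuristic that scans a downloaded file for the ‘c2pa’ JUMBF label; it does not validate anything, a real verifier should parse and cryptographically check the manifest with a dedicated C2PA library, and nothing here reflects how Sora actually embeds its metadata.

```python
# Crude heuristic: look for the ASCII label "c2pa" inside a media file.
# This does NOT verify provenance; use a real C2PA library for validation.
from pathlib import Path

def maybe_has_c2pa_manifest(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the byte sequence b"c2pa" appears anywhere in the file."""
    needle = b"c2pa"
    tail = b""
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            if needle in tail + chunk:
                return True
            tail = chunk[-(len(needle) - 1):]  # keep bytes that may straddle a boundary
    return False

if __name__ == "__main__":
    # "downloaded_sora_clip.mp4" is a placeholder file name.
    print(maybe_has_c2pa_manifest("downloaded_sora_clip.mp4"))
```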


Before you upload images or videos to Sora, OpenAI asks you to agree to terms stating that the content you upload contains no minors, no explicit or violent content, and no copyrighted material, and it has warned that ‘abuse of media uploads’ can lead to account suspensions or bans. Sora’s product manager, Rohan Sahai, said, ‘We face immense pressure; we want to prevent illegal uses of Sora, but we also want to strike a balance with creative expression.’


Altman stated during the live stream that, for OpenAI, Sora is not just a technology but a tool that empowers creative people; within OpenAI’s cultural DNA, stimulating human creativity with AI is very important. Through Sora, OpenAI envisions a new collaborative model in which AI and humans create together. Text has long been the primary form of human-computer interaction, but the team believes it is far from sufficient: video can convey far more emotion and detail.


Additionally, Sora is not just a video generation tool for OpenAI but also a significant milestone on the path to AGI (Artificial General Intelligence). The servers were overwhelmed as many users flocked to the Sora official website, hoping to be among the first to experience this model.



As a result of the overwhelming demand, OpenAI had to temporarily shut down account creation for Sora. Altman posted on X: “We severely underestimated the demand for Sora; it will take some time to make it accessible to everyone. We are trying to figure out how to do this as quickly as possible!” OpenAI has not said how many accounts were created before the shutdown, nor when sign-ups will resume.


However, users who have gained access are sharing their creations on social media. One user generated a convincing time-lapse of a rose blooming from bud to full flower, looking as if it came straight out of a documentary. Another recreated the bustling streets of 1980s Japan. Well-known tech blogger MKBHD posted a video simulating a real news broadcast; apart from some garbled captions, it looks no different from an actual news scene.


OpenAI employee Will Depue shared a fabricated ‘historical’ video that blurs the line between reality and illusion. Still, some feedback suggests that Sora’s understanding of physical laws is not yet perfect, with unnatural hand movements, garbled text, and animals that start out running and then take off flying. In the fabricated historical video mentioned above, for instance, a careful look reveals a cavalryman riding his horse backward.


Sora vs. Competitors

Netizens have compared Sora with the previously popular Hailuo model, using the same prompt to generate a post-apocalyptic robot video. Some have gone further and compared several of the most talked-about video generation models head to head (Kling, Sora, Runway, Hailuo), concluding that, from a filmmaking perspective, Kling’s output is the most practical; Sora’s output looks the best (but if the camera angle is wrong, everything is wrong); Hailuo is good in some situations but feels weak and inconsistent; and Runway has the best workflow but is hard to control.


AI video generation is an inherently iterative process, so ‘fast and serviceable’ is often more useful than ‘slow, beautiful, but wrong.’ No wonder some netizens commented, “Everyone is excited about OpenAI’s Sora, but for now, for video production, Hailuo and Kling are still the most suitable for me.”



With the debut of Sora, competition in the field of large-scale video generation models will undoubtedly become more intense. Just last week, Tencent released and open-sourced its Hunyuan video generation model, which at 13 billion parameters is the largest open-source video model to date.

