Latest from OpenAI, Anthropic, Google, Meta & more
15 stories
The race to build artificial intelligence that can generate photorealistic video has reached a critical inflection point, with four competing systems now capable of producing footage so convincing that distinguishing the real from the synthetic requires frame-by-frame analysis. Yet despite billions in investment and months of hype, each model excels at radically different tasks—suggesting that rather than one clear winner emerging, the future of AI video may belong to whoever can master the unglamorous work of integration. The findings reveal a harder truth than Silicon Valley prefers to admit: raw capability matters far less than knowing which tool solves which problem.
How OpenAI built a real-time access system combining rate limits, usage tracking, and credits to power continuous access to Sora and Codex.
OpenAI shipped Sora for Android in 28 days using Codex. AI-assisted planning, translation, and parallel coding workflows helped a nimble team deliver rapid, reliable development.
Our latest video generation model is more physically accurate, realistic, and controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora...
Filmmaker Lyndon Barrois describes how to use Sora as a storytelling tool.
Filmmaking duo Vallée Duhamel explains how Sora helps build new worlds.
Discover the Sora feed philosophy—built to spark creativity, foster connections, and keep experiences safe with personalized recommendations, parental controls, and strong guardrails.
Disney and OpenAI have reached an agreement to bring more than 200 Disney, Marvel, Pixar and Star Wars characters to Sora for fan-inspired short videos. The agreement emphasizes responsible AI in...
To address the novel safety challenges posed by a state-of-the-art video model as well as a new social creation platform, we’ve built Sora 2 and the Sora app with safety at the foundation. Our...
Interdisciplinary artist Minne Atairu discusses how Sora helps realize her vision.
Since we introduced Sora to the world last month, we’ve been working with artists to learn how Sora might aid in their creative process.
Discover how Higgsfield gives creators cinematic, social-first video output from simple inputs using OpenAI GPT-4.1, GPT-5, and Sora 2.
Sora 2 is our new state-of-the-art video and audio generation model. Building on the foundation of Sora, this new model introduces capabilities that have been difficult for prior video models to...
Our video generation model, Sora, is now available to use at sora.com. Users can generate videos at up to 1080p resolution, up to 20 seconds long, and in widescreen, vertical, or square aspect ratios. You...
Sora is OpenAI’s video generation model, designed to take text, image, and video inputs and generate a new video as an output. Sora builds on learnings from DALL-E and GPT models, and is designed to...