Runway AI's Gen-3 Alpha
Runway AI reveals its new hyper-realistic AI video model, Gen-3 Alpha, capable of generating clips up to 10 seconds long. The race for high-quality AI-generated video is heating up.
AI NEWS
Yanhel AHO GLELE
6/17/2024 · 3 min read
With China's Kling AI, Luma AI's Dream Machine, and now Runway AI's Gen-3 Alpha, the race to high-quality, AI-generated video has never been so exciting.
Runway AI, a pioneer in generative AI tools for film and image content creators, has recently announced the release of Gen-3 Alpha, a groundbreaking model capable of generating high-quality video clips from text descriptions and still images. This latest innovation marks a significant improvement over Runway's previous flagship video model, Gen-2, with faster generation speeds and more precise control over the structure, style, and motion of the videos.
Gen-3 will be available in the coming days for Runway subscribers, including enterprise customers and creators in Runway’s creative partners program.
“Gen-3 Alpha excels at generating expressive human characters with a wide range of actions, gestures and emotions,” Runway wrote in a post on its blog. “It was designed to interpret a wide range of styles and cinematic terminology [and enable] imaginative transitions and precise key-framing of elements in the scene.”
No precise release date has been given beyond that, with Runway so far showing only demo videos on its website and its account on X, and it is unclear whether the model will be available through Runway’s free tier or will require a paid subscription.
For artists, by artists
Training Gen-3 Alpha was a collaborative effort from a cross-disciplinary team of research scientists, engineers, and artists. It was designed to interpret a wide range of styles and cinematic terminology.
Gen-3 Alpha, like all video-generating models, was trained on a vast number of examples of videos — and images — so it could “learn” the patterns in these examples to generate new clips. Where did the training data come from? Runway wouldn’t say. Few generative AI vendors volunteer such information these days, partly because they see training data as a competitive advantage and so keep such details close to the chest.
Prompt : A cinematic wide portrait of a man with his face lit by the glow of a TV
Limitations and Copyright Issues
Gen-3 Alpha has its limitations, including the fact that its footage maxes out at 10 seconds.
“The model can struggle with complex character and object interactions, and generations don’t always follow the laws of physics precisely,” Germanidis told TechCrunch in an interview. “This initial rollout will support 5- and 10-second high-resolution generations, with noticeably faster generation times than Gen-2. A 5-second clip takes 45 seconds to generate, and a 10-second clip takes 90 seconds to generate.”
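The generation times Germanidis quotes imply roughly linear scaling: 45 seconds of compute for a 5-second clip and 90 seconds for a 10-second clip, or about 9 seconds of waiting per second of footage. The sketch below simply restates those reported figures as a rate; the function name and structure are ours for illustration, not part of any Runway API.

```python
# Illustrative only: turns Runway's quoted Gen-3 Alpha benchmarks
# (5-second clip in 45 s, 10-second clip in 90 s) into a simple rate.
COMPUTE_SECONDS_PER_FOOTAGE_SECOND = 45 / 5  # about 9x real time

def estimated_generation_seconds(clip_seconds: float) -> float:
    """Rough wait-time estimate, assuming the reported linear scaling holds."""
    return clip_seconds * COMPUTE_SECONDS_PER_FOOTAGE_SECOND

print(estimated_generation_seconds(5))   # matches the reported 45 s
print(estimated_generation_seconds(10))  # matches the reported 90 s
```

Whether the scaling stays linear beyond the 5- and 10-second options Runway is launching with is not something the company has said.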
Like many other AI models, Gen-3 Alpha was trained on a vast range of data, and, as noted above, Runway has declined to say where that data came from.
Training data details are also a potential source of IP-related lawsuits if the vendor trained on public data, including copyrighted data from the web — and so another disincentive to reveal much. Several cases making their way through the courts challenge vendors’ fair use training data defenses, arguing that generative AI tools replicate artists’ styles without the artists’ permission and let users generate new works resembling artists’ originals for which artists receive no payment.
Prompt : View out a window of a giant strange creature walking in rundown city at night, one single street lamp dimly lighting the area.
Safeguards and Moderation
Runway said that its new model will be released with a new set of safeguards, including a moderation system to block attempts to generate videos from copyrighted images and content that violates Runway’s terms of service. Also in the works is a provenance system — compatible with the C2PA standard, which is backed by Microsoft, Adobe, OpenAI and others — to identify that videos came from Gen-3.
“Our new and improved in-house visual and text moderation system employs automatic oversight to filter out inappropriate or harmful content,” Germanidis said. “C2PA authentication verifies the provenance and authenticity of the media created with all Gen-3 models. As model capabilities and the ability to generate high-fidelity content increases, we will continue to invest significantly on our alignment and safety efforts.”