OpenAI, meet Movie Gen by Meta: it can generate video with sound!
It's on! Meta announces that it has developed a new AI model called Movie Gen, capable of creating realistic video and audio clips based on user prompts. Meta claims that Movie Gen can compete with media generation tools from companies like OpenAI and ElevenLabs.
Meta shared examples of Movie Gen's output, including videos of animals swimming and surfing, as well as clips that used real photos of people, showing them engaged in activities like painting. In addition to video generation, Movie Gen can create background music and sound effects synced with the video content, and it can be used to edit existing videos.
In one example, Meta demonstrated how Movie Gen added pom-poms to a man running in the desert, and in another, it transformed a dry parking lot into a puddle-filled one as a man skateboarded, Reuters reports.
The videos generated by Movie Gen can last up to 16 seconds, and audio clips can be up to 45 seconds. Meta also shared data from blind tests suggesting the tool's performance is competitive with offerings from Runway, OpenAI, ElevenLabs, and Kling.
This announcement arrives as Hollywood grapples with the implications of generative AI in filmmaking, following Microsoft-backed OpenAI's debut of its Sora model in February, which can create cinematic-quality videos from text prompts.
While some in the entertainment industry see this technology as a way to streamline production, others express concerns over the potential use of AI trained on copyrighted material without permission.
Lawmakers have also raised alarms about AI-generated content, such as deepfakes, being misused in elections worldwide, including in the US, Pakistan, India, and Indonesia.
Meta spokespersons noted that Movie Gen would not be released for open developer access like the company's Llama language models, due to the varying risks of each AI model. Instead, Meta is collaborating directly with content creators and plans to integrate Movie Gen into its own products by next year.
According to Meta's blog post and research paper, the development of Movie Gen involved both licensed and publicly available datasets. Meanwhile, OpenAI has been in discussions with Hollywood executives about potential collaborations involving its Sora tool, though no formal deals have been reported.
Concerns about AI's use in entertainment escalated earlier this year when actress Scarlett Johansson accused OpenAI of imitating her voice without consent for a chatbot.
In a separate development, Lions Gate Entertainment, the studio behind "The Hunger Games" and "Twilight," partnered with AI startup Runway, granting access to its film and TV library to train an AI model that the studio and its creators can use to enhance their projects.
Personally, I think more AI models means more fun, but I'd really hate to see art and creativity die. On the other hand, recent films – even those made shortly before the AI boom – are so obnoxious that I'm wondering whether AI-made stuff might actually be an improvement…