Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company has not yet said when or how it will release these capabilities to the public, but Meta frames Movie Gen as a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

An AI-generated video of a baby hippo swimming around, created with Meta Movie Gen.

Meta isn’t the only game in town when it comes to AI video synthesis. Google showed off a new model called “Veo” in May, and Meta says that in human preference tests, its Movie Gen outputs beat OpenAI’s Sora, Runway Gen-3, and Chinese video model Kling.

Movie Gen’s video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.

AI-generated video from Meta Movie Gen with the prompt: “A ghost in a white bedsheet faces a mirror. The ghost’s reflection can be seen in the mirror. The ghost is in a dusty attic, filled with old beams, cloth-covered furniture. The attic is reflected in the mirror. The light is cool and natural. The ghost dances in front of the mirror.”

Even so, as we’ve seen with previous AI video generators, Movie Gen’s ability to generate coherent scenes on a particular topic likely depends on the concepts found in the example videos Meta used to train its video-synthesis model. It’s also worth keeping in mind that cherry-picked results from video generators often differ dramatically from typical results, and that getting a coherent output may require lots of trial and error.
