
Adobe: Video generation is coming to Firefly this year

Users will get their first chance to try out Adobe's AI video generation model in just a few months. The company says features based on its Firefly video model will be available in the Premiere Pro beta app and on a free website before the end of 2024.

According to Adobe, three features – Generative Extend, Text to Video and Image to Video – are currently in private beta but will be publicly available soon.

Generative Extend, which lets you extend any input video by two seconds, will be integrated into Premiere Pro's beta app later this year. Firefly's Text to Video and Image to Video models, which create five-second videos from prompts or input images, will be available on Firefly's dedicated website in the same timeframe. (Adobe noted that the five-second cap may increase.)

Adobe's software has been popular with creatives for decades, but generative AI tools like these could upend the very industry the company serves, for better or for worse. Firefly is Adobe's answer to the recent wave of generative AI models, including OpenAI's Sora and Runway's Gen-3 Alpha. These models have captivated audiences by producing clips in minutes that would take a human hours to create, but the early tools are generally considered too unpredictable for professional use.

Controllability is where Adobe believes it can set itself apart. Ely Greenfield, CTO of digital media at Adobe, tells TechCrunch there's a “huge appetite” for Firefly's AI tools if they can complement or speed up existing workflows.

For example, Greenfield says Firefly's generative fill feature, added to Adobe Photoshop last year, “is one of the most widely used features we've introduced in the last decade.”

Adobe declined to disclose the price of these AI video features. For other Firefly tools, Adobe allocates Creative Cloud customers a certain number of “generative credits,” with one credit typically yielding one generation result. More expensive plans, of course, offer more credits.

In a demo with TechCrunch, Greenfield showcased the Firefly-based features coming later this year.

Generative Extend can pick up where the original video ends, adding two extra seconds of footage relatively seamlessly. The feature takes the last few frames of a scene and runs them through Firefly's video model to predict the next few seconds. For the scene's audio, Generative Extend reproduces background noises like traffic or nature sounds, but not people's voices or music. Greenfield says this is to comply with music industry licensing requirements.

In one example, Greenfield showed a video clip of an astronaut looking out into space, modified with the feature. I could spot the moment the extension began, just after an unusual lens flare appeared on screen, but the tracking shot and the objects in the scene stayed consistent. I could imagine it being useful when a scene ends a moment too soon and you need to stretch it out just a little longer for a transition or fade.

More familiar are Firefly's Text to Video and Image to Video features, which turn a text prompt or an input image into up to five seconds of video. Users will be able to access these AI video generators at firefly.adobe.com, likely with rate limits (though Adobe didn't specify).

Adobe also says Firefly's Text to Video feature does a pretty good job of spelling words correctly, something AI video models often struggle with.

When it comes to safety, Adobe has been cautious from the start. Greenfield said Firefly's video models are blocked from generating videos containing nudity, drugs and alcohol, and he added that Adobe's video generation models are not trained on public figures such as politicians and celebrities. The same certainly can't be said of some of its competitors.