
Adobe's Firefly AI videos will only arrive when they are 'commercially safe', amid an India push

It has been clear for some time that text-to-video will be the next major chapter for generative artificial intelligence (AI), and while most of these tools remain in restricted access, the speed at which they are becoming more realistic makes the developments fascinating. Earlier this year, OpenAI gave the world its first look at Sora, using early demos to show off generations realistic enough to be hard to identify as AI-made at first glance. The same goes for Runway's Gen-3 Alpha. Now it is Adobe's turn, confirming that its Firefly platform will add what it is calling the Firefly Video Model later this year. OpenAI hasn't released a timeline yet, but could do so in the coming weeks.

Adobe confirms that Firefly's generative video capabilities will be integrated primarily into the Premiere Pro platform. (Official image)

Adobe confirms that Firefly's generative video capabilities will be integrated into Premiere Pro first, underscoring the company's belief that generative AI is ready for professional video content and editing workflows. “Just like our other Firefly generative AI models, editors can work with confidence knowing that the Adobe Firefly video model is designed to be commercially safe and is only trained on content we have permission to use – never on content from Adobe users,” said Ashley Still, senior vice president & general manager of the Creative Product Group at Adobe.

Also read: Wired Wisdom: AI videos are inevitable, but are we ready to answer difficult questions about reality?

The future of generative video is still unclear when it comes to wider adoption of these tools and how they handle often complex prompts. The realism in the demos is striking: Sora impressed us, and the potential of Firefly's video generation seems no less. However, those demos rely on extremely specific prompts crafted to highlight the features – prompts from real users are rarely as clear and precise.

With that in mind, Adobe also touts the editing capabilities. The company believes prompts to the Firefly Video Model will help fill gaps in a video edit by generating generic footage (also called B-roll) and secondary perspectives for a video you share with Firefly. How about looking at a skyline through binoculars or a smartphone's video camera? The Firefly Video Model will be able to generate a video with that perspective.

Also read: As the world grapples with deepfakes, AI companies agree on a set of principles

“The Firefly video model allows you to use extensive camera controls such as angle, movement and zoom to create the perfect perspective for your generated video,” adds Ashley Still. There are three pillars to this – Generative Extend, Text to Video and Image to Video – all of which are intended to be relevant to the workflows of typical creatives and enterprise users.
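Adobe has not published an API for these controls, so purely as an illustration, the sketch below shows how a text-to-video request pairing a prompt with camera settings might be structured. Every field name here is an assumption for the sake of the example, not Adobe's actual interface.

```python
import json

# Hypothetical text-to-video request: none of these field names come from
# Adobe's documentation; they only illustrate how camera controls (angle,
# movement, zoom) might accompany a text prompt.
request = {
    "mode": "text_to_video",
    "prompt": "A city skyline at dusk, seen through binoculars",
    "camera": {
        "angle": "low",                 # viewpoint relative to the subject
        "movement": "slow_pan_right",   # how the virtual camera moves
        "zoom": "gradual_in",
    },
    "duration_seconds": 5,
}

print(json.dumps(request, indent=2))
```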

Adobe insists the Firefly Video Model will not be released into beta testing until later this year, when it is “commercially safe.” A key element of this is content attribution, the industry-wide push to label AI-generated content that HT has covered in detail before. Such labeling is intended to distinguish AI generations from real video or photos. For realistic video generations, as is already the case with photos and audio (and often a mix of both), it will be important to separate the real from the artificial to prevent misuse.
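To make the idea of content attribution concrete, here is a minimal sketch of the kind of provenance record such labeling attaches to a generated clip. This is not Adobe's implementation or a real Content Credentials SDK: the `make_attribution_manifest` helper and its field names are assumptions for illustration, loosely modeled on C2PA-style metadata.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_attribution_manifest(video_bytes: bytes, asset_name: str, model_name: str) -> dict:
    """Build a minimal, C2PA-style provenance record (illustrative only).

    Field names here are hypothetical; real Content Credentials are produced
    and cryptographically signed by compliant tooling, not assembled by hand.
    """
    return {
        "asset": asset_name,
        "sha256": hashlib.sha256(video_bytes).hexdigest(),  # ties the label to this exact file
        "generator": model_name,                            # which model produced the clip
        "digital_source_type": "trainedAlgorithmicMedia",   # the "AI-generated" marker
        "created": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    fake_clip = b"\x00\x01demo-bytes-standing-in-for-a-video-file"
    manifest = make_attribution_manifest(fake_clip, "skyline_broll.mp4", "example-video-model")
    print(json.dumps(manifest, indent=2))
```

In a real pipeline, a record along these lines is embedded in the media file and cryptographically signed, so viewers and platforms can verify that a clip was machine-generated.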

Read also: Exclusive | Most cutting-edge tools use home-grown AI: Cameron Adams from Canva

Another aspect is how video generation models handle the creation of human faces that may or may not resemble real, living people. These developments, which see the company confront competition from AI rivals and creative workflow platforms alike, come ahead of Adobe's annual MAX conference, taking place next month.

It is worth noting that Google will not generate human faces from any prompt in its new Gemini-powered generative tools (also available on the new Pixel 9 phones), nor will it magically edit photos that contain human faces – it won't change the perspective of the background in a photo that has my face and a friend's face in it. With an object like, say, a car, however, you can create backgrounds that make it look like you parked it in front of the New York skyline or Kensington Palace.

Read also: Fighting fire with equal means? Gen AI as a defense against AI-based cyberattacks

Adobe has also added support for eight Indian languages to its versatile editing platform, Adobe Express. This should strengthen the platform's relevance in the Indian market as competition from Canva's Magic Studio increases. “With millions of active users, Adobe Express is rapidly being adopted in India and we are excited to meet the rapidly growing content creation needs of this diverse market by introducing user interface and translation capabilities in multiple Indian languages,” said Govind Balakrishnan, senior vice president, Adobe Express and Digital Media Services.

The company confirms that Express on the web will support Hindi, Tamil and Bengali, while the translation feature will support Hindi, Bengali, Gujarati, Kannada, Malayalam, Punjabi, Tamil and Telugu. Canva added support for several Indian and global languages to its suite earlier this year, including content translation and generation, also aimed at teams and business users.