Adobe first launched Adobe Firefly in March 2023 and has since delivered rapid innovation with new models in imaging, design and vectors. These Firefly models have quickly grown to power some of the most popular features across Creative Cloud and Adobe Express, like Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Shape Fill in Illustrator and Text-to-Template in Express. Along the way, Adobe has received incredible feedback from the creative community and enterprise customers alike. In total, the community has generated over 12 billion images and vectors, making Firefly and the features it powers some of the fastest adopted by Adobe's community.
We all know that video is the currency of engagement today — and we’re excited to share a peek at the upcoming Firefly Video Model and some of the revolutionary professional workflows it’ll power in Adobe’s industry-leading video tools like Premiere Pro, available starting in beta later this year.
Over the past several months, Adobe has worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, they’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.
Just like Adobe’s other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content Adobe has permission to use — never on Adobe users’ content.
We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.
A new era of video editing
The ever-increasing demand for fresh, short-form video content means editors, filmmakers and content creators are being asked to do more in less time. Today, editors not only cut picture but are also tasked with colour correction, titling, visual effects, animation, audio mixing and more. Adobe is leveraging the power of AI to help editors expand their creative toolset so they can work across these disciplines, delivering high-quality results on the timelines their clients require.
Common editorial tasks, like navigating gaps in footage, removing unwanted objects from a scene, smoothing jump-cut transitions and searching for the perfect b-roll, take time. Handled well, they can make the difference between a compelling, emotional narrative and one that distracts from the story you're trying to tell.
Sometimes, sharing creative intent with your team and with the stakeholders who green-light and fund the work can also be a challenge, requiring many rounds of communication. Adobe provides tools such as Frame.io to streamline teamwork through a uniquely integrated review and approval process. And now, Adobe is delivering AI tools that take the tedium out of post-production, giving editors more time to explore new creative ideas, the part of the job they love, while setting them up for successful collaboration with the larger team.
Not only does Adobe facilitate streamlined work processes, but now, with Generative AI, they’ve made it even easier and faster for editors and motion designers to create their best work in record time. Watch below to see how Adobe is taking video editing to new heights using Firefly.
With Firefly Text-to-Video, you can use text prompts, a wide variety of camera controls, and reference images to generate B-Roll that seamlessly fills gaps in your timeline.
The Firefly Video Model excels at generating videos of the natural world. When production misses a key establishing shot needed to set the scene, generate an insert with camera motion, like landscapes, plants or animals.
Need more complementary shots? Fill the gap in your timeline with a generated clip based on a reference frame. The more detailed your prompt, the more the model has to work with when generating inspirational imagery and b-roll.
The Firefly Video Model supports a broad variety of use cases including creating atmospheric elements like fire, smoke, dust particles and water against a black or green background that can then be layered over existing content using blend modes or keying inside Adobe’s tools like Premiere Pro and After Effects. And coming later this year to Premiere Pro (beta), Generative Extend allows you to extend clips to cover gaps in footage, smooth out transitions, or hold on shots longer for perfectly timed edits.
Adobe is excited by all the recent advancements in the Adobe Firefly Video Model and looks forward to continuing to partner with the community to build generative AI into the Adobe tools and workflows you rely on. Contact the team at Dax Data for more information.