Thursday 9 May 2024

What Are the Superpowers in AI Post Software?

NAB

After all the noise surrounding the creation of videos using generative AI, it could be that efficiency is AI’s greatest asset for post-production. Sixty-three percent of “creative-class workers” polled by Variety say GenAI tools allow them to do things they were already doing more efficiently. Post-production tech developers, from Avid and Adobe to startup Strada, introduced workflow improvement tools at the 2024 NAB Show.

article here

Avid’s new digital assistant, for instance, aims to support creative workflows by making them more efficient and taking care of redundant tasks so users can focus on content creation and delivery.

Avid Ada uses AI technologies to automate speech-to-text transcription, summarization, and language translation, streamlining news production workflows and enhancing efficiency by automating time-consuming tasks, said the company.
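To make the idea concrete, the kind of transcribe-then-summarize step Ada automates could look something like the sketch below. This is not Avid’s implementation: it uses the open-source Whisper model and a Hugging Face summarization pipeline, and the model names and audio path are placeholders for illustration.

```python
# Not Avid Ada: an illustrative transcribe-then-summarize step built from
# open-source tools (Whisper for speech-to-text, a Hugging Face pipeline for
# summarization). Model names and the audio file path are placeholders.
import whisper
from transformers import pipeline

def transcribe_and_summarize(audio_path: str) -> dict:
    # Speech-to-text: Whisper returns the full transcript plus timed segments.
    asr = whisper.load_model("base")
    transcript = asr.transcribe(audio_path)["text"]

    # Summarize the transcript for log notes or a news rundown.
    # (Very long transcripts would need to be chunked before summarization.)
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    summary = summarizer(transcript, max_length=120, min_length=30)[0]["summary_text"]

    return {"transcript": transcript, "summary": summary}

if __name__ == "__main__":
    print(transcribe_and_summarize("interview_clip.wav")["summary"])
```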

Blackmagic Design has also added more AI firepower to DaVinci Resolve. Version 19 of the editing and finishing system features the IntelliTrack AI point tracker for object tracking, stabilization and audio panning, and UltraNR, which uses AI to reduce digital noise in a frame. Processing these effects can be accelerated by up to three times using NVIDIA boards.

Blackmagic shares further details, explaining that IntelliTrack AI (which it says is powered by the DaVinci Neural Engine) can also be used in Fairlight to automatically generate precision audio panning by tracking people or objects as they move across 2D and 3D spaces. An AI-based dialogue separator FX can rebalance dialogue against background sound and the reverberant sound of the room – “perfect for field recordings and interviews in busy locations.”
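The mechanism is easier to picture with a simplified sketch: track a point’s screen position frame by frame and map it to a stereo pan value. This is not the DaVinci Neural Engine; the sketch below uses classical Lucas-Kanade optical flow in OpenCV, and the function name, video path and pan mapping are illustrative assumptions.

```python
# Not Blackmagic's implementation: a stand-in that tracks a single point with
# classical optical flow (OpenCV) and maps its horizontal screen position to a
# stereo pan value, the same idea IntelliTrack applies to audio panning.
import cv2
import numpy as np

def track_point_to_pan(video_path: str, start_point: tuple) -> list:
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"Cannot read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    width = prev.shape[1]
    point = np.array([[start_point]], dtype=np.float32)  # shape (1, 1, 2)

    pans = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track the point from the previous frame into the current one.
        point, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
        if status[0][0] == 0:
            break  # point lost; a real tool would re-detect and keep going
        x = float(point[0][0][0])
        # Map screen x (0..width) to pan (-1.0 = hard left, +1.0 = hard right).
        pans.append((x / width) * 2.0 - 1.0)
        prev_gray = gray
    cap.release()
    return pans
```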

Of the leading NLE developers, though, it is Adobe that has gone furthest in incorporating AI into its creative suite. It announced a number of tools coming to Premiere Pro this year enabling users to streamline editing, including adding or removing objects or extending a clip.

These new editing workflows will be powered by a new video model that will join the range of Adobe’s existing AI models, branded Firefly, which include image, vector, design and text effects.

“Editors want to use generative AI to reduce workflow tedium, streamline tasks, and increase creative control, with three important requirements,” Adobe said.

“AI must be integrated into their workflows, and the tools they use every day like Premiere Pro,” the company added.

“The real value is in editing existing content faster with Generative AI — adding objects, removing distractions, or extending clips seamlessly on the timeline.”

Adobe also announced upcoming general availability of AI-powered audio workflows in Premiere Pro, including new fade handles, clip badges, dynamic waveforms, AI-based category tagging and more.

The real eye-opener is Adobe’s policy regarding AI tools it does not develop: this includes adding the ability for users to potentially work with third-party models such as Sora from OpenAI or generative video models from Runway, directly in Adobe applications.

It would mean that users could turn to Adobe’s new Firefly generator or third-party tools to generate B-roll for their project. At NAB, Adobe demonstrated how Pika Labs could be used with Adobe’s Generative Extend tool to add a few seconds to the end of a shot.

Using technology in this way to generate footage that was never actually filmed can be seen as an extension of longstanding VFX techniques, but the ramifications go beyond this.

It opens the potential for much more than just B-roll to be auto-created. Greater use of the technology would surely require that producers have cast-iron legal advice that the AI tool used wasn’t contravening copyright (or ethical boundaries). Documentary teams should be wary of introducing deepfaked footage without flagging it to the audience.

Adobe acknowledges this and pledges to attach Content Credentials to assets produced within its applications — crucially including third-party models like Sora — so users can see how content was made and which AI models were used to generate it on Adobe platforms.

“Content Credentials will be supported inside Premiere Pro, to help create a chain of trust from creation to editing to publication,” Adobe underscores.
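For a rough sense of what such a chain of trust records, here is a deliberately simplified provenance sketch. The field names and schema below are invented for illustration and are not the real C2PA/Content Credentials format, which is cryptographically signed and embedded by the application rather than written by hand.

```python
# Illustrative only: a heavily simplified provenance record in the spirit of
# Content Credentials. The field names are assumptions, not the real C2PA
# manifest schema, and genuine credentials are cryptographically signed.
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(asset_path: str, tool: str, ai_models: list) -> str:
    with open(asset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "asset_sha256": digest,           # binds the record to this exact file
        "generated_with": tool,           # the editing application used
        "ai_models_used": ai_models,      # which generative models touched the asset
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    # File name, tool and model names are placeholders.
    print(build_provenance_record("broll_extended.mp4", "example NLE", ["example-video-model"]))
```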

While much of the early conversation about generative AI has focused on a competition among companies to produce the ‘best’ AI model, Adobe says it sees a future in which thousands of specialized models emerge, each strong in its own niche.

“Adobe’s decades of experience with AI shows that AI-generated content is most useful when it’s a natural part of what you do every day. For most Adobe customers, generative AI is just a starting point and source of inspiration to explore creative directions.”

Also coming to Premiere Pro is Magnifi, an app that uses AI and machine learning to detect key moments from live or recorded sports, news, and entertainment content. The aim of the new extension for Premiere Pro is to enable users to build their projects faster “without hindering their existing video editing workflows by eliminating the need for switching between the different platforms.”

Also at NAB Show 2024, Strada co-founder and CEO Michael Cioni highlighted the transformative potential of “utility AI” in automating mundane production tasks rather than replacing creative processes in the film and television industry.

“Editing’s backroom tasks, like transferring files or syncing audio, will vanish,” he argues. “This may lead to job losses, but it will enhance creative work by eliminating delays. Strada automates tasks by employing AI engines. It offers features like automatic sound syncing, image detection, and translation for global content.”
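Automatic sound syncing of the kind Cioni describes is a good example of a backroom task that lends itself to automation. The sketch below is not Strada’s engine; it shows one classical approach, cross-correlating the camera’s scratch track against the separately recorded field audio to find the offset, with the function name and inputs assumed for illustration.

```python
# Not Strada's engine: a classical approach to automatic sound syncing that
# finds the offset between a camera scratch track and a field recording by
# cross-correlation. Inputs are mono sample arrays at the same sample rate.
import numpy as np
from scipy.signal import correlate

def find_sync_offset(camera_audio: np.ndarray, field_audio: np.ndarray, sample_rate: int) -> float:
    # Normalize both tracks so loudness differences don't skew the correlation.
    a = (camera_audio - camera_audio.mean()) / (camera_audio.std() + 1e-9)
    b = (field_audio - field_audio.mean()) / (field_audio.std() + 1e-9)
    # The peak of the cross-correlation marks the lag where the tracks line up.
    corr = correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)
    return lag / sample_rate  # offset in seconds to shift the field audio
```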

While Strada is aimed at the tasks that can be automated to speed the creative process, its model still relies on the generations-old production methodology of sending raw camera recordings into post for edit and grade.

“AI will never replace genuine human creativity,” Cioni insisted. “Humans value authenticity over synthetic creations. AI-generated content lacks the ephemeral nature of real experiences, making it less treasured. Synthetic AI is a valuable tool, but it won’t replace human creativity.”

LucidLink, which has a cloud storage product that hooks into each of these NLE applications, says the use of AI in M&E doesn’t need to be controversial: “It’s already being used in many applications, and holds exciting possibilities to streamline workflows, boost creativity, and inspire new types of art.”

It too believes that AI is particularly good at speeding up some of the more tedious and time-consuming aspects of content creation, including color correction, audio syncing, metadata tagging and subtitling.

“For all its disruptive capability, AI’s powers are legitimately transformative, and the creative teams who wield it will not only have a competitive advantage but a creative one,” it states.

LucidLink also looks forward to future use cases for AI, including generating photoreal worlds for VR (at present a highly resource-intensive task) and a vision for AI as a collaborative creative partner.

“AI could be a dream writing partner to bounce jokes and wild ideas off of. For technical workers, this could be an AI co-editor who could take notes on the rhythm of a scene in progress.”
