IBC
Agentic AI promises to increase supply chain efficiencies and improve content personalisation as consumers begin to converse with avatars and their devices.
The industry will soon be awash with agentic AI. An evolution of GenAI, agentic systems have greater autonomy to actively make decisions, execute tasks and even learn.
It’s a leap that Nvidia CEO Jensen Huang has heralded as ‘the age of agentive AI’. The chip maker believes AI agents will drive a multi-trillion-dollar industry by transforming how people work. That prediction dwarfs earlier research forecasting a global $70bn-plus market for autonomous agents by 2030.
“There’s no doubt that 2025 will be the year for agentic adoption,” declared Jiani Zhang, Chief Software Officer at Capgemini Engineering, citing the 52% of organisations that intend to employ AI agents in their workflows this year.
“Agentic AI is the next step from generic AI because it helps users benefit from LLMs in a more scalable and more standardised way,” says Jonas Michaelis, CEO at qibb, which claims the first AI-powered conversational assistant for media workflows.
Far from being just another buzzword, “agentic AI represents a fundamental progression in technology that allows humans to be infinitely more efficient,” reckons Genies, an LA-based developer of AI-powered avatars valued at $1 billion.
‘AI Assistance Agents in Live Production’ are even being proposed by ITN and the BBC, in partnership with Cuez and Google in an IBC Accelerator to be unwrapped at IBC Show later this year.
Agentic AI systems are already being introduced into the media industry, taking root first among the processes and procedures that rely most heavily on ingesting vast amounts of data.
Supply chain workflows
Media Asset Management is one potential area for agentic transformation. According to Nvidia, video analytics AI agents can analyse large amounts of live or archived video, accept task requests in natural language, and perform complex operations like video search, summarisation and visual question-answering.
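Nvidia has not published the internals of these agents, but the pattern it describes, a natural-language request routed to video search, summarisation or question-answering tools, can be sketched as a simple dispatcher. The function names and keyword routing below are purely illustrative stand-ins, not Nvidia's implementation.

```python
# Illustrative sketch of a video-analytics agent that routes a natural-language
# request to one of several video "tools". The tool functions are stubs; a real
# system would call a vision-language model and a video index instead.

def search_archive(query: str) -> str:
    return f"[stub] clips matching '{query}'"

def summarise_video(video_id: str) -> str:
    return f"[stub] summary of {video_id}"

def answer_question(video_id: str, question: str) -> str:
    return f"[stub] answer about {video_id}: {question}"

def route_request(request: str, video_id: str = "archive/0001") -> str:
    """Very naive intent routing; a production agent would let an LLM pick the tool."""
    text = request.lower()
    if "summar" in text:
        return summarise_video(video_id)
    if text.endswith("?"):
        return answer_question(video_id, request)
    return search_archive(request)

if __name__ == "__main__":
    for req in ["Find all goal celebrations from last night",
                "Summarise the first half",
                "Who scored the opening goal?"]:
        print(req, "->", route_request(req))
```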
“Imagine a world where we can schedule, locate, prepare, tag, package and deliver assets to select platforms and destinations—all without having to wrangle multiple teams, several emails, overlapping technologies and hours spent coordinating,” says Zeenal Thakare, SVP of Enterprise Solutions Architecture at Ateliere Creative Technologies in a blog titled ‘Reworking the Playbook: How Agentic AI Can Revolutionize the Media Supply Chain’. “That's what an Agentic experience could look like for the end-to-end media supply chain.”
Cuez brought AI-powered automation and a cloud-based rundown to the gallery in a Future of the Control Room project, which won an award at IBC 2024. Its platform makes live broadcasts more efficient with real-time scripting, no-code automation and smart studio voice control.
Content localisation is where Michaelis sees the biggest initial benefit of agentic AI. “It’s the ability to take one version created in a CMS and automate the creation of another version formatted for different lengths, platforms and audience preferences.”
“The AI Copilot also helps technology teams to build new integrations and automations, and helps identify why something has failed and where the bug is,” he adds.
Currently in beta, the AI Copilot from German media integrator qibb is aimed at seasoned engineers and non-technical users alike, helping them build, optimise, document and troubleshoot workflows. It is specifically designed for media automation, trained in-house on qibb’s own data, and integrates with more than a hundred media services such as iconik, Mimir and OpenAI.
“All the integrations that we've built, all the connectors, and templates for media workflows, all of our public documentation goes into Copilot to assist technology teams to build, maintain and operate their own systems,” explains Michaelis.
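qibb has not detailed Copilot's architecture, but feeding connectors, templates and documentation into an assistant is the familiar retrieval-augmented pattern. The sketch below is a generic illustration of that pattern, not qibb's code; the document snippets, keyword scoring and stub model are all invented.

```python
# Generic retrieval-augmented assistant sketch: documentation snippets are scored
# against a user question and the best matches are packed into an LLM prompt.
# The call_llm() stub stands in for whichever hosted model the assistant uses.

DOCS = [
    ("connector: MAM ingest", "How to configure the ingest connector, including watch folders."),
    ("template: transcode workflow", "A workflow template that transcodes mezzanine files to delivery formats."),
    ("troubleshooting: failed jobs", "Failed jobs usually log an error code; check credentials and storage paths."),
]

def score(question: str, text: str) -> int:
    """Crude keyword overlap; a real system would use embeddings."""
    q_words = set(question.lower().split())
    return len(q_words & set(text.lower().split()))

def build_prompt(question: str, top_k: int = 2) -> str:
    ranked = sorted(DOCS, key=lambda d: score(question, d[0] + " " + d[1]), reverse=True)
    context = "\n".join(f"- {title}: {body}" for title, body in ranked[:top_k])
    return f"Use the documentation below to answer.\n{context}\n\nQuestion: {question}"

def call_llm(prompt: str) -> str:
    return "[stub LLM answer based on retrieved documentation]"

if __name__ == "__main__":
    question = "Why did my transcode job fail?"
    print(build_prompt(question))
    print(call_llm(build_prompt(question)))
```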
Startup Highfield AI claims its new agentic AI solution can improve professional broadcast graphics workflow efficiency by up to 75%. It does this by automating repetitive tasks such as graphics production in the newsroom.
According to founder and CEO Amir Hochfeld, the system analyses the stories as written by journalists in their NRCS, such as CGI OpenMedia or Avid iNEWS. It then deploys a set of AI agents that automate tasks usually done by operators: selecting the most suitable graphics templates created with systems such as Vizrt and filling them with relevant content, including text, images and video clips pulled from broadcasters' content repositories.
“While the system operates autonomously, journalists maintain complete control over the final product, safeguarding editorial integrity and quality,” Hochfeld says. “The result is a significant return on existing investments and a faster way to achieve high-quality productions.”
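Highfield has not released implementation details, but the workflow Hochfeld describes (read a story from the rundown, pick a template, fill its fields, leave approval to a human) can be sketched roughly as below. The template names, fields and selection logic are invented for illustration; a real system would put an AI model behind each step.

```python
# Illustrative pipeline in the spirit of the workflow described above: take a
# story from the newsroom system, choose a graphics template and fill its slots.
# Template names, fields and the pick/extract logic are all invented for the sketch.
from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    text: str

TEMPLATES = {
    "lower_third": ["name", "title"],
    "quote_card": ["quote", "attribution"],
}

def pick_template(story: Story) -> str:
    """Stand-in for an AI agent's template choice."""
    return "quote_card" if '"' in story.text else "lower_third"

def fill_template(template: str, story: Story) -> dict:
    """Stand-in for an agent extracting field values from the story text."""
    return {field: f"<{field} drawn from '{story.slug}'>" for field in TEMPLATES[template]}

def propose_graphic(story: Story) -> dict:
    template = pick_template(story)
    payload = fill_template(template, story)
    # A human still reviews and approves before anything goes to air.
    return {"template": template, "fields": payload, "status": "awaiting approval"}

if __name__ == "__main__":
    story = Story("budget-reaction", 'The minister said "we will not raise taxes".')
    print(propose_graphic(story))
```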
Transforming personalisation
An agentic AI system can learn a user’s habits, adapting over time, with that data able to be fed into personalised content recommendations or advertising.
“The more AI understands who you are and connects with other services, the more it can truly act on an individual's or company’s behalf,” says Thakare. “Tailored AI agents deliver context-rich guidance and form opinions aligned with individual data.”
“Unlike with GenAI, the outcomes are personalised. You can talk to the TV and have a conversation,” explains Tom Weiss, CTO of Run3TV, a developer of web TV platforms for US broadcasters using the next-gen TV standard ATSC 3.0. “One of the things you can do with our technology is that when an ad is playing out you can put an overlay on top, and that overlay can be some kind of agent with which the viewer can converse. It’s an AI-based agent that knows a little about where you are, your key demographics and details such as the local weather. It can then provide expert recommendations about products you might be interested in.”
This is, he insists, superior to shoppable ads that link from commercials streamed on Connected TVs to brand websites. Run3TV is developing the concept, which could launch in the second half of the year.
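Run3TV has not said how such an overlay agent would be wired up, but the core of what Weiss describes is injecting viewer context (location, coarse demographics, local weather) into the prompt before the conversation starts. The sketch below illustrates only that step; the context fields, brand and stub model are invented.

```python
# Sketch of the context-injection step for an ad-overlay agent: viewer context
# (location, coarse demographics, weather) is folded into a system prompt, and a
# stub stands in for the LLM that would hold the conversation. All field names
# and wording here are illustrative.

def build_system_prompt(context: dict, brand: str) -> str:
    return (
        f"You are a helpful shopping assistant for {brand}. "
        f"The viewer is in {context['city']} where it is {context['weather']}. "
        f"Audience segment: {context['segment']}. "
        "Recommend relevant products; never claim to know personal details you were not given."
    )

def chat(system_prompt: str, viewer_message: str) -> str:
    """Stand-in for a call to a hosted LLM."""
    return f"[stub reply to '{viewer_message}', grounded in: {system_prompt[:60]}...]"

if __name__ == "__main__":
    ctx = {"city": "Manchester", "weather": "raining", "segment": "outdoor sports"}
    prompt = build_system_prompt(ctx, "ExampleOutdoorCo")
    print(chat(prompt, "What jacket would you recommend?"))
```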
Instead of relying on predefined algorithms and user instructions, agentic AI demonstrates autonomy, adaptability, and the capacity for proactive decision-making.
“For media companies, this level of precision opens new avenues for targeted advertising and subscription models,” says Thakare. “By predicting user needs and tailoring offerings, businesses should experience improved conversion rates and customer loyalty.”
A step beyond this is to converse with characters in a programme. Weiss says, “You could be having a conversation with one of the characters in the show – perhaps one of those moments where characters break the fourth wall and start talking to you. Another example would be at the end credits of the show, when one of the lead characters reappears and says, ‘Can I tell you some more about what's going on backstage? Press now to enable the voice conversation.’ Then you have a conversation with them about the behind the scenes.”
Agentic AI is already being introduced into video game worlds to augment non-player characters. Instead of following limited predefined scripts, the AI character learns from interactions with players and responds to their actions in real time. Meaning Machines’ development of ‘conscious’ AI characters is one example.
Taking this even further, the technology could empower audiences to become co-creators. Thakare says, “Agentic AI systems enable viewers to customise storylines, characters, and outcomes in real-time, blurring the lines between passive consumption and active participation.”
He says this not only enhances user satisfaction but also provides media companies with invaluable insights into audience preferences and trends.
“With agentic AI you can talk back and have a personalised interaction with the character,” adds Weiss.
That sounds like it might be a few years off, and Weiss agrees, but says the nuts and bolts of the tech are already here. “The back-end is basically ChatGPT, DeepSeek, Claude or another LLM model. Typically, there's no special training required for this level of viewer participation. It doesn't require a lot of heavy software running on racks of machines.”
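Weiss’s point, that the back end is an existing LLM with no special training, boils down to setting a character persona in a system message and relaying the viewer’s turns to the model. A minimal sketch of that relay follows; the show, character and stub reply are invented, and a real deployment would call a hosted model such as the ones he names.

```python
# Minimal sketch of the "character chat" described above: a persona is set once
# in a system message, viewer turns are appended to the history, and an
# off-the-shelf LLM (stubbed here) generates each reply. No fine-tuning involved.

def llm_reply(messages: list[dict]) -> str:
    """Stand-in for a call to ChatGPT, Claude, DeepSeek or any other hosted model."""
    last = messages[-1]["content"]
    return f"[in character] You asked about '{last}' - here's what happened backstage..."

def make_character_chat(persona: str):
    history = [{"role": "system", "content": persona}]
    def ask(viewer_message: str) -> str:
        history.append({"role": "user", "content": viewer_message})
        reply = llm_reply(history)
        history.append({"role": "assistant", "content": reply})
        return reply
    return ask

if __name__ == "__main__":
    ask = make_character_chat(
        "You are Detective Vara from the drama 'Harbour Lights'. "  # invented show and character
        "Stay in character and only discuss behind-the-scenes topics the producers have approved."
    )
    print(ask("How did you film the rooftop chase?"))
    print(ask("Will there be a second season?"))
```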
Reinventing Content Creation
On the creative side it’s perfectly possible to build an agentic AI for editing.
“If you think about the user interface for Avid, or any NLE tools, they are not particularly intuitive for first-time users to pick up,” Weiss says. “One of the things about Agentic AI is that it can de-skill the tool so you don't need technical expertise in how to use it. You can imagine speaking prompts into a video editing Agentic AI which will use those prompts to run an automation script in the backend and deliver up a quick first assembly.”
The real-time speed at which rough cuts are possible would give the director the chance to review the edit while still on set and make adjustments to their next take.
“Basically, it’s just easier to explain to an Agentic AI what to do than having to sit down and program it yourself,” says Weiss.
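No mainstream NLE exposes exactly this today, but the shape of what Weiss describes is a spoken prompt translated into a sequence of edit operations the back end can execute. The sketch below turns a prompt into a toy edit decision list; the command grammar and EDL format are invented for illustration.

```python
# Illustrative translation of a spoken editing prompt into a simple edit decision
# list (EDL) that a back-end automation script could execute against an NLE.
# The command grammar and EDL structure here are invented for the sketch.
import re

def prompt_to_edl(prompt: str) -> list[dict]:
    """Turn phrases like 'use takes 2 and 5, trim to 30 seconds' into edit steps."""
    edl = []
    takes = re.findall(r"take[s]?\s+([\d\s,and]+)", prompt.lower())
    if takes:
        numbers = re.findall(r"\d+", takes[0])
        edl.extend({"op": "insert", "clip": f"take_{n}"} for n in numbers)
    duration = re.search(r"(\d+)\s*seconds", prompt.lower())
    if duration:
        edl.append({"op": "trim_total", "seconds": int(duration.group(1))})
    return edl

if __name__ == "__main__":
    spoken = "Use takes 2 and 5 of the interview, then trim the assembly to 30 seconds"
    for step in prompt_to_edl(spoken):
        print(step)
    # Prints insert steps for take_2 and take_5 plus a trim_total of 30 seconds:
    # a first assembly the director could review on set.
```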
Human in the loop - for now
Everyone pushing agentic AI claims it is an enhancement to human jobs.
Highfield AI, for instance, says its assistant was designed to always have a “human in the loop”, where users retain full oversight and approval prior to publishing. “This is a fundamental design ethos. Someone has to review and approve the visuals and final piece before it is published,” says Hochfeld.
Nvidia’s Huang characterises agentic AI as a digital assistant capable of handling workflows, problem-solving and providing real-time support.
Genies predicts AI-powered customer service reps able to handle entire conversations and AI research assistants that synthesise information and present insights.
Indeed, human resources and software engineering departments could be first in line to benefit – or face cuts in employment.
“AI agents are the new digital workforce,” said Huang, adding “the IT department of every company is going to be the HR department of AI agents in the future.”
As companies “employ” and train AI “workers” to be part of hybrid teams of humans and AIs working together, “the role of human resources will evolve into a department for human and machine resources,” suggests Marco Argenti, the CIO of Goldman Sachs.
He thinks the first AI ‘layoffs’ could eventually emerge, in which AI models will be replaced by better AI tools or humans “if they perform poorly compared to their peers.”
AI agents are boosting developer productivity by automating repetitive coding tasks. McKinsey projects that by 2030 AI could automate up to 30% of work hours in the US, “freeing developers to focus on more complex challenges and drive innovation,” claims Nvidia.
The chip maker also claims Generative AI agents can save marketers an average of three hours per content piece, “allowing them to focus on strategy and innovation.”
Weiss points to the Jevons paradox, which holds that when technological advancements make something more efficient, the cost drops, consumption rises and overall output of the product or resource increases. In other words, from the industrial revolution onwards, new technologies have been shown to increase employment rather than replace it.
“Many existing jobs are going to disappear, but if people can get more efficient there's likely to be a lot more work to do, in the same way the spinning jenny didn't put anyone out of business.”
As applied to creative roles, agentic AI is repeatedly said to help human talent “break free” from repetitive tasks so they can focus on more of what gives them satisfaction – like innovating storytelling. Genies’ wording is typical of AI developers: “Instead of replacing human creativity, AI is acting as a co-creator—enhancing the creative process and enabling people to bring ideas to life faster than ever before.”
Ateliere declares AI systems will handle routine tasks while allowing creators to focus on high-value creative work.
Arguably the most honest answer comes from qibb. “Agentic AI is an efficiency tool,” agrees Michaelis. “In our case, it will let you build more, more efficiently and faster. It will empower a team of two people or three people to output what a team of 10 people would have done.
“Whether this drives teams to do more or leads to teams getting downsized is a question that probably nobody can answer. It probably depends on the use case. What we see is that it will free teams to do more value creation rather than cutting cost. However, at least for the near future there will always be a human in the loop checking the output, finalising it and then pushing it out.”
The term ‘agentic’ already personifies an algorithm. The next step is to give it a voice… and a face. Virtual anchors, chatbots, and AI-generated influencers are already “engaging audiences in real-time, offering personalised responses and fostering a sense of connection,” notes Thakare.
Genies believes AI-driven avatars and digital agents “will power the next generation of personalised, interactive experiences.” Digital identity has evolved from the early days of internet usernames to social media profiles. The next stage, powered by agentic AI, moves identity beyond static profiles to “intelligent, interactive beings that can represent users, celebrity personas, existing characters, or even fictional characters in deeply personalised ways,” Genies says. “These will serve as the new ‘usernames,’ acting as a layer across digital experiences.”
That’s where smart avatars come into play. Could this be how we will interface with the internet in future?
“Our clothing changes, our preferences, our mindset, our beliefs and more. We want identity online to represent this,” says Genies. “From the ease of creation, to their evolution over time based on user interactions, to their compatibility with all types of diverse settings, these avatars offer a glimpse into the future of digital identity and engagement.”