Wednesday, 30 April 2025

In Conversation: Barbara Ford Grant

Interview and copy written for HPA

article here

Today’s tentpole VFX pipelines are labor-intensive, time-consuming and expensive, while AI promises to deliver the same results faster, cheaper and with a smaller team.

Barbara Ford Grant, Media Advisor, Consultant and pioneering VFX and creative technology executive, set out to test the merits and weaknesses of that argument.

“Studios working on $100 million or above productions have been in a really sweet spot, but now they’re taxed with having to make something that is a substantially better experience than their competitors because they want to get audience into theatres, but they have to do it a lot cheaper. They must get the cost down and the premium up.”

Ford Grant told attendees at the recent HPA NET Roundtable in March, “Cinematic storytelling is still a human creative process of writing, visualizing, planning, shooting, and executing, and then doing post, but you can weave AI tools throughout that entire process to create huge efficiencies.”

With the 2023 strikes, Ford Grant found herself with extra time.  Taking advantage of that unusual circumstance, she decided to make a short film to explore the possibilities and limitations of AI filmmaking.

“I’d been working on machine learning R&D for about 15 years, but once generative video tools like Midjourney came out, I wanted to play around with them unencumbered by the studio system.”

Under the banner of BFG Productions, she developed, wrote and produced a 22-minute film, Unhoused, on a shoestring $40,000 budget.  The majority of the budget was used to shoot the production traditionally with real actors on location with a union crew.

“Then we used AI tools to see how far above our weight we could punch in post and VFX.”

Turns out, pretty far indeed.

With Daniel Kramer (VFX Supervisor, FX Artist and Head of CG) as her partner on the project, they shot for two days, then used Gaussian Splatting and AI to create a third day’s shoot on an LED stage at Lux Machina.

Of the 60 VFX shots, only four were entirely AI-generated. The rest used a real-time pipeline and conventional software like Houdini, Nuke, Maya and Adobe Creative Suite, augmented with AI techniques for animation, FX, fluids and so on from Adobe Firefly, Midjourney and Runway ML, among others. Kramer used PostShot and Luma AI to create 3D Gaussian splats from multiple still photos captured with a DSLR and iPhone. Those splats were loaded into Unreal and displayed in realtime on an LED wall for a virtual production shoot.
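
For readers unfamiliar with the format: a Gaussian splat scene is essentially a huge cloud of oriented 3D Gaussians, each storing a position, per-axis scale, rotation, opacity and colour, which a real-time engine such as Unreal can rasterise directly. The sketch below is illustrative Python only, not the PostShot, Luma AI or production pipeline code, showing the per-splat data and the standard covariance construction that gives each splat its footprint on screen.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class GaussianSplat:
        """One primitive in a 3D Gaussian splat scene (illustrative field set)."""
        position: np.ndarray   # (3,) world-space centre
        scale: np.ndarray      # (3,) per-axis standard deviations
        rotation: np.ndarray   # (4,) unit quaternion (w, x, y, z)
        opacity: float         # alpha used when splats are composited
        color: np.ndarray      # (3,) base RGB; full scenes store spherical harmonics

    def covariance(splat: GaussianSplat) -> np.ndarray:
        """Build the 3x3 covariance R S S^T R^T that shapes the Gaussian."""
        w, x, y, z = splat.rotation / np.linalg.norm(splat.rotation)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(splat.scale)
        return R @ S @ S.T @ R.T

A captured scene is simply millions of these records, which helps explain why a photo survey shot on a DSLR and an iPhone could be turned around fast enough to stand in for an extra day on an LED stage.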

Some of these tools saved them time; others did not, owing to the immaturity of the still-evolving technology. “We set out to ask a number of key questions,” says Ford Grant. “What level of production value could we achieve? What size team did we need (how low can you go…) and how fast could we go?”

Ford Grant and Kramer each have 30 years of production experience but haven’t been at the coalface of creating shots for some time. Aside from some complex composites involving HDR and 3D tracking, they found that they were able to complete 80% of the work on Unhoused themselves.

 

AI Challenges

Overall, it is “mind-blowing what can be done,” she says, “but there are many gaps where human expertise will be required.” A key one is that productions need control over every aspect in isolation.

“Most tools are limited in resolution and quality,” Ford Grant says. “For example, often they aren’t 24fps and only 8-bit color. Many tools rely on prompts which are imprecise for control. Multiple takes with the same prompt produced very different results. Reference images, depth maps, roto mattes and 3D models are better inputs to guide AI more directly.”

Many of these shortcomings are actively being worked on and new models are released daily to give better control and consistency to the outputs.

“In fact, there are a couple shots I’m going to go back and redo because the tools have advanced in this short space of time and now I do have more control over the outcome.”

The power and accessibility of AI tools promise to democratize the entire creative process, meaning that anyone can potentially create content with A-list production values. On the other hand, studios could also use these tools and simply cut the artists out of the loop.

Ford Grant doesn’t think either extreme is true. “It still takes a lot of understanding, taste and expertise by humans to get the best outcomes. Fully AI-generated material may look ‘okay’ superficially, but it wouldn’t make it past a review at a studio. You have to know what ‘good’ looks like. You have to know what you’re looking for and, when you get painted into a corner, how to solve it another way.”

She believes anyone with experience of production, particularly those with an animation or VFX background, should excel with AI tools.

“They’re going to go a long way into production with a lot less people.”

Studios, however, will need to employ more creative talent, not less, if they want to succeed with AI.

“Studios are likely to hire emerging AI studios (such as Secret Level, Asteria and the Russo Brothers’ AGBO) which are embracing these tools in order to negotiate productions at a lower cost.

“AI should be a golden time for authoring, directing and imagining content. That said, the further away you are from being the person who decides if a shot or a piece of content is good enough the more likely it is that your job is at risk.”

“I encourage people to learn these tools as quickly as possible. AI can be a really great collaboration tool. Use them to create novel and inventive things.”

Rights management will become even more integral to the creative and business process. Ford Grant advises artists to digitally fingerprint their work so they can control access and prove any breach of their IP.

“Artists need control over their data,” she says. “Depending on how extensive your library is you may not need third party LLMs. If you’re a creator with 40 years of your own beautiful material to train on, you can arrange your own rules and regulations [enshrined on the blockchain] for allowing others to collaborate and build on those characters and models. If you’re a studio with a rich IP library, you might choose to do that as well. Whichever the case, people need to get control of their assets quickly.”
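
As a concrete illustration of the fingerprinting idea, and not any specific product or service Ford Grant endorses, a minimal Python sketch might hash each finished asset so that an exact copy can later be matched against the recorded digest; detecting derivative or re-rendered works would need perceptual hashing or watermarking on top of this.

    import hashlib
    from pathlib import Path

    def fingerprint(asset_path: str) -> str:
        """Cryptographic fingerprint of a finished asset: any change to the
        file, however small, produces a completely different digest."""
        digest = hashlib.sha256()
        with open(asset_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
                digest.update(chunk)
        return digest.hexdigest()

    # Example: build a manifest of fingerprints for a folder of renders
    manifest = {p.name: fingerprint(str(p)) for p in Path("renders").glob("*.exr")}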

Looking ahead, Ford Grant predicts that AI democratization will see small production teams become viable again. “Artist-driven boutiques will return as quality becomes achievable at lower costs.”

Ford Grant emphasizes the positives in the industry adoption of AI and believes the future will be creator-led. “The further you industrialise your media business away from the creative process – away from the artistry of it all – the further you are from what’s happening now and the more you will miss out.”

For anyone who thinks an AI is going to generate a masterpiece on its own, Ford Grant offers an analogy with the creation of movie classics of the past. Wim Wenders’ Paris, Texas (1984) left an impression on audiences, Ford Grant included, because it continually upended our expectations of what narrative should look like.

“Paris, Texas is a prime example of how a group of people came together and made art that moved people because of the combined creative alchemy. That’s something that you never want to go away. Just as you could never make that film from a single point of view – it required Sam Shepard, Robby Müller, Harry Dean Stanton, Ry Cooder and more – you could never rely on one AI model or AI alone to make something as fresh.”

Tuesday, 29 April 2025

Five minutes with Victoria C. Jordan, Sr. Manager - Post Production, Paramount Pictures

Interview and copy written for Sohonet

Victoria C. Jordan, senior manager of post production at Paramount Pictures, oversees work on numerous feature films and TV series including the upcoming Walter Hamada-produced horror feature Primate, director Gina Prince-Bythewood’s fantasy film Children of Blood and Bone and the action comedy The Naked Gun starring Liam Neeson and Pamela Anderson. Previously a dailies supervisor, her credits include Smile; Significant Other; Mean Girls; A Quiet Place: Day One; Bob Marley: One Love; IF; Sonic the Hedgehog 3; and Knuckles.

article here

As you can imagine there is a mountain of complexity involved in each – and Jordan gets to juggle multiple projects every day. She joined the studio four years ago on the back of a storied editorial career as assistant editor on shows including CSI: Cyber (CBS), Station 19 (ABC/Disney), Proven Innocent (Fox) and Hightown (Starz).

Thanks Victoria. Will you share more about what your core responsibilities are?

As a senior manager, I manage several of our shows at once. I’m currently on three films where I'm working closely with editorial through post. One of the key roles for me is that I oversee dailies. I help across editorial and with archiving as well. Any issue that our editors have in their suites here will pass my desk. If they need stock music, for example, I’ll handle clearance for that. Anything that will make the cut better, I am there to help.

To what extent do you get involved in hiring the post-production team?

That starts as soon as the movie is greenlit. If the director doesn’t yet have an editor in mind, we start the process by contacting agencies and talent who we know to find the best fit. One of the shows we're doing is The Naked Gun directed by Akiva Schaffer who already had his established team including editor and first assistant but needed a second assistant editor and post PA to fill out the crew. 

Principal photography for that show was in Atlanta last spring and I oversaw dailies, approving dailies budgets and was in constant communication with the lab at FotoKem. It’s all about making sure that everything goes to where it needs to go whether that’s the studio, our editorial team or the production team. I’ll check the footage for any color issues or resolve any sound issues. If I can head any issues off at the pass and keep everything running smoothly then, as far as my editorial team is concerned, they stay creative because they have everything they need.

I’m there for our editorial teams emotionally as well. Whatever hurdles we go through, perhaps last-minute schedule changes, shifting dates for friends and family screenings, or a hiatus when people need reassuring that our release date is staying the same, I’m there to explain how we’re going to work.

Does this supervision role work upwards too with your director?

Yes. It’s all a matter of building empathy and trust. I like to establish a strong working rapport with my director and to video chat as frequently with them as they wish to make sure that their project stays on track and so that they can create spectacular films.

There's always a grayscale in this business between professional and personal. Professionally, we do everything in our power to ensure things go smoothly. That could be making sure that the director has everything that they need from scheduling ADR time with actors to seeking out motion graphics companies for titles.

Like the rest of us, a director may also at times be cranky or sad or upset about something so I feel it’s part of my job to navigate those waters together so we have a great outcome.

To what extent are your teams enabled for remote work?

Typically, we like our editorial teams to work in office with us which is nice because my department is on the same floor as our editorial teams so we are always accessible. That’s something that wasn’t possible during the pandemic period. My director is often in the office too and sometimes our producers will visit to look over cuts, have screenings or just to catch-up. 

Everyone is busy and remote workflows facilitate a better use of everyone’s time. Someone could be in another country or juggling another project or understandably doesn’t want to waste time fighting the LA traffic so we rely on ClearView Flex Glow to connect them to our editors.

If a director wants to see a certain scene or check how the cut is going or switch things up and they don’t have time between meetings they simply jump onto ClearView Flex Glow and pop-in remotely. It’s a simple, instant and straightforward communications device with rock solid frame rate, color and audio accuracy including HDR.

What does normal look like when it comes to work flowing through the facility?

I feel like normal is now hybrid. Our facilities have the capability to send out a ClearView link in order to review color if needed. And wherever my producers or director are, we can schedule time to connect them for reviews at their convenience. It’s a hybrid situation where we give them the option of reviewing remotely or coming in to sit with the team. Over the course of a theatrical project, it is highly likely that our director and producers will avail themselves of both options.

It strikes me that you really thrive on the pressure of all these different demands on your time, but what excites you about your job? 

I come from editorial where I used to be an assistant editor, and then I went over to the studio side with post management. What I enjoy now is that I can be an advocate for my editorial teams. I really enjoyed the freedom of freelance life because I was jumping from show to show but in the post management side I'm still working with different teams on different shows, and so I enjoy that change every nine months or two years or so. 

What I really enjoy about my job is trying to make it as easy as possible to do what we need to do and have fun doing it. To be creative and to create the stories that we want to tell. I get to connect with different people and be in the trenches with my team to create something awesome that’s both innovative and entertaining.  

 


Monday, 28 April 2025

Behind the Scenes: Warfare

IBC

Extended takes, 360-degree sets and military authenticity reinforce the fog of war, recreated from the memories of real-life US soldiers

article here

Playing out like a transcript from found footage, Warfare sets out to provide an authentic understanding of being in a combat environment under intense pressure.

Like writer-director Alex Garland’s previous film Civil War, this is another eye-witness account of battle, this time delivered in an ultra-realistic, forensically accurate manner. Conceived and co-directed by former Navy SEAL and Civil War stunt coordinator Ray Mendoza, and based on a mission-gone-wrong incident that Mendoza participated in during the Iraq war, the film dispenses with romanticisation in its depiction of what it feels like to be under fire.

“Reality doesn’t let people off the hook; when things are tough, there isn’t a dissolve or a cut or some music to cheer you up,” says Garland of his approach to making the movie. “You remain in that state until circumstances relieve you from pressure or the moment, and that’s what Warfare does — it adheres to reality, not the reassurances of cinema.”

Based on memories of the event in 2006 from surviving soldiers, the sparse story unfolds minute by minute in chronological order, breaking some of the formula of conventional war movies. Like Sam Mendes’ faux single-shot 1917, Garland’s film plays out in near realtime and uses extended takes ranging from five to ten minutes up to 15 minutes for the final evacuation scene. There is no room for introspection as the soldiers’ mission rapidly turns south from surveillance into survival.

 “Long extended takes allowed us to float through spaces where people are doing things concurrently — we could pick up realistic details you cannot script,” says Garland. “The actors were doing 12-minute take after 12-minute take, and wound up yawning, flexing, or scratching the back of their heads.

“What we captured was a sort of semi-reality — something that belongs to reality but exists within the film and gives off the quality of reality.”

This is cinematographer Dave Thompson’s first feature film, although he is a seasoned Steadicam/A-camera operator who operated for Garland on Civil War. He has previously collaborated with DPs such as Robert Richardson and Dante Spinotti, and with director Francis Lawrence on the Hunger Games franchise. For Warfare, Thompson operated handheld alongside A camera operator Barney Coates using the same DJI Ronin 4D camera system used to photograph Civil War.

“We had to be careful how the light was in close-ups, how colour was used, to not start what I would personally call mythologizing,” Garland explained to Newcity Film. “The language of the movement of the camera, the language of the lenses, was all accented away from the trickery of cinema.”

Bovington bootcamp

Aside from a brief male-bonding prologue in the barracks and some late-night drone shots, the movie plays out in and around an apartment building where the SEALs are fired on by Al Qaeda.

Much as Stanley Kubrick doubled the London docks for war-torn Vietnam in Full Metal Jacket, so Garland reconstructed a detailed section of the Iraqi city of Ramadi in the UK. They mapped out the main apartment building inside a north London warehouse using tape on the floor and room dividers to delineate walls. This hangar became a rehearsal space where the actors could practice manoeuvres in gear while production designers Mark Digby and Michelle Day, who worked with Garland on Ex Machina and Annihilation, constructed the Warfare set at Bovington Airfield Studios in Hemel Hempstead.

Digby and Day constructed 12 buildings in a streetscape organised around three distinct points, allowing the camera operators to film in almost 360-degrees. They used as few set extensions as possible to widen the expanse, situating the street in a distinctive T-shape.  

“In most directions you could point a camera and use whatever was in the frame without having to rely on bluescreens to extend the set,” says Garland.

The recreated streetscape included an urban residential neighbourhood of low-slung, two-story apartment buildings with concrete facades surrounding a marketplace where Al Qaeda operatives circulate in the movie. In addition to the veterans’ memories, the designers drew on Google Maps, firsthand accounts from other, non-American military sources, and photographs of the actual house taken directly after the action. This provided details for sniper holes, bloodstains, even the colour of the curtains. SFX constructed fake pillars outside the apartment building which are destroyed by an IED explosion.

A military communications director was on hand during production to help actors relay radio information with accuracy. The cast delivered dialogue over actual radio lines while realistic ambient sound effects of airplanes flying, dogs barking and people milling around were pumped onto set through the PA system.

After co-writing the film, Garland says his duties became more technical and logistical, with Mendoza taking charge of directing the actors’ performances by way of a three-week bootcamp based on BUD/S (Basic Underwater Demolition/SEAL) training, designed to prepare SEALs to perform under intense levels of stress and fatigue.

This included a crash course in communications training and military terminology that taught the cast to speak with brevity, efficiency, and clarity.  Mendoza and Tim Chappel, a former British Army Royal Green Jacket, also taught the actors weapons training, progressing from holding weapons and firing blanks on a shooting range to practicing room clearances with blind ammunition.

Near total recall

Mendoza, who is played in the movie by Reservation Dogs star D’Pharaoh Woon-A-Tai, worked after retiring from the Navy as a military advisor and stunt choreographer on movies including Jurassic World and Emancipation.

He designed the battle scenes on Civil War, including the climactic assault on the White House, and it was this sequence that inspired Garland to see how far he could push the visceral immersion of being trapped, fighting, inside a building.

After Civil War wrapped in 2023, Garland and Mendoza worked together for a week in Los Angeles. Garland transcribed while Mendoza recounted the Ramadi operation. They conducted a series of interviews with the SEAL team, building out key memories and incidents until the transcript took the shape of a screenplay. Other characters were also interviewed, with their memories of the operation depicted as they were remembered.

“We were not inventing people or reordering events here,” says Garland. “When you look through the timeline of what the SEALs were saying happened, we had to forensically piece together events — until a point arose when we had enough information from multiple sources to decide how we would tell it onscreen.”

Warfare is also a tribute to wounded SEAL Elliott Miller (played by Cosmo Jarvis) who took part in the operation. Miller doesn’t recall what happened that day in 2006 but he and other vets from the conflict were on set reconstructing their collective experience.

“I wanted to track down and collect everybody’s memories and perspective, to create a living document that would give Elliott the ability to see and experience what happened during the operation,” says Mendoza. “Memories come rushing back, sometimes closure and understanding follow. We were young when we fought in Ramadi and didn’t have the tools or the dialogue to talk about these things until 20 years later.”

 


Wednesday, 23 April 2025

Behind the Scenes: The Last of Us II

IBC

There is a version of episode 2 where the brutal death of a beloved character isn’t quite so extreme. But they chose not to go there, explains editor Timothy A Good.

article here

The makers of action adventure The Last of Us broke their own rules in depicting the tragic death of one of its main characters, according to the show’s lead editor Timothy A Good, ACE. Episode two of the second season is already notorious for showing one of the most brutal on-screen deaths, but there was a version where the audience wasn’t put through as much emotional torture.

“Normally we don’t show the moment of violence,” Good explains. “Normally we cut away and show reaction from the character’s perspective. We tried that version. Craig [showrunner Craig Mazin] said, ‘What if we break our rule for the first time? It's going to be even more shocking if we position the audience as Ellie’.”

Good won a Primetime Emmy for his work on the first season of HBO’s video game adaptation and returns as lead editor, also with a co-producer credit, for the second outing.

Fans of the game will already know the outcome and the script by Mazin and video game creator Neil Druckmann doesn’t shy away. The question was how intense that death scene should be.

“My job as editor is to present everything and the kitchen sink of these sequences so they are the largest, most extreme version that I can possibly give, and then we delete our way to success,” says Good.

The show’s emotional heart has been built around the relationship between feisty teenager Ellie (Bella Ramsey) and adoptive father Joel (Pedro Pascal) battling through a post-apocalyptic landscape. In episode 202 ‘Through the Valley’ Joel is captured by a rebel militia intent on revenge. Can Ellie get to him in time?

Creative discussions centred around how much the audience could take of Joel’s torture while knowing what is probably going to happen. A decision was made to compress the time it took for Ellie to get into the cabin to effectively put the audience out of their misery.

“We don’t want to prolong the audience’s suffering, at the same time we want to make sure that she wasn't teleporting into the scene,” says Good. “We don’t want to go straight from her running to all of a sudden opening the door. We still had to see her go through all the beats of getting to the lodge, getting into the lodge, going up the stairs, but everything was propulsively edited so that we only used the tiniest clips to follow her perspective and movement through that space, but doing so in the fastest possible way to get her up there.”

Once inside, there were further discussions about how much the audience should see. This is a show that has until now preferred to show extreme violence happening off screen.

“There was a version of the scene where we actually don't see the death of Joel,” Good reveals. “We see it from Ellie’s perspective only. We see behind the golf club as it goes down, and you see her face in the distance, and then we close in to Ellie reacting. There was a version where her reaction is so sharp that when it happens she screams immediately.”

Mazin wanted to break the rule. “We see the impact from her perspective. It's still a wide shot, not close-up, and still from the character's point of view, but now we are seeing the entire motion of that event play out.”

The scene hits as hard as it does because of the audience’s connection with the characters. Reconnecting with those characters, five years on in story terms, and two years since Season One finished, was the job of episode 201 which Good also edited.

“The first step is we see Ellie as someone who no longer needs help. We see her take down a larger guy with a specific martial arts move. Then we see Joel a little differently too. He’s still taking care of younger people only this time he has another surrogate daughter in Dina whom he treats in a similar fashion to Ellie. We see the close connection he has with her as a way of reintroducing the kind of person he is.

“This is important when we go to episode two because Dina is in the room when Joel is about to die. And because she's in the room, he can't do anything to defend himself. If he does try, he risks losing her, and that’s something he would never do. These nuances are introduced in episode one and hit home in episode 2.”

The first episode also deliberately withholds a scene with Joel and Ellie together until near the end when Joel intervenes to protect her at a dance.

“The way it is written and staged emphasises this huge gulf between them,” Good says. “We see how Joel still tries to protect his ‘daughter’ and then he realises that he has poked the bear. Her reaction is so above and beyond what just happened that we know there’s a much deeper rift between them.”

The rift is the lie that Joel told Ellie during the season one finale. Joel saved Ellie from certain death but masked the truth from her in an attempt to save her from further harm. It’s clear that Ellie has doubts about his version of events.

“There's a moment that they might talk about it, when she comes home and sees him on the porch,” says Good. “He's probably hoping that she will open up to him. He’s fixed the strings on her guitar as if to say ‘I did this for you, let's talk about it’ but she chooses to ignore the invitation.

“Setting all of that up in episode 201 was what allowed episode two to hit as hard as it does because you recognise that neither Joel nor Ellie has the closure that we hope they have. That's life. You don’t always get the opportunity to rectify things.”

There’s even more going on in episode two, not least of which is a massive Game of Thrones style attack in the snow by the infected on a community of vulnerable humans.

“The pressure of that battle sequence was in making sure that it was logical and told from the character’s point of view. In particular, we hope that you felt what Tommy [Joel’s brother, played by Gabriel Luna] was feeling - that he's almost alone against the horde of infected and we're following all the things that he's experiencing and witnessing.”

The version we see is different from the original cut, in which the battle sequence was intercut with the log cabin sequence. “At a certain point, we realised that no one wants to go back to the battle because the battle was no longer character-based and is now a distraction from the story of Joel and Ellie. We care so much about Joel and Ellie that once we get them into the lodge we don’t actually want to leave them.

“So, we shifted everything from the battle and frontloaded it all to create a miniature operatic sequence. In theory, we could end on a great cliffhanger and not see what happens up at the lodge.

“Instead, what I think we do best on Last of Us, is we go to the intimate details, the intimate stories and the things that bind the characters together that hopefully the audience is wanting to see. We wanted to save all of that for the finale of this episode.”

Wednesday, 16 April 2025

BBC unveils blueprint for long-term research driven by AI and IP

Streaming Media

article here

The BBC has outlined how it will plan future R&D and it is being built around internet-only, intelligent, intermediated, interactive and immersive media.
“These are the forces shaping the media landscape for the next generation,” says Jatin Aythora, Director, BBC Research & Development.
Framing its focus and investment going forward are three hypotheses, one of which states that ‘all user interaction will be driven by artificial intelligence’.
The BBC itself is in transition, along with other UK broadcasters, from over the air terrestrial distribution to one that increasingly relies on broadband networks.
“Our work at R&D has become about more than just preparing the BBC for that transition,” says Aythora. “In our new research cycle, we need to explore the use of AI more fully to ensure that how the BBC uses this technology is always responsible, ethical and in the public interest.”
Funded by the British public through an annual licence fee (currently £174.50 / US$231 per household), everything the BBC, and by extension its R&D wing, does must meet its public service remit.
As Aythora puts it, the work of R&D must be “ethical and for the good of the BBC, licence fee payers, the media industry, and society” – that’s quite a responsibility but one BBC R&D has turned to its advantage, benefiting the wider industry too.
It beat most of the world to the streaming age when iPlayer launched in 2007. Before that it developed one of the first video streaming codecs, and its computer vision work is the basis of the ubiquitous sports graphics system Piero. More than a decade ago it pioneered interactive and personalised media. BBC R&D was also one of the main contributors to industry work in DVB, updating the DVB-DASH standard to support consistent low latency streaming.
More recently, it helped establish key sets of principles and tools to combat AI misinformation with the Coalition for Content Provenance and Authenticity (C2PA).
So how it thinks the technology landscape will develop is worth listening to.
Three hypotheses are central to its current thinking:
1. All user interaction will be driven by artificial intelligence.
From content recommendations to real-time language translation and automated story generation, AI will become central to shaping the user experience.  BBC R&D thinks AI-driven interaction will allow for “hyper-personalised content delivery, improved user retention, and optimised marketing strategies” but this also presents a challenge for the BBC.
“Business models are changing and pushing industries to transform,” says Aythora. “As the AI ecosystem evolves, we will need to define an approach that ensures we can maintain and build on the trust we have in this new environment.”
2. The internet backbone will feature a new layer that provides a permanent and transparent registry for digital assets.
This hypothesis envisions using distributed ledger technologies (such as those that underpin blockchains), to create a new layer of core internet infrastructure that provides secure and tamper-proof tracking of digital content and assets. Such a layer can help fight disinformation by verifying content authenticity while ensuring appropriate attribution and rights management for creators.
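As a rough illustration of what such a registry layer implies, rather than any specific ledger the BBC has committed to, here is a minimal Python sketch of an append-only log in which every record commits to the one before it, so any retroactive edit is detectable:

    import hashlib, json, time

    class ProvenanceRegistry:
        """Toy append-only registry: each entry hashes the previous entry,
        so tampering with any record invalidates everything after it."""

        def __init__(self):
            self.entries = []

        def register(self, media_bytes: bytes, creator: str, rights: str) -> str:
            record = {
                "content_hash": hashlib.sha256(media_bytes).hexdigest(),
                "creator": creator,
                "rights": rights,
                "timestamp": time.time(),
                "prev_hash": self.entries[-1]["entry_hash"] if self.entries else "0" * 64,
            }
            record["entry_hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.entries.append(record)
            return record["entry_hash"]

        def verify_chain(self) -> bool:
            prev = "0" * 64
            for rec in self.entries:
                body = {k: v for k, v in rec.items() if k != "entry_hash"}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if rec["prev_hash"] != prev or recomputed != rec["entry_hash"]:
                    return False
                prev = rec["entry_hash"]
            return True

A production system would distribute that log across many parties, but the tamper-evidence it demonstrates is the property this hypothesis depends on.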
“Tackling misinformation, content piracy, and copyright infringement are key challenges for the news and media industries,” he says. “This new layer of internet infrastructure would promote greater transparency and trust, empowering both creators and consumers while fostering a more credible media ecosystem.”
3. AI infrastructure will be treated as national critical infrastructure.
The third working assumption is that AI’s underlying infrastructure - data centres, training models, and compute power - will be recognized as vital to national security and economic stability, much like how utilities such as electricity or telecommunications are managed.
“For the media industry, robust AI infrastructure will ensure the seamless operation of tools used to generate, curate, and distribute content,” says Aythora.
It would also guarantee resilience against cyber-attacks targeting AI systems and uphold the reliability of AI-driven journalism during wars or elections.
From these hypotheses, BBC R&D then worked out how they might impact the BBC, how they relate to its public service remit and how the BBC could contribute to any associated tech development.
“We don’t assume that any single technology will define the media ecosystem of the future,” says the BBC R&D chief. “Instead, we believe that it will reflect a set of outcomes enabled by different types of technologies.”
Its activity is guided by a vision of a media landscape that is increasingly Internet-only; Intelligent; Intermediated; Interactive and Immersive. These five ‘I’s (not the five eyes of the Anglo-Australian-UK spy network) are outlined as follows:
Internet-only – Produced and distributed by IP
When over half of UK households are predicted to exclusively watch TV online by 2030, and it’s almost a matter of policy that the BBC itself will evolve into a primarily internet-based organisation over time, it stands to reason that the next generation of audiences will consume BBC content over IP.
BBC R&D is exploring how “to ensure that the future of media provision is sustainable, affordable and universal and that the organisation is positioned to create and distribute content in ways audiences want.”
Examples in sustainability include a tool called LECCIE that allows BBC teams to monitor and minimize their cloud energy footprints.
It claims to have improved streaming by contributing open source code to the dash.js JavaScript DASH player for better low latency support, as well as by testing and assessing different stream switching algorithms through simulations and trials.
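To make “stream switching algorithms” concrete: the simplest family of adaptive bitrate rules picks the highest rendition that fits inside a safety margin of the measured network throughput. The sketch below is a generic simulation in Python for illustration, not the BBC’s or dash.js’s actual logic:

    def choose_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
        """Throughput-rule ABR: highest rung within a safety margin of measured bandwidth."""
        affordable = [b for b in sorted(ladder_kbps) if b <= throughput_kbps * safety]
        return affordable[-1] if affordable else min(ladder_kbps)

    ladder = [400, 1200, 2500, 5000, 8000]        # renditions in kbps
    trace = [6000, 5200, 3100, 1500, 900, 4200]   # measured throughput per segment
    print([choose_bitrate(ladder, t) for t in trace])
    # -> [2500, 2500, 1200, 1200, 400, 2500]

Low latency playback makes this trade-off harder because there is less buffer to absorb a wrong guess, which is why switching behaviour is worth simulating before it reaches a live player.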
New internet experiences include a music discovery service, designed by BBC R&D “to bring back some of the fun and excitement of flipping through records in a bargain bin when discovering new music. It’s all well and good to take a recommendation from the record shop, but you don’t really know if you like it until you listen and the music connects with you, sometimes in unexpected ways.”
Intelligent – Enhanced and automated by AI
BBC R&D is investigating how Generative AI in particular can be used within the organisation with the provisos that its use is always in the best interests of the public; always prioritises talent and creativity; and is open and transparent with audiences when used in content creation.
“We want to develop AI technology that has been trained ethically and responsibly and will enhance the output that our journalists and producers create,” says Aythora.
One example here is the trial of a speech-to-text GenAI tool called Whisper AI to quickly generate a high-quality transcript of audio. This is reviewed by a member of the editorial team, and edited where necessary. A final transcript is then uploaded with the audio online.
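The BBC’s internal tooling is not public, but assuming the trial is built on the openly released Whisper model, a first-pass transcript like the one described can be produced with a few lines of Python via the openai-whisper package; the timestamped segments make the editorial review step easier:

    import whisper  # pip install openai-whisper

    model = whisper.load_model("medium")        # larger models trade speed for accuracy
    result = model.transcribe("interview.mp3")  # draft text plus timestamped segments

    print(result["text"])                       # full draft transcript for editorial review
    for seg in result["segments"]:
        print(f"[{seg['start']:7.1f}s  {seg['end']:7.1f}s] {seg['text'].strip()}")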
Intermediated – Mediated by platforms and agents
According to media regulator Ofcom, over half of UK adults use social media as a news source. Therefore, as the nation’s most trusted news provider, the BBC wants to maintain a direct, independent relationship with audiences. “The BBC needs to understand and work with the platforms and agents that sit between us and them.”
Examples here include Freely, the free streaming service that launched a year ago and offers live and on-demand TV. The BBC worked with other UK broadcasters to deliver this and contributed to the development of the HbbTV technology used by smart TVs.
BBC R&D also played a foundational role in the ‘content credentials’ C2PA initiative to tackle misinformation online, accelerated by AI. Other members include Adobe and Microsoft. C2PA has developed a technical standard to encode information about the provenance of content that shows where a piece of media has come from and how it’s been edited.
Interactive – Optimized for user experience & engagement
Acknowledging that audiences demand more interactive experiences, R&D has been developing  technologies that enhance audience interactivity while supporting the UK’s creator economy.
A key project is Wing Watch. This interactive wildlife stream uses ML-driven data to help audiences navigate live content based on their interests. “This type of personalized interactivity enhances user experience, increases content discoverability, and lays the foundation for future AI-driven engagement models,” says Aythora.
Immersive – Augmented by immersion & contextual awareness
Along with enhanced interaction, BBC R&D is exploring how it can create new audience experiences that are immersive, for example using data about contextual awareness sent from the user’s connected device. Many of its experiments in this area are around live music, such as bringing live concerts into multi-player online game experiences.
One strand of this work involves investigating the use of volumetric capture technology. The BBC says this is “ideal for intimate performances of one or two artists under controlled lighting conditions” and allows fans to view the performance remotely.
It is also researching approaches using broadcast cameras as a part of work in the MAX-R EU collaborative project. “This approach may be better suited for a larger number of performers, with complex stage and lighting conditions, at the expense of a less 3D feel and a more limited range of viewpoints,” it says.
Another augmented reality prototype uses spatial audio, geolocation, voice recognition and physical gestures to take  a user wearing smart headphones “on a journey through universes without ever leaving your local park.”
Taken together, the hypotheses and the five ‘I’s are the blueprint for the BBC’s long-term research agenda.
Aythora says, “By focusing on AI-driven user experiences, ensuring transparency in digital asset management, and recognising the strategic importance of AI infrastructure, media organisations can adapt to a rapidly evolving technological and societal landscape while maintaining relevance and trust.”