Friday, 17 October 2025

AI helps InterDigital reach beyond VVC in race to develop next-gen codec

Streaming Media


Candidates for H.267 already significantly outperform VVC as the hunt for a new video compression standard gets underway

article here

The starting gun has been fired on the development of a new video codec beyond VVC, with gains of at least 20%, and up to 50%, claimed by R&D lab InterDigital.

The target for H.267 is to deliver improved compression efficiency, reduced encoding complexity, and enhanced functionality such as scalability and resilience to packet loss.

“It's a real big challenge and a great opportunity to develop new ideas, patents, and algorithms,” said Edouard Francois, Senior Director 2D Codecs Lead at InterDigital. “In particular, we are exploring how AI can be used in synergy with traditional video compression methodologies.”

Headquartered in Wilmington, DE, and holder of more than 33,000 worldwide patents and applications across wireless, Wi-Fi, 5G/6G and video, InterDigital is one of the world’s largest pure R&D and licensing companies.

Streaming Media was given a tour of InterDigital’s video lab in Rennes, France, where scientists said they were exploring combinations of AI and traditional compression methodologies to compete for patents that could be locked into H.267 when the standard is published in 2029.

There has been reluctance among some companies, including Amazon, to formally kickstart a new video compression project, which could mean a rip-and-replace of encoders in their existing ecosystems. In addition, many Big Tech companies and streamers are committed to working with the rival codec AV1 from AOMedia.

“People were cautious,” says Francois. “That’s why ITU and ISO convened a workshop to ascertain market demand. The key question was: were we able to compress video with significantly better efficiency than VVC?”

At that workshop, the Joint Video Experts Team (JVET), which reports to ITU-T VCEG and ISO/IEC MPEG, issued a call for evidence (CfE). Samsung and Amazon were among the attendees.

“The goal was to show the state of the art of video compression where anybody can come with crazy ideas,” says Lionel Oisel, Head of Video Labs and General Manager, InterDigital France.

Nokia, Ericsson, Fraunhofer HHI and InterDigital responded to the call and presented their results at a JVET meeting in Geneva earlier this month.

“That was very important because there was a clear expression of interest in the need for a new video codec,” says Francois. “Further increasing compression efficiency, because reducing the bit rate is always good but with an encoder which is easily configurable and where the complexity on the encoder side is at least maintained to a reasonable level.

“Fraunhofer HHI demonstrated success. They had optimised a lot of their software, removed some constraints of VVC and added a few tools, and were able to achieve a 20% bitrate reduction running at the same encoding speed as VVC.”

InterDigital submitted two responses. One, called the Enhanced Compression Model (ECM), was based on conventional codec schemes; the other was a hybrid of VVC overlaid with AI tools termed Neural Network-based Video Coding (NNVC).

The former was developed principally in partnership with Qualcomm, which has been actively involved in ECM development, while the latter was made in tandem with Huawei.

InterDigital had begun work on ECM in 2021, a year after VVC was finalized. Designed purely for research, without regard to encoder complexity, ECM had reached version 18 by the end of 2024 and was demonstrating a coding gain of 28% over VVC.

In purely visual tests, the company claims it can achieve 50% gains for some sequences.

“Overall, more than two-thirds of the sequences were gaining 30%,” says Francois. “The evidence shows that you can outperform VVC with an encoder that has reasonable complexity. NNVC consists of VVC plus two to three ML/AI tools which could increase efficiency further.”
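A coding gain is conventionally quoted as the bitrate saved at equal visual quality, so these percentages translate directly into bandwidth. A minimal sketch of that arithmetic, using illustrative bitrates rather than any figures from the actual tests:

```python
def bitrate_after_gain(reference_kbps: float, gain_pct: float) -> float:
    """Bitrate needed at equal quality after a given coding gain.

    A 28% coding gain means the new codec needs 28% fewer bits
    than the reference (here VVC) for the same visual quality.
    """
    return reference_kbps * (1 - gain_pct / 100)

# Illustrative 10 Mbps VVC stream against the gains cited in the article
for label, gain in [("CfE responses", 20), ("ECM v18", 28), ("best sequences", 50)]:
    print(f"{label}: 10000 kbps -> {bitrate_after_gain(10000, gain):.0f} kbps")
```

On these illustrative numbers, a 10 Mbps VVC stream drops to 8, 7.2 and 5 Mbps respectively, which is why even a 20% gain is significant at CDN scale.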

When new codecs are developed there is traditionally a trade-off between reduced bitrate and increased encoder complexity. If saving bitrate were the only goal, you could keep introducing ever more complex tools and algorithms; however, this makes the encoder much harder to implement. Reducing, or at the very least maintaining, complexity was a key market ask.

At the October CfE meeting it was agreed that there was concrete evidence that with existing tools a new encoding method could significantly improve on VVC without increasing complexity. JVET gave the go-ahead for a call for proposals.

Competing participants in this next stage, including InterDigital, will now work until January 2027 before presenting results back to ISO/ITU for assessment. Finalisation of the new standard is expected by the end of 2029.

“We only used publicly available technologies and publicly disclosed algorithms to answer the CfE but we have internal technologies that were not disclosed and which already in our lab tests do better than the CfE response that we submitted,” Francois explains. 

“Now, we switch to hidden mode and we develop tools and technologies internally. Many other companies will do this too over the next 18 months. Our research is focused on keeping the complexity low. We cannot make the complexity explode.”

Key research aspects include optimising the trade-off between bitrate and visual fidelity, developing fast encoding methods suitable for constrained devices, and advancing performance in emerging use cases like HDR, 8K, gaming, and user-generated content.

The standardisation phase will start after January 2027 and will be a collaborative effort led by JVET.

“Everybody works on their own or with some additional companies trying to bring the best potential solution that will be evaluated in January 2027, but the one that wins won’t become the standard,” says Oisel. “Instead it will likely be used as a baseline for further development from 2027 to 2029.”

He adds, “This standardisation period will determine which tools are adopted (therefore licensable). To do that you have to prove that it delivers huge gain and also that you don't have high complexity. The issue with AI tools is that they put the complexity on the decoder side which is something that chip makers like Broadcom will fight against because they don’t want to add complexity to their hardware. If you come with a tool with huge gain but also huge complexity then this won’t be selected.”

VVC state of adoption

VVC itself has been slow to roll out, so news of a potentially superior codec launching in less than four years may stall adoption completely.

“Everybody's waiting for a trigger,” says Oisel. “The trigger could come from the content provider but to deploy that they need hardware, they need encoding solutions, and also decoding solutions on the devices.

“There are a large number of TVs that can potentially decode VVC, whether enabled or not, and a couple of mobile phone manufacturers have developed VVC decoders. There are encoder solutions too, but they are not necessarily fully optimal yet, which means you don’t reach the full bitrate gain of VVC. On the content provider side, VVC has been adopted as the standard for next-generation TV in Brazil. Content providers who want to stream TV3.0 in Brazil will have to implement VVC. Encoder manufacturers will have to comply with the requirements of their customer (TV Globo) and TV manufacturers will also need to be TV3.0 compliant.”

ATSC 3.0, which has rolled out to more than 75% of US markets, references VVC as a codec, as does DVB in Europe, but people are still waiting for a trigger.

“It could come from Brazil, but the main market right now for VVC is China. Tencent is using VVC quite a lot, where one use case is to better manage a huge number of UGC social videos. VVC could be a very good target for them to reduce the file size because, compared to HEVC, you have a reduction of between 45% and 50%. Usually it is the US that leads the way, but in this case it could be China that leads, which is pretty unusual.”

The reference codecs for mobile via the 3GPP are AVC [H.264] and HEVC [H.265] and the battle to go to the next generation has not yet started. The competition is likely between AV1 (AOMedia) and VVC (MPEG).

“AOM are to release AV2 by the end of this year and it also seems to be hugely complex on the decoder side,” says Oisel. “Will they be able to simplify it? Usually, MPEG is ahead of AOM. AV2 is using a lot of tools that were developed for VVC. So there are two parallel tracks, but the underlying technology between the MPEG and AOM standards is, to date, not much different.”

Thursday, 16 October 2025

NAB NY preview: Political backdrop casts long shadow over TV innovation

IBC

article here 
Buffeted by economics and squeezed by Big Tech, the last thing America’s broadcasters wanted was to have their news operations muzzled or their business threatened with political interference, yet that’s the realpolitik of US TV entering NAB Show NY.
While US TV networks continue to engage in a high-stakes battle for free speech, members of the National Association of Broadcasters (NAB), which carry network programming, want the support of the Federal Communications Commission (FCC) on two equally pressing business matters.
Brendan Carr, the chair of the media regulator, is painted as the villain following the politically charged comments he made leading up to Disney’s suspension of ‘Jimmy Kimmel Live!’, yet station groups want Carr to relax rules around national TV ownership and to speed up the transition to NextGen TV, or risk regional broadcasters going to the wall. Both are prominent topics of conversation at NAB New York this week.
Earlier this year NAB petitioned the FCC to reform the rules which limit the reach that any one TV station owner can have. Under the current 39% ownership-cap rule, the proposed $6.2 billion acquisition of station group Tegna by rival Nexstar would be barred.
NAB argues that the rule is outdated and “prevents local stations from achieving the scale needed to compete with global tech and streaming giants like YouTube, Amazon, Meta and Netflix – none of which face similar restrictions.”
“The message is clear,” said NAB president and CEO Curtis LeGeyt in a statement. “It is time to eliminate the outdated national TV ownership cap. Broadcasters are united in calling on the FCC to level the playing field and give local stations a fair shot to compete, invest in journalism and continue providing our communities with trusted news and public safety information.”
In a further attempt to stop Big Tech from swamping their business, the NAB is pressing the regulator to mandate a nationwide switchover from ATSC 1.0 to the IP-based broadcast platform ATSC 3.0. While the rollout covers about 75% of US markets, there is reluctance among others because, as in the UK, there is a risk of a small but significant part of the population being unable to receive any local TV if terrestrial services are shut down overnight.
Carr has previously described the issue as a “break glass moment” and this month appeared “tentatively” prepared to agree to NAB’s proposals for a hard cut-off in 2028.
Not only does NextGen TV promise superior audio and visual quality, but it can deliver personalised experiences, which are the lynchpin of future programming and advertising revenues for cash-poor stations.
NAB argues that a Spring 2028 sunset should give TV set vendors enough time to upgrade their product with new tuners and for TV stations to revamp internal systems to 4K.
Execs from Nexstar, Sinclair and Hearst take up the debate in ‘Technology and the Future of the TV Station Group’, while representatives from E.W. Scripps and the Graham Media Group discuss the ‘State of the Industry’ in another conference session.
Rebuilding credibility in news
Sinclair and Nexstar both dropped ‘Jimmy Kimmel Live!’ across their ABC affiliate stations and only relented once Disney had returned the show to air. While execs from these groups are speaking, don’t expect them to reference the controversy.
Do, though, expect tougher questions to be asked at a strand of sessions examining the future of journalism. BBC tech journalist Thomas Germain moderates a panel including speakers from CNN, Axios and Hearst around issues of ‘Trust, Misinformation, and News Credibility’. Part of this is the corrosive use of AI to create deepfakes and algorithm-driven misinformation. At the same time, audience fragmentation “fuelled by filter bubbles and partisan echo chambers”, says NAB, “has intensified polarisation and eroded the shared reality that credible journalism depends on.”
This is happening against the background of a White House intent on having news organisations sign NDAs to prevent unlicensed reporting or have their Pentagon access blocked, while CBS News teams are ordered to report to conservative-leaning journalist and entrepreneur Bari Weiss, recently installed as their new boss by CBS (and Paramount) owner David Ellison.
Weiss has already stamped her mark, telling staff at the network’s news division to explain what they do during working hours, apparently in the strictest confidence. The Writers Guild of America East begged to differ and urged staffers not to comply.
World Cup puts sports on agenda
As attention turns to next year’s FIFA World Cup hosted across the US, Canada and Mexico, closely followed by the 2028 Los Angeles Olympics, sports tech American-style is moving centre field.
The country has pimped up its already tech-laden stadia with private 5G networks, ready for a range of media activations during both events.
Speaking at the session Reimagining the Arena: Innovation at the Heart of Live Sports is Jake Kornblatt, VP of Global Enterprise and Commercial Lead for Sports & Venues at Verizon which is FIFA’s official telco partner supporting global media broadcasts of the World Cup.
It has teamed with NVIDIA and will use AI to manage numerous camera feeds and highlight key moments for live production. 5G networks at all World Cup venues will also be used to connect referees, coaches and team members throughout the tournament. 
The stadium is no longer just a place to watch the game, says NAB with more than a touch of hyperbole: “it’s becoming the ultimate experience engine.”
A keynote from the US Soccer Federation will address efforts to lift the domestic game on the same tide as World Cup 2026.
There are few faster-growing areas of sport than those powered by women. Deloitte predicts that global revenues in women’s elite sports will reach at least US$2.35 billion (£1.88 billion) this year, a rise of 240 per cent in four years, with significant growth in broadcast ($590 million, 25 per cent) and matchday revenues ($500 million, 21 per cent). The two highest revenue-generating sports, according to the analyst, remain basketball and football, with the former on track to become the leading revenue-generating women’s sport globally.
Against that backdrop it will be fascinating to hear from TOGETHXR, the country’s fastest-growing women’s sports media company. Its co-founders include World Cup winner Alex Morgan and WNBA and Olympic champion Sue Bird.
The chief content officer and executive chair of TOGETHXR talk about ‘Shaping the Next Era of Sports Media Coverage’ with content ranging from social media to scripted and unscripted video, docuseries, and merchandise anchored by the tagline, ‘Everyone Watches Women’s Sports.’
Bigger, better, more cost-effective
NAB NY’s show floor is a fraction of the size of the one in Las Vegas but still holds a healthy selection of vendors. The theme cutting through the lot of them is helping broadcasters deliver more content, faster, and across more platforms than ever.
In many ways it’s the same fundamental challenge: how to produce more with fewer resources. Budgets to produce events are shrinking, yet audiences demand higher-quality content in greater volumes, often personalised by language, platform, or format. The same forces, internet infrastructure and audiences eager for content among them, spell opportunity if the right formula is applied.
The primary challenge for broadcasters today “is figuring out how to maintain a stable business model when every piece of the media value chain is undergoing transformational change,” says Steve Reynolds, CEO of Imagine Communications. His solution is IP deployed using the SMPTE ST 2110 standard.
That’s not because IP is the newest tech, he says, but because it enables everything else — remote workflows, cloud integration, distributed teams, and scalable infrastructure. “IP is the enabler, helping media operations pursue new business models that were previously out of reach.”
MultiDyne’s president Frank Jachetta agrees, “Right now, broadcasters are really juggling three different worlds: SDI, SMPTE 2110, and then IP, streaming, and cloud. Each one has its own set of demands when it comes to equipment and expertise, so the real challenge is making them all work together without complicating the workflow. But that’s also where the opportunity lies. If you can align those workflows, it opens the door to more efficient, flexible operations. It also supports things like REMI production, which we see as a big part of the future.”
Telemetrics’ VP Michael Cuomo believes the main pressure points are around reducing production costs. “Because budgets are limited, yet production studios are being asked to produce more content, we're addressing this concern by incorporating robotics that streamline workflows.”
An emerging opportunity, identified by Duncan Miller, Director of Global Marketing, Adder Technology, lies in the rise of virtual production.
“By merging real-time rendering with traditional production techniques, virtual production enables agile content creation, reduces post-production overheads, and allows effective collaboration from anywhere. For broadcasters and content providers, this means the ability to meet audience demand for more content, faster, without compromising on quality.”

Wednesday, 15 October 2025

BTS It Was Just An Accident

IBC

The Cannes Palme d’Or-winning critique of Iran’s police state was made in secret as an act of defiance. IBC365 sits down with the film’s Editor Amir Etminan to learn more about the fearless filmmakers’ process.

article here

Criticising the Iranian state in Iran risks intimidation, imprisonment, or worse, but filmmakers continue to defy censorship. The latest to achieve recognition is the Palme d'Or-winning drama It Was Just an Accident.

“Just because this film is about not being able to speak or work freely in Iran today doesn’t mean we should stop attempting to work or speak freely,” says Amir Etminan, the 42-year-old Iranian who edited the feature for Director Jafar Panahi. “Making this type of film in Iran actually fights against those issues.”

Panahi has been making politically charged films since 2000, when his third feature, The Circle, was openly critical of the treatment of women under the Islamist regime. His films, including The White Balloon (1995), Offside (2006), and No Bears (2022), consistently win awards. Yet, he has been imprisoned, banned from travel, and placed under house arrest for most of the past decade.  

The tragi-comedy It Was Just An Accident is judged to be his most overtly anti-government tale yet. At its heart, the story presents a moral dilemma: what would you do if you found your former jailer, and how could you be sure it was them anyway?

Co-financed by French company Les Films Pelleas, the film is France’s submission for Best International Feature at the 2026 Oscars and was made under the noses of Iranian officials. 

“Unfortunately, in Iran, especially for independent cinema, we are used to making films secretly,” comments Etminan, who previously cut No Bears. “We’ve normalised it. Mr Panahi is a very popular and well-known person in Iran, so we already had this fear and stress that there would be problems.” 

Pulling it off

First, the team obtained fake permission documentation to make a short film under another crew member’s name. The cast and crew were kept to a minimum of just 20 people, with several scenes taking place inside a van to disguise their activity. Cinematographer Amin Jafari shot it all on a RED Komodo, a small cine camera that produces significantly larger files than a cell phone. Etminan was on set every single day, acting as both digital imaging technician (DIT) and editor.

“Every day I would take the memory cards home, make a copy of them, and convert the files to proxies before giving the cards back to Panahi to hide,” he says. 

Etminan used a modest 2020 MacBook Air with 8GB RAM and 128GB storage to edit the full film entirely offline. He often worked 18 hours a day with no editing assistant to create the proxy files from the RED footage. 

“Because the situation was very risky and secret, I only had a small laptop and a tiny SSD so as not to draw attention.” 
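The nightly routine Etminan describes is a classic DIT offload: copy each card, verify the copy bit for bit before the card is returned or hidden, then generate proxies for offline editing. A minimal sketch of the offload-and-verify step, with hypothetical paths; the proxy transcode itself requires RED-aware software, so it is only stubbed out here:

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, chunk: int = 1 << 20) -> str:
    """Checksum a file in 1 MB chunks so large R3D clips don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def offload_card(card: Path, backup: Path) -> None:
    """Copy every clip off a camera card and verify each copy bit for bit."""
    backup.mkdir(parents=True, exist_ok=True)
    for clip in sorted(card.rglob("*.R3D")):
        dest = backup / clip.name
        shutil.copy2(clip, dest)
        if file_hash(clip) != file_hash(dest):
            raise IOError(f"Verification failed for {clip.name}; do not wipe the card")
        # Proxy generation from R3D would happen here via RED-aware
        # software (RED's own tools or an NLE); stubbed out in this sketch.

# Hypothetical mount points, for illustration only
offload_card(Path("/Volumes/KOMODO_CARD"), Path("/Volumes/TINY_SSD/day_12"))
```

Verifying before handing the cards back matters most in a workflow like this, where a hidden or confiscated card might never be seen again.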

Nonetheless, on the 26th and penultimate day of shooting, Iranian security police came to visit the team. Luckily, the RED Komodo was already mounted on a car and being driven away from the set with Director Panahi and the actors. However, Etminan and several others were caught, and the laptop containing the full, edited film was found in the back of another vehicle.

“The police opened the back of the truck and grabbed the backpack, which had the laptop inside. They started to question us and wanted us to stop shooting.” 

Next to the bag was a pair of cameras that were props for the character of a wedding photographer in the film. Etminan made exaggerated efforts to protect the cameras in a bid to divert police attention away from the laptop. 

“The police thought these cameras were the main cameras we were shooting the film with. They asked me to open the cameras and tell them what information was on them. I said I didn’t know and that the batteries were not working, which, fortunately, they were not. The police demanded the memory card from the camera, which I gave them, so they went away satisfied that they’d taken something from the set that they thought was the main card of the film.” No arrests were made. “They came with 20 people and couldn’t find a single frame.” 

A few days later, with the offline edit complete, Etminan transferred the finished cut to an 8TB SSD in Tehran and handed it to someone “completely unrelated to cinema”, who then transferred the files over the internet to France.

In France, the edit decision list (EDL) was conformed to the raw footage with VFX and colour grading applied. Etminan supervised all of this via a video call from Tehran. 

DoP Jafari kept the number of takes to a minimum to reduce the risk of discovery and to avoid having to handle and secrete large volumes of data. 

“One reason I went to the set each day was to calculate the length of every shot,” Etminan explains. “I had to tell them whether the shot we were filming fitted with the shots before and after it. We would calculate that on set for every scene across the whole film, using notes that were all in my head.”

He adds: “Mr Panahi was so accurate that after the rough cut, we didn't have more than 10 minutes of unused footage leftover.” 

Flip the script

The drama itself deals with weighty moral issues about justice, torture, and forgiveness, but sets them in deliberately absurd scenarios that would not be out of place in a Coen brothers film.

Etminan explains that such “bitter comedy” is a coping mechanism. 

“Real life is a combination of comedy and tragedy,” he says. “In the worst situations and the most bitter part of our lives, we retain our humour. That kind of saves us and helps us survive.” 

Much of the dialogue in the film comes directly from political prisoners’ experiences in Iran.  

“Comedy also helps the storytelling. Psychologically, using comedic moments and small jokes helps us to avoid the complete darkness of such a harsh reality.” 

Similarly, the tale begins by evoking the audience’s sympathy for a father who accidentally hits a dog while driving his family home. This incident means their car needs an emergency repair at a nearby garage, where we are introduced to a suspicious mechanic, Vahid. From this point, the roles of victim and villain reverse. 

According to Etminan, this is achieved through the writer-director’s approach to story. “In Panahi’s films, the camera won’t move before the characters or before the characters’ actions,” he explains. “Instead, the character moves the camera from one spot to another.”

In the first few minutes of the film, Eghbal is the main character, and the audience gets to know his family. The camera is motivated by him. Then, Panahi puts the focus on the character of Vahid, the mechanic, who becomes the one motivating the camera to move from one spot to another.

“Even when all our characters are inside the car, we do not move the camera outside the car because Vahid has not moved outside the car. These shifts in perspective between protagonist and antagonist are the classic forms of storytelling, but in the independent cinema of Iran, we prefer not to use those classic forms.” 

Facing the future

The risks of making such films are borne by everyone involved, Etminan included. In 2022, after editing the feature No End, directed by Nader Saeivar and about the Iranian secret police, he was forced to flee the country and live in Turkey.

Several times he has been interrogated by the police, but says these interviews weren’t “harsh”. 

“The situation for students and young people in Iran is much worse. In general, the behaviour of the police in Iran towards people working in cinema is a bit more polite, because they know it won’t look good for them when news of their [detainment or suppression] is broadcast internationally.” 

After Panahi was arrested and sentenced to six years in 2022 for ‘propaganda against the regime’ (for expressing solidarity with fellow Iranian filmmakers Mohammad Rasoulof and Mustafa Al-Ahmad), filmmakers at the Venice Film Festival and around the world voiced their own support.  

The awarding of the Palme d’Or to It Was Just An Accident has been seen as a sign of the international film community’s continued support for artists working against state oppression everywhere.

“[Panahi’s] persecution has caused global backlash. He’s become a thorn in their side,” Etminan says. “Personally, I still travel to Iran whenever I want. I go there. I come back again. Honestly, we’re not afraid of repression or prison. After [experiencing it], you get used to it and you stop caring. 

“What I learned from Mr Panahi is that this is our country. If one day someone needs to leave, it's not us, it's them. It's the regime.”

 

Friday, 10 October 2025

Mystery, Beauty and Threat: The Endangered World of the Pangolin Revealed

 interview and copy written for RED

article here

Pangolin: Kulu’s Journey, now streaming on Netflix, was recently honored with the 2025 Jackson Wild Media Award for Conservation Long-Form for its powerful storytelling and impact. The film’s intimate cinematography and urgent conservation message immerse viewers in the mysterious and captivating world of pangolins. The small, scaly mammals found in Asia and Africa may look unassuming, but they’re among the most poached and trafficked animals on the planet. Academy Award–winning director Pippa Ehrlich tells the story of one pangolin rescued from illegal trade, as the film follows wildlife photographer and cinematographer Gareth Thomas on his mission to rehabilitate the vulnerable animal and prepare it for a life of freedom in the wild.

Back in 2022, Thomas was a conservation volunteer who took part in a sting operation that rescued the pangolin that came to be known as Kulu. He explains that after rescue, pangolins need an intensive care process at the Johannesburg Wildlife Veterinary Hospital, which requires that they be walked every day.

“Because Temminck’s pangolins don't eat so well in captivity you have to take it out into the wilderness and let it forage for its own food. I was pretty much out with pangolins from two to eight hours every day and since I was spending so much time with them and building these relationships I started noticing all this unseen and unusual behavior. I began to film it just for documentation purposes. That's where it started.”

With wildlife cinematographer Steven Dover, Thomas put together a five-minute teaser from footage shot over two years, with the objective of pitching it to producers and getting a longer-form film made. “At the top of our list was Pippa,” he says.

This was in 2021 when Ehrlich, a fellow South African, was a director in demand following the global acclaim of My Octopus Teacher, which highlighted human interaction with a wild animal.

“I was getting a lot of emails from people about films they wanted me to make and I was looking at a lot of teasers,” she recalls. “When I saw Gareth’s teaser, there was a lot about sting operations and trafficking that I felt I’d seen before, but then I looked at the footage of the pangolins themselves and it was just absolutely beautiful and intimate. It was the first time I'd seen these creatures in a way where I actually got a sense of what they are as beings.”

She adds, “I realized that these guys really knew how to capture the essence of what a pangolin is. I spoke to Nick Shumaker [producer at Anonymous Content] and he was also really excited when he saw the footage, that there was a story here worth pursuing.”

While Thomas and Dover continued gathering material, Ehrlich and Anonymous Content secured the commission from Netflix.

“It was Steve who encouraged me to call RED,” Thomas says. “I was still shooting on my Canon EOS R5 but Steve was a very experienced wildlife cinematographer. He knew that we had to shoot on RED no matter what. I made a call to RED and told them we were shooting this film and that we'd love to be able to amplify it.

“It meant that for the final 18 months of production we were hyper-focused on shooting real Blue Chip natural history, incredibly beautiful material, not just of the pangolin but all the other wildlife in the film.”

The biggest advantage of the camera for the filmmakers on this project was V-RAPTOR’s Super 35 sensor for macro photography. “The S35 sensor gave us the ability to crop into the frame so that we could get even closer without always having to put doublers on the end of macro lenses,” Thomas says. “That was the big selling point for the S35.”

The filmmakers shot 8K macro knowing they could crop to 4K in post for delivery. “That gave us an incredible amount of zoom on the macros which was extremely helpful. It's nice being able to have a big frame shooting macro, but when you're seeing it on a smaller screen you're losing a lot of the detail so you want to be able to zoom in as much as possible.”

Much of the activity of the teeming insect world shown in the film, as well as the flicking action of the pangolin’s long tongue eating ants, happens so fast that they shot the V-RAPTOR at 240 frames per second in 4K.

“This was incredibly valuable just to slow the action down to something that the human eye can register,” he relays.
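Both techniques reduce to simple arithmetic: an 8K capture cropped to a 4K delivery allows up to a 2x punch-in with no upscaling, and 240 fps conformed to a 24 fps timeline plays back at a tenth of real speed. A quick sketch, with resolutions assumed for illustration rather than taken from the production’s specs:

```python
def max_crop_zoom(capture_width_px: int, delivery_width_px: int) -> float:
    """How far you can punch in before the image must be upscaled."""
    return capture_width_px / delivery_width_px

def slowmo_factor(capture_fps: float, timeline_fps: float) -> float:
    """Playback slowdown when high-frame-rate footage is conformed to the timeline rate."""
    return capture_fps / timeline_fps

print(max_crop_zoom(8192, 4096))  # 8K capture to 4K delivery: 2.0x crop zoom
print(slowmo_factor(240, 24))     # 240 fps on a 24 fps timeline: 10.0x slower
```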

Aside from the macro Canon RF 100mm, they packed RF 24-105mm and RF 70-200mm zooms and a 50mm Sumire Prime. “The biggest focus when filming pangolin is to have the most minimal rig possible. I don’t even like shooting with a matte box on. It’s pretty much just the brain, screen, lens and top handle. You’ve got to be mobile the whole time because you’re running through the bush.

“You also want to be as close to the ground as possible. It's not a case of putting it on a tripod. To get down to an ant’s level we’d often improvise and steady the RED on bean bags.”

Ehrlich and Thomas began the project with another pangolin in mind. Then they met Kulu.

“I went to visit Gareth at the Wildlife Hospital and while we were there met this absolutely traumatized, hyperactive, inconsolable little animal,” she explains. “He was originally called Gijima (which means ‘to run’ in Zulu). He was the smallest pangolin in the ward and impossible to manage.

“So, I was surprised when Gareth called me up two months later and told me he was taking that ‘crazy little pangolin’ up to Lapalala. I told him to film everything because who knew what was going to happen? The story evolved from there. We were able to get such incredible material of Kulu because Gareth was there for the entire process but also because we had such incredible access at Lapalala Wilderness Reserve. To have that kind of freedom to move around in the bush just makes everything so much more practical.”

Lapalala Wilderness is a 48,000-hectare reserve, a four-hour drive from Johannesburg, where Thomas guided the young creature (now nicknamed Kulu, which means ‘easy’ in Zulu) on the next stage of its rehabilitation. The film crew, including associate producer Corné van Niekerk, secured filming permits and coordinated with the on-site anti-poaching unit to ensure safety for the pangolins and those transporting Kulu.

“It’s a Big Five reserve meaning there are big cat predators, elephants and rhino wandering wild,” he says. “You can't go walking out 10 kilometers in the middle of the night. The entire principle is to carry as minimal gear as possible and make walking through the environment as easy as possible.”

At the end of the film, Kulu is successfully released into the wild and an onscreen title informs us that it is unlikely a human will ever see him again. In the film, you can see Thomas almost tear up at the thought that this day would come.

“Pangolins are very individual and they are natural loners,” he says. “So, all the ones being rehabilitated get to that point where they want to be wild. Kulu wanted to be wild before I could allow him to be wild! So that was always his ultimate goal and mine too. There's no sense of sadness on my part. Of course, I miss the little guy. He was my best friend for a year and a half and my guide into the wilderness but there's no part of me that is left wanting. It’s just happiness.”

The film was graded by colorist Greg Fisher at Company 3 in London and produced with the help of the African Pangolin Working Group (APWG), of which Thomas is an ambassador.

The hope is that Pangolin: Kulu’s Journey will bring a new level of public awareness that has the potential to change the outcome for these unique animals.

A special thanks to Pippa Ehrlich and Gareth Thomas for capturing a rarely seen world with intimacy and precision, and for sharing a story that quietly underscores what’s at stake—and what’s still possible—for one of the planet’s most endangered species.

Filming Real Fire: ‘The Lost Bus’ and the Art of Authentic Chaos

RedShark News
Inside the making of The Lost Bus, where the team ditched virtual sets to shoot real flames, dusk light, and dynamic camerawork for a visceral survival thriller.

article here

At one point in The Lost Bus the heat from the fire surrounding the titular vehicle is so intense that the imagery turns abstract with just flickering black silhouettes against dark orange.
“Paul wanted it to be like the children’s game of peek-a-boo – when you have to close your eyes so you only see fractions of movement,” says cinematographer Pål Ulvik Rokseth of director Paul Greengrass’ motivation. “It’s just a wall of smoke and amber, like a horror film or a sinister kind of science fiction. It’s a nightmare experience.”
Given that this based-on-fact account of a busload of primary school children stranded in the middle of a raging inferno features actual kids, the only way to tell it safely seemed to be virtually. At least, so they thought.
ILM worked up pre-visualization for the journey the bus would take on a Volume stage, in which the set would be surrounded 360 degrees by LED panels.
“The plan was to light with the LED wall and also for the actors to see and react to the environment,” Rokseth explains.
They were pretty far down the virtual road before Greengrass got cold feet. “I just felt it wasn’t who I am as a filmmaker,” he said in a Q&A following the film’s screening. “It didn’t feel ultimately real enough for me.”
It appears that the stop-start calibration of camera tracking with digital backgrounds was anathema to Greengrass’ signature fluid docu-style.
“It was just so much more inspiring for the actors and for the camera team if we were on a moving bus reacting to actual fire and smoke,” says the DoP, who previously shot Greengrass’ docu-drama 22 July. “We found that we could tell the story with more realism and more dynamism if we shot it practically.”
They relocated production to an abandoned campus in Santa Fe, New Mexico, with enough space for roads and to create traffic, and laid gas lines to burn controllable jets of fire.
This gave Rokseth the challenge of recreating lighting conditions realistic to those of huge wildfires, which effectively block out most daylight.
“Real wildfires produce this strange, occluded light — like a prolonged eclipse but there was no way we could recreate that over such a large surface area so we shot almost the entire film at dusk. Hopefully, it doesn’t feel like fantasy fire; it feels lived. That authenticity was always the goal.”
When filmmakers shoot at magic hour they are normally seeking that romantic sunset glow which looks so good on skin tones. Not here. With burnt faces caked in soot, the aim was to throw the characters into an apocalypse.
“Pål has this wonderfully unvarnished aesthetic that I share,” Greengrass explains. “I told him I didn’t want to ‘light’ this film in the conventional sense. I wanted the fire to light it as much as possible.”
They rehearsed sequences during the day, then shot just a handful of passes in the roughly 90 minutes before, during and just after dusk. Much of the daytime was given over to practising safety procedures and timings for the gas burner FX.
Rokseth says, “When you shoot with Paul you never set up a shot. You shoot the whole scene in one, so you have to light 360. It's more like observing and documenting the story within the parameters that he sets.”
In this sense, he says filming The Lost Bus was like capturing a concert performance. “You can prep and prep and do whatever you can to make everything work but when you go on stage you have to deliver. Paul’s approach to this story was for us to go in and document what was happening in front of us as if we were seeing it for the first time.”
Since gas burners output a yellowish light rather than the variety of warmth familiar from burning carbon, Rokseth had to augment fire elements with tungsten lights, backlighting through heavy smoke FX.
He populated the set with cherry pickers and condors “like football stadium gantry lighting” as well as LED SkyPanels set to tungsten. “Dimming those down and making the tungsten bulb glow is where I started to find the right characteristics of fire, mixed with the power of the actual flames.”
The bus interior was lit with small LEDs and tubes to fill in the shadows, and the DP relied on the ARRI Alexa 35 to do the rest.
“The Alexa 35 has an enormous amount of dynamic range, especially for fire. The colour reproduction is one of the best I’ve seen; the reds become even more vibrant, plus you can see all the way into the fire, so we can pull it back in post.”
Scenes were principally shot two-camera, with coverage fleshed out by DJI Osmo Pockets, RED Komodos hard-mounted on vehicles, and DJI Inspire 3 drones.
Rokseth himself operated A-cam while James Goldman commanded B-cam. Their choreography was a key part of prep.
“The vibe of the whole film is to tell the world from the point of view of the characters. We prepped moves with the actors during the day, but in a sense that goes out of the window when you film, when you are in the heat of the moment, guided by instinct. You cannot anticipate. It is not a knowing camera. It is basically observing from the character’s perspective. At the same time, we have to give the editor latitude to play with, so we can’t leave a shot too early.”
Naturally, this piles a lot of responsibility on the camera operators which is why Rokseth likes to take the lead.
“You can always have another camera pick up the shots that you don’t get, but to tell this story it’s difficult to send in somebody else. It’s better for me to do it myself because then it’s my decisions, and because it’s in the heat of the moment I can do this instantly. I couldn’t do that remotely by relaying instructions to another operator.”
While Rokseth would follow one character, Matthew McConaughey’s hero bus driver for example, the B-cam would go in another direction. The trick was not to catch each other in frame or get in each other’s way.
“Not only do we have to stay out of the other camera’s shot, but you also have to know what you did in the first take so that on the second take you can do the opposite,” he says.
There was no time to review on monitors so Greengrass would give immediate feedback after each take. “Paul was our eye. He tells us what worked, what we missed and what he needed more of.”
Their choice of zoom reflected Greengrass’ reportage roots. “A zoom has long been the documentary maker’s choice, and with a Super 16 format you have a tool that is light, nimble and able to tell multiple shots in one take,” Rokseth says. “We wanted the package to be the digital equivalent of Super 16.”
This turned out to be the Canon 8-64mm T2.4, which the operators could carry handheld or shoulder-mounted while framing for wide or tight shots. It also meant cropping into the S35 format and halving the 4.5K resolution, but by shooting open-gate ARRIRAW the picture could be blown back up to full frame in post and the output returned to 4K.
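The trade-off Rokseth describes is easy to quantify: crop in to match the S16 lens’s smaller image circle, then enlarge the cropped frame to hit the deliverable. A rough sketch with assumed figures, since the exact active sensor areas depend on the recording mode:

```python
def upscale_factor(cropped_width_px: int, delivery_width_px: int) -> float:
    """How much a cropped frame must be enlarged to reach the delivery resolution."""
    return delivery_width_px / cropped_width_px

# Assumed: a ~4.5K open-gate frame cropped roughly in half for the
# S16-coverage zoom, then blown back up to a 4096-wide 4K deliverable.
cropped_width = 4608 // 2  # ~2304 px wide after the crop
print(f"{upscale_factor(cropped_width, 4096):.2f}x enlargement")  # ~1.78x
```

A modest enlargement like this is routine in finishing; capturing open gate is what keeps the blow-up within acceptable limits.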
With Stephen Nakamura of Company 3, Rokseth built a LUT which was a mix between a print emulation and film stock emulation. “If you shoot everything on one film stock it's more practical because you don't have to divide everything up into interiors, exteriors etc and since LUTs are essentially film stock, I usually just tend to use one for the whole project.”
He describes the colour scheme as “red fire, black night and weird amber universe” which contrasts with scenes set in normal daylight.

Scenes of the fire get more abstract and almost expressionistic the deeper the bus travels into the conflagration. This is most apparent in a scene in which school teacher Mary (America Ferrera) gets out to look for water at a trailer park. The action here, which includes exploding gas tanks, is shown with flickering, almost strobe-like effects in black silhouettes against dark orange.

“We had a massive amount of smoke effects and it’s disorientating. When you see real wildfire references, and when you talk to the firefighters who were there at the real event in 2018, you realise how much of a horror scene it was.”

Wednesday, 8 October 2025

Behind the scenes: The Lost Bus

IBC

article here

A white-knuckle ride of a disaster movie with a burning environmental message, plotted by editor William Goldenberg and director Paul Greengrass
Despite best efforts to contain the wildfire engulfing the Northern Californian town of Paradise, the local fire chief, exhausted, admits defeat. At a press conference he declares, “We're creating more fires, bigger fires, every year. We are being foolish.”
It’s the nearest The Lost Bus comes to damning us all to hell for the environmental damage that helped turn Paradise into a tinderbox in the real life story depicted in the film.
“We went back and forth with this line and almost took it out, but then when the fires happened in the Palisades and Eaton we just had to leave it in,” explains William Goldenberg, the film’s editor, of the wildfires that destroyed large areas of Los Angeles last January.
At the time, Goldenberg was in postproduction on The Lost Bus with director Paul Greengrass.  “We were very worried. I was wondering if my house was still there, which it was, but I got lucky. I have a lot of friends who lost homes, but life goes on.”
The Pacific Gas and Electric Company was found culpable for the 2018 Paradise fire, which cost dozens of lives and caused an estimated $16.5 billion in damage; PG&E had to pay $13.5 billion in restitution, as the film makes clear in an onscreen coda.
“We didn’t want to cross the line into preachy or being too on the nose but it was also too important to leave out,” Goldenberg says. “I appreciate the fact that Paul, as a writer, is able to have this strong message in the movie without hammering you with it. First and foremost this is a great yarn and a movie celebrating heroism.”
The Lost Bus recounts the events from the point of view of Kevin, a bus driver played by Matthew McConaughey, and Mary, a primary school teacher (America Ferrera), and their attempt to save 22 children from the fire in the forested, hilly neighbourhood.
It is filmed with the signature kineticism that Greengrass brought to the Bourne franchise and Captain Phillips: shaky handheld camerawork and rapid cutting that intend to leave the viewer with the feeling of fighting for air.
“Paul and I are instinctual filmmakers,” says Goldenberg who won the Oscar for editing Argo. “We’re constantly asking, ‘Is it exciting? Are we getting the story across? Can we tell multi-layered stories at the same time as being clear?’ You use your best instincts but you can never know what's in the mind of someone else.”
The first third of the film establishes the geography of the forested and residential area including where Kevin’s yellow school bus is in relation to the fire, the evacuation zones and how quickly the fire grew out of control. It’s a constantly shifting series of events that underlines the scale and force of the natural disaster.
Test screenings were invaluable as a way of checking whether the audience was too disoriented by the blizzard of information presented via maps, TV and radio reports, and first responder communications.
“You're constantly surprised by what people get and what they don’t,” Goldenberg says. “A lot of times what you think might be clear is just confusing and other times something you think is going to be completely confusing, people say, ‘No, I saw that one blink of an eye and I knew exactly what they were thinking.’”

For all the apparent complexity of the edit the story itself is a simple ticking clock of survival. “The fire crew are trying to put the fire out. He's trying to save the kids. At a basic level, even if you don't get every detail, you understand the story.”
An example of the complexity Goldenberg had to grapple with is a scene of first responders gathering around a large table to coordinate their life-saving response. Greengrass shot it in one setup with all the characters talking at the same time.
“Everybody is individually miked but when you hear the mix track you can't hear a thing. It’s just noise,” relays Goldenberg. “There’s at least seven people around that table talking at the same time so you have to watch the dailies seven times because you have to listen to the Chief’s dialogue and then the deputy chief’s dialogue and so on. There will be a gem of dialogue that you might miss otherwise.”
Greengrass shot several scenes in this fashion to encourage improvisation from a cast that mixed actors (such as Yul Vazquez, playing fire chief Martinez) with non-actors like John Messina, who plays Martinez’s deputy and is himself a former firefighter who led the response to the original fire that day. Goldenberg also interweaves about 15 shots of real news footage from the Paradise fires.
“We had a tonne of video from mobile phones but the vertical format kind of pulled us out of the movie so we rejected those whereas video shot in landscape format blended better.”
A scene in which Mary leaves the bus to go find water for the children is depicted almost entirely with black silhouettes against the dark orange of the blaze.
“At this point they are inside the smoke column and all daylight is blocked out. It was supposed to be impressionistic and abstract. We added a lot of smoke and the sparks and embers in post because obviously we couldn’t have real fire around those kids. We were able to dial the effect up or down to make it experiential, because that’s what it is really like if you’re in that situation. Some of the shots were spectacular but we also had to trim it down and use the pieces that were the most story-illustrative. The pieces of film that would tell the story the best.”
Goldenberg was finishing work on his directorial debut Unstoppable for Amazon when filming began on The Lost Bus. Two other editors worked with Greengrass to assemble dailies and to cut the first pass.
These were Paul Rubell, with whom Goldenberg had worked on Miami Vice (2006) and who Goldenberg says has a similar editing style, and Peter Dudgeon, who assisted Goldenberg on two previous films for Greengrass, 22 July and News of the World.
“While they were shooting, I was constantly looking at dailies and giving feedback because I didn't want to just drop into the production cold without having seen anything. Every weekend, Paul would send me a cut, and I'd watch it or he’d send scenes as they got bigger and bigger which I’d give feedback on. When they were finished shooting I took over and did the post-production part.”
This included integrating the sound effects of the fire and the score by James Newton Howard. “As we’re cutting, supervising sound editors Rachael Tate and Oliver Tarney are delivering us tracks. They’re taking scenes and cleaning up the dialogue, because in production there were a lot of fans and bus noise rendering some of the dialogue inaudible. We’d give the cut to Rachael and Oliver for cleaning and it would come back and we could magically hear it all.”
According to Goldenberg, this process happened earlier in postproduction than is normal. “What generally happens is that you only get this at the sound mix on the dubbing stage, and it wastes a lot of time. Here, Oliver and Rachael got going right away, so by the time we started screening for people we had a nearly fully developed soundtrack.”
The same happened with the score. Newton Howard, who also composed for News of the World, began writing as soon as principal photography had finished. “We were constantly putting his material in, giving him feedback and developing it so the music has a chance to evolve, as opposed to being added right at the end of the process.”
Goldenberg says the concept for the film’s music was to begin with it feeling very subdued and as the film goes along, it grows and swells with the emotion “so by the end we're playing the music full-on but hopefully it just kind of evolves naturally.”
As propulsive as the action is there are moments in the film when the camera is still and the soundtrack falls silent, such as when Mary and Kevin try to comfort each other in their darkest hour.

“In film theory they call this dynamic range,” he says. “Part of the job as an editor is to figure out where the film needs to stop and breathe and where it doesn't. If you go like a freight train all the way through then it's going to get boring. So, we're very conscious of having those moments for the audience to come up for air before we go dive down again.”
Filming fire in magic hour
The initial plan was to make the film virtually, on a Volume stage. Greengrass envisioned something with the immersive scope of the Sphere in Las Vegas.
“That was my first instinct. I went to see the U2 show at the Sphere and it was absolutely extraordinary. You're in this huge theatre with a wrap-around screen and you feel like you're in the desert.”
Given all the safety issues with fire and a cast of kids, shooting on a stage seemed to be the only way to do it. They made lots of tests and were pretty far along in the process when Greengrass got cold feet.
“I just felt it wasn't who I am as a filmmaker,” he says. “It didn't feel ultimately real enough for me.”
Instead, they located the production at an abandoned campus in Santa Fe, New Mexico, in an area several times the size of Pinewood Studios. It had roads and the space to drive a bus, create traffic and lay gas lines to burn controllable jets of fire.
The fire from the gas burners was augmented in CG at facilities including Cinesite. VFX Supervisor Charlie Noble had his team photograph lots of real fires to add to the authenticity of fire enveloping a landscape.
The last piece of the jigsaw was shooting the story in lighting conditions which, to be realistic to the experience of huge wildfires, blocked out the daylight. “It’s like being in a deep, deep eclipse,” describes Greengrass.
The solution was to shoot in the 90 minutes or so of fading light at dusk. With cinematographer Pål Ulvik Rokseth (22 July) they rehearsed sequences with the actors, including all safety procedures and gas burner FX, during the day, then shot three takes consecutively at magic hour. The entirety of the wildfire sequences, which make up most of the film, was shot this way.
Goldenberg says, “They would rehearse during the afternoon and then do really long takes with multiple cameras, maybe two or three passes, and then it got too dark. They would also shoot their rehearsals and stuff that was really too dark but that sometimes we were able to use.”
For Greengrass this truncated filming window “created a rawness and immediacy, almost like a stage play where you can only perform once.”
“It gave the film a live, visceral quality — you really feel like you’re on that bus with them.”