Tuesday, 10 February 2026

ISE2026: Thriving on an integrated identity

IBC

A show which mixes a vast number of different business areas shouldn’t work but it does because the underlying technology is finally integrated.

article here

ISE 2026 in Barcelona attracted a record 92,000 attendees as the remarkable rise of professional AV shows no sign of slowing down. In the 22 years since the first ISE event drew 3,500 people to Geneva, the Pro AV industry has professionalised and matured. Numbers this year were undoubtedly boosted by the permanent closure of the 30-year-old exhibition Prolight + Sound, but what was notable about this edition of ISE was an industry confident of its identity.

This has not always been the case. ISE stands for Integrated Systems Europe, words which once stood for the technical and localised business of systems integration but which now carry more universal weight. A show which still mixes expensive home cinema with bus stop signage and police control rooms with giant stadium displays finally seems to be speaking the same language. It has become integrated.

As Futuresource Consulting note, “ISE has evolved from a collection of specialist AV exhibitions into a single, integrated platform representing a fully connected industry.”

The applications for offices, classrooms, churches, sports venues, hotels and supermarkets may remain distinct, but the boundaries between them in technology terms are being erased.

You could readily imagine how streamers and broadcasters could stake out territory at future ISEs as TV and film production and distribution itself becomes almost an application area of AV. Broadcast engineering, like AV, has become increasingly standardised and software-driven. Live sports are already being broadcast in experiential venues like COSM while esports is built on equipment that works in both broadcast and AV.

The majority of AV products now operate as network-connected devices rather than standalone systems, enabling greater flexibility, scalability and integration across applications. It is this which is driving the global market for Pro AV up to $258 billion by 2030 - an astonishing 21 percent growth rate - according to Caretta Research.

Indeed, controls, infrastructure and software now represent 37% of the entire pie at $79bn [per Caretta] as the industry evolves beyond hardware to AV-over-IP and centralised management systems.

“It’s clear that integration is where the momentum is,” said Sean Wargo, consultant at Apogee Insight. “The highest areas of growth are coming from services including cloud services and data management.”

Another significant driver is the experience economy: "technology that enables personalisation, interactivity, and immersive experiences," said Mike Sullivan, Senior Industry Analyst at AVIXA. "Pro AV is central to this trend, supported by advances in display technologies, AI integration, and control systems that make experiences more accessible and engaging."

 

A new era of broadcasting

The AV market has become one of the most dynamic growth areas for companies traditionally operating in the broadcast and cine sectors. What was once a relatively separate domain is now rapidly merging with the broader world of live events, corporate communication, education, and hybrid production.

“People often say that television or broadcasting is dead. It absolutely isn’t,” said Ciarán Doran, curator and chair of the Broadcast AV Summit. “We’re entering a new era of broadcasting — one where brands and corporates are becoming broadcasters themselves, creating their own channels.”

The Broadcast AV Summit brought together the streaming economy, “where brands connect directly with end users through their own channel or deliver content straight into their inboxes”, with the Creator Economy headlined by YouTuber Callum Hewitt.

“The Experience Economy is where much of the technology you see here at ISE becomes essential,” Doran said. “The era when traditional broadcasters were the sole gatekeepers of high‑quality content is over. Think of the Harry Potter experiences or the immersive exhibitions at the Science Museum in Paris — these emotional, sensory environments are no longer created solely by the broadcast world. They’re built with professional AV technology, and they represent the new frontier of broadcasting.”

The big beasts of broadcast manufacturing have now made ISE home, with many, like Blackmagic Design, Sony, Grass Valley and AJA, expanding their real estate.

David Ross, who runs hardware vendor Ross Video, keynoted the Broadcast AV Summit and highlighted opportunities for his business in in-house studios, governments, and 'architainment'.

“This is where architecture and media come together through the innovative use of LED and projection,” Ross said. “It might be a building façade, an atrium or a feature on a wall in the lobby, but the intent is the same - to make the space itself part of how the organisation communicates.”

“AV end-users increasingly expect broadcast-level image quality with consumer-level usability,” said Guilhem Krier, head of new business and market development for Broadcast & ProAV at Panasonic Connect. “They want cinematic colour, stable IP streaming, remote control, automated operation, and scalable systems, all without needing highly trained operators on site.”

He considers the convergence of AV and broadcast to be a fundamental shift that will define the next decade of video production.

“This will result in IP-native production becoming the new standard; AI-assisted automation playing an increasingly prominent role in camera switching, mixing, and real-time analytics; virtual and hybrid production becoming more accessible; and increased interoperability.”

AI with everything

It was impossible to discuss the future of anything at ISE without mentioning AI, but the overwhelming theme was how AI can be integrated into existing systems and practices.

For example, AI might improve integrator efficiency, productivity, and the ability to deliver managed services.

“For decades, the industry has sought to shift from transactional, multi‑year upgrade cycles toward recurring revenue models,” Wargo said. “AI‑powered design tools, automated drawings, and intelligent product recommendations will accelerate this shift.”

Vibe coding already allows developers to delegate writing source code to a GenAI. Krish Shah, product head and founder of voice-automation start-up Phonx AI, said he only learned to code because of it. "There's no way I could have learned everything I know in the last two or three years without AI. Today I'm running a team of seven developers and managing interns. That would have been impossible without AI."

The technology is now advancing to agentic AI, where a large language model is given real‑world agency—allowing it to operate a web browser, perform transactions, and take actions on your behalf. Watch out though, because it will soon have the smarts to do this behind your back.

“We now have AI agents that can interact with each other through A2A (agent‑to‑agent) protocols and MCP, which function like APIs. This opens the door to multi‑agent back‑end systems inside companies,” said Rich Green, Founder, Rich Green Design.

One agentic AI tool called OpenClaw went viral on launch last month because it puts the ability to network a smart home in the hands of anybody. This is a security risk because it could link systems from phones and TVs to doorbells without any human intervention, Green said. He also said agentic software was so potent it threatened to disintermediate the entire professional development community.

Here come the robots

After agentic AI comes physical AI or embodied intelligence: when an AI agent is put into a robot.  “There’s a huge amount of activity in this space right now, especially around building world models so these physical AIs can understand and operate within real environments,” Green said.

It now feels like a race to get robots into our homes.

“We’re already seeing this in AV spaces: automated camera robots, traditional industrial robots, and pet robot companions,” said Rebekka Gingell of Lang. “As we introduce more and more robots into our everyday lives I believe it’s very important that we have laws about how robots are governed, especially as these become increasingly autonomous with AI.”

She added, “A cute robot pet doesn’t feel threatening but a humanoid robot has cameras for eyes. Inviting that into your home feels like inviting full‑time surveillance.”

Resilience and sovereignty are essential

The impact of tariffs and geopolitical uncertainty is upending the supply chain. Dependence on platforms and services that originate outside Europe's legal framework or political system creates significant risk. Sovereignty is becoming a guiding principle in procurement decisions, covering AI models, cloud services, and traditional software.

“For years, we’ve talked about ‘Made in China’ — whether LED panels or AI systems — and European companies and governments have been cautious,” said Florian Rotberg, MD, invidis Consulting. “But for the first time, we’re seeing ‘Made in the USA’ being blocked as well. That’s entirely new. Half a year ago, who would have imagined U.S.-made products being restricted because U.S. authorities or intelligence services were uncomfortable with their use?

“This is the new reality we’re operating in. The strategy is no longer just de‑risking from China — it’s also de‑risking from the U.S. Resilience and sovereignty are becoming essential.”

The EU is now recommending that corporations and governments adopt a two‑supplier strategy to ensure independence if something goes wrong. This matters for the entire industry since 80% of all cloud solutions used in Europe come from U.S. companies.

“That level of dependency is concerning,” said Rotberg. “Customers are now asking questions they never asked before: Where is the CMS developed? Is the software European, American, Middle Eastern? What happens if a service is shut down?”

invidis Consulting has recently been flooded with inquiries from major European organisations, including Olympic committees, asking whether alternatives exist to their current U.S.-based hyperscalers.

“The key message for the Pro AV industry is that everyone needs to recalibrate. De‑risking doesn’t mean changing everything, but it does mean building a more balanced ecosystem with at least two suppliers and understanding where the real risks lie.”


Into the Wild, “Nightmares of Nature”

My interview and write-up for RED Digital Cinema

article here

Just like the best horror films, wildlife docuseries Nightmares of Nature plunges viewers into a terrifying world where nothing and no-one is safe.

The opening of episode one establishes the mood. “Nature is full of wonder and beauty,” says narrator Maya Hawke (Stranger Things) over shots of delicate butterflies, an innocent looking frog and a cute mouse. “But for the creatures who live out in the wild, it’s also full of monsters.”

Cue a slithering snake and a montage of bugs being eaten alive.

The groundbreaking Netflix docu-horror hybrid is produced by Jason Blum, the master of modern horror (Get Out, Insidious, The Black Phone, M3GAN), and award-winning documentary specialist Plimsoll Productions (A Real Bug's Life for National Geographic, Incredible Animal Journeys for Disney+, Big Beasts for Apple TV+).

“The stumbling block we came up against was how to make a horror natural history look like it isn’t just a regular natural history show that’s been graded a bit spookily and has loud sound design,” explains filmmaker Nathan Small of the project’s conception. “When Blumhouse Television got involved, they pushed it towards much more of a narrative-led cinematic thriller following characters as they go on a journey.”

Netflix loved the idea and commissioned two seasons of three episodes: Cabin in the Woods and Lost in the Jungle follow the fight for survival of a pregnant mouse and other heroes including a baby opossum, a raccoon, and an iguana, plus a jumping spider barely the size of a dime. The aim was to capture the authentic behaviors of these elusive, speedy and tiny creatures while delivering a polished and visually stunning horror-infused spectacle.

“Our characters were chosen because they exhibited interesting adaptations and cool behaviors that would work well with the storylines,” says Charlotte Lathane who directed Lost in the Jungle. “We spoke to scientists and experts to back the research up and engineered scenarios in which our hero creatures can use that adaptation or superpower to get out of a life-threatening situation. The scenarios may be conjured up, but all the behavior is real.”

Exteriors, mostly shot in Costa Rica, were combined with interior sets dressed as a creepy abandoned wood cabin and an abandoned laboratory, all lit to mimic the look of classic Hammer haunted house or slasher films.

“We loved that we could lean into horror tropes to capture the drama, danger and dark beauty of nature,” says Small, who directed Cabin in the Woods. “We watched a number of Blumhouse horror films so that we could work that cinematic language into the style and cut a lot of test sequences using our natural-history archive.”

The very specific and challenging lighting design required a set of tools capable of photographing extreme close-ups as well as shadow detail and bright highlights.

“Lighting is so important in horror. Being able to create pools of light and pools of darkness and have things appear out of the darkness is so crucial,” says Small. “Being able to operate in low light with confidence was vital. In this case the Dual ISO (800 and 3200) of RED GEMINI was super helpful for us.”

“When it comes to filming authentic animal behavior you don’t want to have to throw loads of light in your animal’s faces,” says Lathane. “The dual sensitivity of the camera meant we’re not blasting loads of light at our creatures to be able to see them and we can shoot with a low light cinematic feel. It means we could be a lot subtler with the lighting.”

RED cameras are workhorses for natural history, and the team was already familiar with them. "We've all been using them for years; the cameras are super reliable," adds Small.

Cabin in the Woods was shot by DOP Chris Watts using custom-made scope lenses equipped with high-end front objective lenses to offer a range of focal lengths - essentially a full macro prime set.

“At that scale, the depth of field is paper-thin, and a knock or a wobble looks like an earthquake on screen, so to maintain that control, we used a custom motion-control rig,” explains Small. “It enabled us to execute perfectly timed push-ins, creeping pans and slow, deliberate tracking shots at tiny scales — cinematic language, just shrunk down to the size of a cockroach.”

For everything beyond those ultra-close moments, they shot with rehoused Contax Primes and high-quality diopters. “The Contax glass gave us that cinematic fall-off and organic softness while the diopters let us push right up against our subjects without losing depth or quality. Together, they gave the woods texture and mood — beautiful, tactile and tiniest bit unsettling, with pin-sharp clarity and distinctive character.”

Lost in the Jungle was shot by DOP Robert Hollingsworth principally with Mamiya primes, with the majority of the spider work shot by award-winning cameraman Simon de Glanville.

“We chose to pair the GEMINI with Mamiya primes to match the scale of the lens to the scale of our subject and take advantage of the extended depth of field that those lenses allow for,” Lathane says. “The aim being to immerse the audience into the scale of our characters.”

“GEMINI is good in low light and allowed us access to an excellent bit rate and higher frame rates that are required for natural history storytelling. Horror is as light as it is dark, so having a camera that can see into the shadows enabled Rob to light with a higher contrast ratio to create the negative space in the frame. That would allow the audience a sense of uneasiness.”

The camera was gripped using a variety of equipment such as jibs, sliders, and a bespoke macro motion control gantry. The Mamiyas were paired with extension tubes and diopters for some of the trickier shots, and they even found a use for Vaseline on lenses to give a dreamlike feel for certain scenes.

“Horror is nothing without lighting, and the compact shooting package with the small grip fit perfectly in the practical world that we were shooting in, which was a disused building in Atlanta. This enabled us to light for the space as opposed to the character and get practical fittings in close by. Despite the grip used, the camera often remained stationary, allowing our character to drive the narrative.”

A set of vintage Nikon lenses and Laowa 24mm T8 2x Macro Probes were used across both series, while Aputure LEDs were the principal light source, used to control a number of hues instantly – a luxury that filmmakers don't enjoy in the wild.

“Clearly we don't want animals to be hurting each other so we've had to be a little creative to achieve some shots,” Small says. “Where we've got predator-prey scenarios we've had to figure out ways to get these two shots without ever actually putting them in the same space and harming them.”

For example, a shot in which a mouse scurries away from camera before the focus pulls to a spider wrapping its prey was a composite of two shots. Similarly, an opossum and a snake appear to be present in the same frame when in fact they are two shots seamlessly married together.

“We've used a lot of motion control so we don't even have to have locked-off shots. We can have the camera moving, capture the first plate that we need of one of the animals, then reset everything. Provided no one touches or moves anything – which is difficult in a cabin because everything's creaky and wobbly – we can get the second animal in and repeat the move exactly.”

De Glanville also used motion control when filming the jumping spider (who was only around 10mm in length) - most notably to corkscrew the camera down a ventilation shaft in a chase sequence.

Other techniques included shooting at a faster frame rate so the action could be slowed down on screen, making it appear more suspenseful to human eyes.

“A lot of what happens when you watch it happening for real can feel quite underwhelming, but when you get it into post and then add the sound design you begin to understand that it's going to work,” Lathane says. “RED just gives you that confidence that what you're looking at on the monitor will hold when we get through post and into the grade. We can trust that the blacks are going to hold up and the shadows are going to look good when it's graded down.”

The natural world may be dog eat dog, but editorially it was important to show the horrific impact of humans on the environment. Halfway through episode one of Jungle the baby opossum's family come to a grisly end under the wheels of a vehicle. “The scene was staged to look that way, but the number one cause of death for opossums in the wild is being roadkill, so we're not shying away from these facts,” Lathane says. “Having the licence to go for it in this dramatic way was kind of an homage to Final Destination.”

She continues, “Often in natural history you are filming the attack of one animal preying on another on a long lens and we often cut away before the critical moment. On this show we need to kill off characters because otherwise the audience will think ‘it's fine, everyone's going to get out alive’. Once in a while, we felt it was okay to kill off a character just to let the audience know not to necessarily expect a happy ending.”

The grade was completed at Films@59 in Bristol by colorists Christian Short and Wes Hibberd. “We’re extremely happy with the look that they've given both of the shows,” Small says. “I think they’re both unique and unlike traditional natural history but also not too crazy into horror. It feels like a rich and classy version of a natural history show.”

With the formula established and with so much ‘horror’ out there in the natural world it feels like the team could be back for more.

“There are endless ecosystems that we could turn our attention to, and we have lots of ideas,” says Small. “We're all really keen to do more and we’ve lots of learnings from the first ones that we’d take into the next time."

Thursday, 5 February 2026

BTS Marty Supreme

IBC

The creative team behind Uncut Gems translates its brash beauty and adrenaline rush to 1950s New York City in this screwball drama.

article here

The kinetic narrative of Marty Supreme may be driven by the intoxicating charm of its title character but it’s the pantheon of indelible supporting characters which brings the film to life.

“There are more than a hundred featured characters in the film — every day on set different actors arrived with these unforgettable faces,” says cinematographer Darius Khondji (Delicatessen, Seven). “The faces look like something out of a Honoré Daumier painting — [and] were incredible to photograph.”

Loosely based on the autobiography of flamboyant table tennis hustler Marty Reisman, director and co-writer Josh Safdie sets his tale amid the teeming working-class life of 1950s Lower East Side Manhattan. Timothée Chalamet stars as the bold, fast-talking dreamer, hellbent on turning an overlooked sport into a personal springboard to glory.

Khondji, reuniting with Safdie after collaborating on Uncut Gems, shot Marty Supreme on 35mm film (specifically Kodak VISION3 500T 5219) using Arricam-LT cameras and vintage Panavision C Series anamorphic lenses.

“The old glass of the anamorphic format appears to make the actor bigger,” says the two-time Academy Award nominee. “It has the strength of black and white film. You can tell a very intimate story with anamorphic.”

They strived for the same “brash beauty” of Uncut Gems which Safdie urged Khondji to revisit “as if discovering it in 1952.”

“I photograph faces all the time, but this changed my way of thinking about film and digital," says the DP. “When I push the negative slightly, it gives a special texture to the image that I cannot get from digital.”

For reference, Khondji checked out the work of 1950s street photographer Helen Levitt and colour pioneer Ernst Haas, as well as American experimental filmmaker Ken Jacobs – specifically a 1955 colour short called Orchard Street, which Jacobs shot guerrilla-style in the area, documenting the daily life of the mainly Jewish immigrant community on the Lower East Side. From 19th-century French artist Daumier and turn-of-the-20th-century American realist painter George Bellows he took the idea of lighting portraits with a warm light from below.

A scene in which Marty talks with fading movie star Kay Stone (Gwyneth Paltrow) on the phone was shot in real time with both actors in different rooms on adjacent sets. Khondji had to light both rooms and shoot with two cameras.

“I love the blurred line between documentary and fiction,” he says.

They only had one day to shoot in the bowling alley, so Safdie requested the space be lit in 360 degrees so he could move the camera around quickly.

For the film’s table-tennis matches, Khondji used three cameras fitted with wide-angle lenses to capture the game’s dizzying back-and-forth. “Sometimes we were directly in the line of fire, with two cameras shooting at each other, one hidden between two actors,” explains Khondji. “It felt like a documentary style of filmmaking, photographing what was playing out in front of us in the limited time we had to shoot it.”

He credits camera operators Colin Anderson (who also worked on One Battle After Another) and Brian Osmond, gaffer Ian Kincaid, and colourist Yvan Lucas.

“I’ve worked with a lot of directors and I was surprised by how much Josh had laid out the scenes in his head before we filmed,” says Khondji. “Every director has their own way of doing things, but Josh has an obsessive, intuitive way of making movies. Stylistically speaking, he knows you usually don’t capture wide-angle shots using long lenses — but the rules don’t matter to him.”

A face tells a thousand stories

Khondji also credits casting director Jennifer Venditti with finding the extraordinary number and variety of people – professional actors and first timers alike – to inhabit the film’s world.

Using a process that began with Heaven Knows What (Venditti’s first collaboration with Safdie) and further developed on Good Time and Uncut Gems, she scouted streets for hundreds of unforgettable faces.

“There's no pretence when someone is owning who they are, and that's what I'm always looking for when I'm casting unknowns,” says Venditti. “It's all lived experience — sometimes it's rough, other times it can be gorgeous to watch. We are taking non-professionals and putting them into this fictitious world. Their signature authenticity is the alchemy.”

Venditti employed five street scouts and two casting associates to help her scour New York City, looking for faces on Coney Island, in city parks, at farmers markets and street fairs, and in table tennis clubs. For one scene set in a New Jersey bowling alley, Venditti cast young men she scouted at a memorabilia convention of sports trading-card aficionados. For scenes set overseas, including a gathering of journalists in London, she scouted faces at Tea & Sympathy, a West Village hangout popular with British expats.

Existential medieval duel

The intensity of the script comes from the writing process between Safdie and Ronald Bronstein. They first teamed up in 2009 on Daddy Longlegs, a film they also co-wrote and co-edited and which Bronstein acted in, winning an award for his performance. Their screenwriting and co-editing continues in Marty Supreme on which Bronstein also shares a producer credit.

“We're brutal on one another,” Bronstein says. “It might just come from a pathological fear of boring people and that in itself can turn into panic. Every single idea has to be torn apart and rebuilt through the other person's brain.

“The ideas themselves are so personal - everything gets highly abstract by the time it reaches the screen - but every exchange is coming from some lived in experience. So we're sharing very intimate things with each other. The process is invasive and we're not nice in the sense of not being sensitive to the other's experience. One person throws an idea out and then immediately the other person is tying it to a chair and beating the shit out of it, trying to get it to confess its weaknesses.”

He says, “We once had a day long argument about what happened to a character when he was eight years old. The ad hominem attacks, which you then see in the movie, like some existential medieval duel, [well] that’s what’s happening between us in our process.”

This combat extends into post-production where “all reverence for the script disappears, to the point of self-abnegation,” says Bronstein. He describes their approach to the edit as “archaeologists uncovering a massive cache of raw footage,” adding, “Our job is to first stamp intentionality onto it — to shape it into something that feels new to us.”

Having done this for so many years over many projects he says he’s passed the point of worrying that their friendship will be affected.

Building the world on sets and streets

Unlike the contemporary setting of Uncut Gems, where Safdie shot on the streets of New York without needing to worry if he captured passers-by in shot, every inch of the post-Second World War environment had to be plotted, from costume to colour palette.

They wanted to depict the state of table tennis at the time as a subculture full of schemers, geniuses, and outcasts played in smoky backrooms, penthouse parties, YMCAs, Ivy League dorms, and downtown tenements.

Oscar-winning sound designer Skip Lievsay (Gravity) approached the highly complex ebb and flow of dialogue on the soundtrack by working between score and needle drops (from Fats Domino to Tears for Fears) and the many scenes featuring crowds, audiences or events. Then he'd vary the volume of the soundtrack and the density of other elements like dialogue and sound effects to, in his words, “amp up every situation to get the juices flowing, like caffeine.”

Three-time Oscar-winning production designer Jack Fisk (There Will Be Blood, The Revenant, Killers of the Flower Moon) resuscitated the period look of Marty's Lower East Side neighbourhood through set-dressing facades on existing NYC streets.

“There’s a haunting presence on the Lower East Side that wouldn’t have the same impact if you recreated it on stage,” says Fisk.

The production made use of three city blocks on Orchard St to create Marty's world, from the cramped tenement that he shares with his mother, to his uncle's shoe store, to the pet store where Rachel works, and the surrounding streets and alleys where Marty races to evade the police.

“These buildings were designed in the 1800s and we were bringing them back to the 1950s era through their facades and interiors,” says Fisk. “You can still discover the old spirit of the neighbourhood and its vibrant street life.”

For the wealthiest quarter of Manhattan during the '50s, the Upper East Side and Fifth Avenue, Fisk scouted and decorated a Manhattan building commissioned by Frank Woolworth (founder of the high street store brand).

His biggest challenge was designing the film’s sprawling table tennis sequences, spanning England, Japan, France, Sarajevo, and Egypt. For the British Open, the production took over Meadowlands Arena in New Jersey, installing 30,000 square feet of wooden flooring to host dozens of players and thousands of spectators.

 

Monday, 2 February 2026

Winter Wonderland: All the tech at Olympics Milano Cortina

IBC

First-Person-View drones, expanded real-time 360-degree replays, stroboscopic replays, dynamic graphics, cinematic cameras and a massive virtualized production setup make Milano Cortina 2026 a major step forward in immersive, scalable and sustainable Olympic broadcasting.
article here
“We are not tech narcissists,” insists Yiannis Exarchos, CEO, Olympic Broadcasting Services. “We have a team which is fully immersed in technology but we always need to remember that this is about telling the stories of the most important athletes in the world and the values and the emotions that are being generated by them.”
Nonetheless, OBS, which has responsibility for delivering official coverage across successive Games, continues to unleash an arsenal of technologies to meet the expectations of its media rights holders (MRHs) and to reach as many demographics as possible with a huge variety of content types and formats.
At the Winter Games in Italy next month this includes an expansion in the use of IP-related systems to produce and distribute 6,500 hours of coverage. 5,600 hours of that is non-competition coverage and includes VR and vertical video for mobile phones as well as behind-the-scenes material. AI is helping to automate highlights creation so rights holders can tailor content, and a bespoke new language model, 'Olympic GPT', is being debuted for anyone to quiz Olympic content and search results on the IOC website.
Distributed venues
The biggest challenge for Milano Cortina is the distributed nature of the main venues, which is arguably unprecedented. Milan is the base for less than half the athletes, with others based in the mountains to the north at Cortina, Livigno and Tesero. Connectivity is notoriously patchy in the Dolomites, and one of OBS' most important jobs, partnering with Telecom Italia, was to secure capacity. Even then, physical transport between sites is not practical, meaning many broadcasters and stakeholders like FIS, the international ski federation, have to double up their presence.
It also means the opening ceremony, featuring performances from Mariah Carey and Andrea Bocelli, is distributed geographically. The absence of 'clusterisation' – putting many venues together – "does create significant operational challenges and also additional costs," Exarchos admitted. "And in Milano Cortina people cannot easily move from one site to another. Despite having teams of athletes in all four locations we want to have a sense of unity, especially in the parade of nations. We want them to feel that they are parading together at the same time. We did a very detailed rehearsal a couple of months ago that went very well and I believe that because of the size of these places, their incredible beauty and the tradition that exists locally, the atmosphere is going to be fantastic."

Thousands of hours of coverage
OBS will provide 6,500+ hours to MRHs, of which 900+ hours are dedicated to live action and 5,600+ hours to additional content. With Samsung, OBS will broadcast to mobile using a feed filmed on mobile phones.
Unlike other major sports events like the UEFA Champions League, the Olympics continues to capture in 4K UHD HDR, "reflecting the industry's growing adoption of higher-resolution and high dynamic range workflows," it says. Down-scalers will convert content to 1080p/50 HD, ensuring compatibility with HD broadcasters while supporting the transition to UHD HDR.
8K production will be implemented for the Opening and Closing Ceremonies, as well as for figure skating and short track speed skating, in partnership with CMG.  In addition, OBS has deployed an 8K theatre at the IBC, enabling broadcasters and partners to experience content at the highest resolution.
Key components of the virtual production
OBS production for Milano Cortina is significantly more virtualised and remote than at any previous Olympics, representing a clear evolution in the delivery of live sports coverage.
“MC26 marks a turning point in Olympic broadcasting, cloud integration and remote production,” says Isidoro Moreno, OBS Head of Engineering. “At these Games, software-defined broadcasting (SDB) is driving the shift towards the virtualisation of core OBS operations. This approach creates a reimagined broadcast environment where cloud technology, remote workflows, and AI-powered tools work together, transforming how content is captured, managed, and delivered.”
Central to this transformation is the shift away from traditional, hardware-heavy OB vans towards a virtualised OB van (VOB) model built on a private, commercial off-the-shelf (COTS) cloud infrastructure. Following a successful proof of concept at curling in Beijing 2022, this model is now fully deployed at three venues (curling stadium, sliding centre, speed skating stadium) for the Games. The VOBs have led to a reduction in compound space by more than 50%, cutting power consumption by up to 50% in some cases, and enabling full remote production with the same operational capabilities as conventional OB vans.
OBS is also introducing fully virtual Technical Operations Centres (TOCs) for the first time at an Olympic Games, replacing on-site technical rooms with remote, dashboard-based operations for signal transmission to the International Broadcast Centre (IBC). The TOCs will be deployed at four venues (both ice hockey venues, and the venues for figure skating/short track speed skating and speed skating).
In parallel, OBS is piloting a fully cloud-based Master Control Room, enabling remote feed switching and management while minimising physical infrastructure and on-site staffing.
The IBC in Milan is a quarter smaller than in Beijing 2022, with a 33% reduction in power. Building on this experience, the model planned for the Dakar 2026 Summer Youth Games is expected to require 75% less rack space, consume 65% less power at OBS HQ, and deliver a 50% faster IBC rollout, with systems operational two months before the Games.
Hundreds of specialist cameras
OBS will deploy more than 810 camera systems across the Games, including a wide range of specialty systems designed to enhance immersion, storytelling and analysis.
POV capture is delivered through multiple solutions. In ski and snowboard cross, goggle cameras are deployed rather than helmet cameras to provide a more natural athlete perspective. Additionally, in partnership with Worldwide TOP Partner Alibaba, OBS produces short-term, on demand VR 360-degree videos from athletes’ perspectives, optimised for social media platforms.
Across outdoor sports, OBS will operate 24 drones, nearly double the number used at Beijing, including 15 First-Person-View drones and nine traditional drones. Making their Olympic Winter Games debut, FPV drones are small and agile, able to follow athletes along the field of play and deliver a thrilling first-person perspective that highlights speed and skill.
OBS will deploy 32 cinematic cameras, first introduced at Paris 2024, to capture key storytelling moments: from athletes arriving and preparing to compete to celebrating victories and behind-the-scenes interactions with crowds and teammates. With a shallow depth of field, these cameras draw focus to human emotion, capturing moments of intensity or joy. Enhanced colour, texture, and detail amplify the emotional impact. New for Milano Cortina 2026, the cameras will also support on-screen graphic overlays such as athlete names.
Replay innovation continues to advance rapidly. A total of 17 AI-powered real-time 360° replay systems, developed in collaboration with Alibaba, will be deployed across 17 sports and disciplines, up from 10 at Beijing 2022, enabling multi-angle, frozen-frame and slow-motion analysis. In parallel, 12 stroboscopic replay systems, developed with Alibaba and Omega, will be deployed across 15 sports and disciplines, compared with only one system utilised at Beijing. Using AI, these systems highlight key body positions and movement trajectories in a single, easy-to-follow visual sequence.
Innovative perspectives
In addition, several other sports will feature innovative camera angles:
Curling: An overhead rail camera spans the full length of the sheet, capturing the movement and speed of play. Combined with cameras close to the ice, this setup delivers dynamic angles, immersive replays, and visuals that highlight athlete emotion and the intensity of the competition.
Figure Skating: An on-ice camera operator will capture cinematic close-ups before and after competition routines, as well as full skating performances on the ice during the gala.
Biathlon: AI-driven camera systems allow MRHs to spotlight national athletes, offering personalised, real-time coverage of shooting lanes with live data and split-screen options.
AI-augmented and automated workflows
OBS uses AI to produce highlights at scale, quickly and efficiently, for both OBS and MRHs. A proof of concept was completed at Gangwon 2024, and the system was launched at Paris 2024 with 14 sports and disciplines, generating more than 100,000 highlights – a volume impossible to produce manually. At Milano Cortina 2026, the system will be available for all sports. It is fully customisable, allowing highlights to be tailored by length, athletes, sport, nationality, competition parameters, archived content, and even reframing horizontal broadcasts into vertical formats for social platforms.
OBS is testing an Automatic Media Description platform to manage the vast volume of live video. AI breaks broadcasts into searchable clips, suggests shot descriptions and keywords, and helps teams quickly find key moments and highlights, making storytelling faster, more efficient, and easier to scale.
Olympic athletes also have access to AI-driven highlights from their own competitions for dissemination on their own socials, which Exarchos calls "a major breakthrough."

Expanded 5G contribution
For the Opening Ceremony, a private 5G network will be deployed, enabling more than 20 Samsung mobile devices to capture backstage and on-field activity using advanced 5G transmission. Dedicated mobile feeds and vertical video outputs will allow MRHs to share the energy and atmosphere of the event in real time, offering dynamic, social-first coverage.
On-screen graphics advances
Aside from the stroboscopic replay and 360° replay systems, key graphic techniques include:
  • Live course and position tracking: Tools such as Course Tracker, Position on Course, and Pinning visually follow athletes in real time, showing location, ranking, speed, and progress directly on the course or within the live image.
  • Cross Country and Nordic Combined course mapping: Viewers can track up to three athletes or groups simultaneously on a dynamic course map, including live gaps and relative positions over long distances.
  • Comparison and performance-to-leader graphics: ‘Virtual Line to Beat’, ‘Live Speed / Live Delta’, and ‘Comparison to Leader’ provide instant context on how competitors are performing relative to the fastest athlete or group.
  • Terrain and effort visualisation: Incline and gradient analysis show elevation profiles and terrain difficulty in relation to athlete position.
  • Course animations: Used across multiple sports to explain course layout, key technical sections, and race flow, helping viewers understand competition before and during events.
  • Team Radio graphics: New to alpine skiing coverage, these visuals highlight real-time communication between athletes and coaches, giving insight into strategic and emotional moments immediately before a run.
AI-powered stone tracking in curling
Exarchos says, “We have been trying for many years to find a way to help viewers understand what's going on in curling because most have the perception that it's a simple throw of a stone that glides along the ice. Curling is an incredibly technical sport and unless you really understand the technical difficulty you can’t fully appreciate the effort and the capabilities of these athletes.

“Now, with the use of AI technology, this is possible. In parallel with live video of the competition itself you see the rotation of the stone, not just its location. You also understand the frequency of the sweeping that team members do and why.”
Social media content, formats and delivery
At every venue, OBS has dedicated digital producers working alongside venue teams to identify and capture moments for digital and social platforms. Their role goes beyond content curation. They guide live production teams to adopt a digital-first mindset, encouraging camera operators and directors to capture crowd reactions, behind-the-scenes rituals, and other moments that add depth to the venue’s visual narrative.
Platform-native social media creators also capture human-interest stories and behind-the-scenes footage using mobile devices. Curated content is uploaded to OBS’ Content+ platform, making it instantly available for MRHs to enrich their coverage.
In addition, OBS offers MRHs influencer positions within venues, allowing broadcasters to position their own creators near key action areas, such as athlete arrival zones, warm-up spaces, and podiums, to capture authentic, social-first content using mobile or 360° cameras. This approach helps broadcasters engage younger, mobile-first audiences with personality-driven storytelling and unique perspectives that resonate on social media platforms.
Future planning
OBS is already planning for the IBC at LA28 to be half the size of what the IBC was in Rio whilst producing more than two times the amount of content.
Its partnership with Chinese cloud solutions provider Alibaba, which began in 2018 ahead of the pandemic-delayed Tokyo Games, is not only the basis of its ability to virtualise these operations but is also significant because the IOC/OBS rely on such third-party tech partners to help fund innovation.
“When Alibaba came into the Games, they were Cloud sponsors and we were thinking of our administration systems and stuff like that,” says Exarchos. “They approached us to ask, ‘How about broadcast?’ We started talking and we both realized that there's something there. They have been great partners. We developed Virtual OB van capability with Intel (no longer a partner) and we're looking into such opportunities with Omega.”
For Brisbane 2032 the target is an IBC essentially the size of a Winter Olympics facility rather than a Summer Games one, "and probably be able to do much more. We do not know how to do this yet but we try to be agile."
Such long-term planning and efficiency savings would not be possible, he argued, if individual host broadcasters were still in charge of each Games, rather than the internal IOC broadcasting division.

Friday, 30 January 2026

Horror Film Good Boy: Ben Leonberg on His Directorial Debut

postPerspective

Good Boy is a haunted house horror story told from the perspective of a dog. The indie film is the feature debut of Ben Leonberg, who co-wrote the script with Alex Cannon. He also directed, photographed and edited the 72-minute film over a three-year period, with help from his wife Kari Fischer (also the film's producer) and with their own dog, Indy, as the star.

Positive word of mouth at its SXSW premiere has continued since its release into cinemas by IFC, turning the microbudget indie into a viral hit. In fact, Good Boy has gotten some award love recently, including being named a Top 10 Independent Film of 2025 by the National Board of Review and a nomination in the Best Editing category at the 2026 Independent Spirit Awards.

Leonberg and Fischer adapted their own home in a rural part of New York state into a creepy haunted house set for Indy, a red-haired Nova Scotia Duck Tolling Retriever, to play in.

As Leonberg explains, the dog had no idea it was making a movie, nor did they teach Indy any new tricks or commands. “He has no understanding of marks or cues, and he spent most of the shoot napping. Yet his on-screen presence is so magnetic that I put the whole movie on his oblivious little shoulders.”

Good Boy might appear to have come out of nowhere, but you have a solid background as a filmmaker. Can you explain?
Like a lot of people my age, I grew up making movies on VHS tapes and MiniDV. I didn’t have a formal film school education, so I was kind of self-taught, especially on the technical side. I learned how to make movies with a group of friends, shooting sketches for improv or making little commercials for the businesses in the town where I went to college. When I got into the real world, my first gig was in advertising for athletic apparel at Adidas and Reebok.

I started out as a one-man band filming smaller assets, such as a football player throwing a ball around with high school kids. This was during the DSLR revolution of 2009-2010, and I was one of the first people at Reebok and Adidas who knew how to use those cameras. My experience and crews grew, and although I never made a Super Bowl commercial, I did make one for the Stanley Cup.

I decided to go to film school at Columbia University for my master’s because I had never really taken a screenwriting class or a real directing class. I returned to my commercials work at a different level and with a new focus, and I began developing Good Boy on the side until we felt it was ready.

What was the light bulb moment that made you want to dedicate the best part of four years working on this story?
It came to me after watching Poltergeist, probably for the millionth time. If you remember, it begins with a golden retriever wandering through the house, aware of the haunting before the humans catch on. I thought somebody should tell a story entirely from that kind of character’s point of view: The dog who knows better.

There’s something so creepy about that in a horror movie, where you can’t help but imagine the worst. Even though it’s a traditional haunted house story, because we’re seeing it from Indy’s POV, it’s almost like we’re seeing a side of the story we haven’t seen before. As someone who loves dogs and grew up with them, I felt like that was a movie I would want to see.

I already had a technical background, but after my MA, I finally understood that story is the most important part. Everything flows from story. I became interested in how making every shot either of the dog or from his point of view could unfold a narrative in a new way.

The problem, of course, is that you can’t say to a dog, “Just look a little bit over here” or “Stop on this mark” the way you can to an actor.

I started making test films with Indy to figure out how to do even the basics, like shot/reverse shot for an actor who doesn’t know he’s in a movie. What sustained me was that I believed in the idea. Plus, I like a challenge.

To what extent did you storyboard the film?
Like most scripts, we worked on Good Boy for a long time before starting to film. The conceptual challenge was trying to stick to the rules of a canine protagonist. He’s not going to be able to speak. He’s limited to doing what a dog can actually do, so it was about using those limitations as an asset. The discipline meant telling the story from the point of view of what Indy sees, smells or hears.

Storyboards were super-important, and I created them on an iPad. I’m not a very good illustrator. They were stick figures, but the most important thing I got from doing it was the idea of how to use shot size, what angle the camera should be in relation to the line of action, and lens choice. Plus, Indy has a very neutral but intense expression, so can I use that to tell the story?

How did you solve the challenge of getting repeatable takes with Indy?
I would spend the day setting up the shot, doing everything from rearranging the props to doing the electrics. Sometimes, since this is an old house, I literally had to create outlets in places where none existed before. In the time I had left, I’d look at the previous day’s footage.

As unusual as the film is,  we applied the fundamentals of filmmaking quite practically. We would approach a scene logically. You’d start with the widest coverage, then work your way in to a close-up. That’s conventional to shooting, lighting and managing props, but with Indy, it was also an opportunity to set his blocking as we moved in.

Let’s say there’s a scene where Indy walks into a new space. I’d have a wide-angle shot of the room, then he would walk in and freeze because he hears a strange noise. We might shoot this 40 times, from which there might be eight usable takes. In each of those eight takes, he is hitting very different marks, so I have to pick one and then adapt the rest of the shots with lighting design, props and so on to match.

Every day, it was like making a bespoke custom setup that was in relation to what we had done either the day before or, in some cases, weeks or years before. In addition to that unusual way of making the film, I would often roll the camera and then run around to get into the shot with Indy because I was also training him and standing in as the body of the human actor. That was another level of complexity added on top.

At what point did you decide that Red was the right camera for this film?
When I got into commercials, I had used the Red One for years and had known it well. It was a camera I had worked on in the equipment room in my grad program at Columbia, so while I had experience with a lot of different cameras, I completely knew the Red ecosystem and workflow.

I’d filmed tests with Indy on a Red One, and one of the things I realized was that it was going to be extremely beneficial to shoot at a higher resolution than our ultimate delivery. To get the best framing for Indy, I would want to have the ability to crop and reframe in post.

As mentioned, he can’t hit exact marks. I was almost approaching every setup a little bit wider and a little bit further back through the lens or the camera placement so I could then reframe to account for Indy’s variability. That’s when the Red Dragon came into the equation. We started out with a Red Dragon-X 5K and then upgraded the firmware so it could shoot 6K, which was perfect for us. I was already comfortable using the camera, and the extra resolution enabled us to reframe in post. That was one of the most important reasons to shoot on this camera.

What was your lens choice?
The Red has a very color-accurate, clinical representation of the world as a baseline, which you can push against using older glass. I wanted to marry the bold color you can get from the high-dynamic-range of the camera with a more textured, handmade look through vintage lenses. I tried several different versions, including Leica and Canon FD, but I really like the Nikon AI lenses, both for how they looked and their focal range.

The hero lens of the film is a 15mm specialty wide-angle lens. It’s got a lot of quirks. It can’t focus to infinity until you get to f8 or above, but because it’s a real wide-angle lens that isn’t full of fisheye distortion, it’s perfect for a canine face. Normally, if you were to use that kind of a lens for a close-up on a person, it would not be super-flattering, but for Indy, it produces a beautiful shot because he has a big, long nose and big ears that stick out to the side. It’s a close-up, but you have a beautiful, deep background behind him that you wouldn’t otherwise get if you were shooting on the standard 35mm, 50mm or 75mm lenses that get used for close-ups of human actors.

What was your editing package?
I edited in Adobe Premiere partly because the Red workflow with Adobe is so streamlined. It’s fast and nimble, especially with the way that we were shooting. The ease of adding to my DIT log every single day and logging shots, tracking what was working, and numbering shots was practical. These mundane but important administrative-type functions were super-critical in making it all work.

How did you store and manage the high-resolution files?
The short answer is: with a lot of storage! I must credit my post supervisor, Michael Cacioppo Belantara [of NY boutique Alchemist Post], and my colorist, Jeff Sousa. From other projects I’ve done, I know how much Red RAW R3D can bring things to life. Jeff and I were very much aligned in the look we wanted to achieve, embracing what was already great about the Red footage and taking into account our aesthetic and lighting choices.  I edited in Premiere using proxy files and then reconformed for Jeff to grade from the R3Ds.

Across 400-plus days, I had a lot of unusable footage, and I didn't throw things out as I was going. I'm sure I could have saved hard drive space if I had, but it felt like bad practice to be deleting potentially usable footage. It's around 73 terabytes of R3D footage. Also, I'm a DIT purist, so I had it backed up in triplicate. We spent a lot on hard drives.

Sound is a big part of any horror movie. How did you approach sound on Good Boy?
From the very beginning, my co-writer and I were thinking about how sound would play in this horror. There are scenes where Indy is at the top of the stairs, looking down at an empty space, and we tried to figure out how long we could sustain those pauses and beats of tense silence. We knew sound was going to be really important.

Brian Goodheart [co-producer and re-recording engineer] marshalled the whole post sound team and was involved from the start. He wasn’t on-set, but he was always seeing cuts and getting the raw production audio as well, which was not usable. It was almost all thrown out and then rebuilt in post.

Brian was responsible for the rebuild of the natural soundscape — the things that should be there diegetically. He worked with mixers and designer Kelly Oostman to add supernatural textures that accentuate tone and tension. Then, with composer Sam Boase-Miller, they each took a pass at the film. We’d get a pass with all natural sounds, then another version of the movie with just the supernatural sound design, and then a version just with musical swatches, then final music as we got further along.  As the director, it was great to be able to isolate the sounds and music and see how we could blur them to create tension or elevate some scenes. I’m passionate about sound, and it was a huge part of the odyssey to make this film.

Are you fighting off other point-of-view pet pictures, or do you want to do something completely different?
I’m very excited for my next film. I’m committed to a project that will have human actors who know they’re in a movie. I have gotten a few animal scripts sent my way, which is fun, but I don’t think I’ll make a pet movie for movie No. 2.

What I certainly will do is continue to use perspective in a unique and novel way. Not to chase a gimmick — the camera’s not always going to be on the ceiling for the next movie — but to see how I can use perspective, subtle lens choices and technology that backs it up to tell a story that, even though it might seem like it has familiar beats, looks very new and fresh because of the way it’s told.