Friday, 30 January 2026

Horror Film Good Boy: Ben Leonberg on His Directorial Debut

postPerspective

A haunted house horror story told from the perspective of a dog, the indie film Good Boy is the feature debut of Ben Leonberg, who co-wrote the script with Alex Cannon. Leonberg also directed, photographed and edited the 72-minute film over a three-year period, with help from his wife, Kari Fischer (also the film’s producer), and with their own dog, Indy, as the star.

Positive word of mouth that began at its SXSW premiere has continued since its release into cinemas by IFC, turning the microbudget indie into a viral hit. Good Boy has gotten some award love recently, too, including being named one of the National Board of Review’s Top 10 Independent Films of 2025 and picking up a nomination for Best Editing at the 2026 Independent Spirit Awards.

Leonberg and Fischer adapted their own home in a rural part of New York state into a creepy haunted house set for Indy, a red-haired Nova Scotia Duck Tolling Retriever, to play in.

As Leonberg explains, the dog had no idea it was making a movie, nor did they teach Indy any new tricks or commands. “He has no understanding of marks or cues, and he spent most of the shoot napping. Yet his on-screen presence is so magnetic that I put the whole movie on his oblivious little shoulders.”

Good Boy might appear to have come out of nowhere, but you have a solid background as a filmmaker. Can you explain?
Like a lot of people my age, I grew up making movies on VHS tapes and MiniDV. I didn’t have a formal film school education, so I was kind of self-taught, especially on the technical side. I learned how to make movies with a group of friends, shooting sketches for improv or making little commercials for the businesses in the town where I went to college. When I got into the real world, my first gig was in advertising for athletic apparel at Adidas and Reebok.

I started out as a one-man band filming smaller assets, such as a football player throwing a ball around with high school kids. This was during the DSLR revolution of 2009-2010, and I was one of the first people at Reebok and Adidas who knew how to use those cameras. My experience and crews grew, and although I never made a Super Bowl commercial, I did make one for the Stanley Cup.

I decided to go to film school at Columbia University for my master’s because I had never really taken a screenwriting class or a real directing class. I returned to my commercials work at a different level and with a new focus, and I began developing Good Boy on the side until we felt it was ready.

What was the light bulb moment that made you want to dedicate the best part of four years working on this story?
It came to me after watching Poltergeist, probably for the millionth time. If you remember, it begins with a golden retriever wandering through the house, aware of the haunting before the humans catch on. I thought somebody should tell a story entirely from that kind of character’s point of view: The dog who knows better.

There’s something so creepy about that in a horror movie, where you can’t help but imagine the worst. Even though it’s a traditional haunted house story, because we’re seeing it from Indy’s POV, it’s almost like we’re seeing a side of the story we haven’t seen before. As someone who loves dogs and grew up with them, I felt like that was a movie I would want to see.

I already had a technical background, but after my MA, I finally understood that story is the most important part. Everything flows from story. I became interested in how making every shot either of the dog or from his point of view could unfold a narrative in a new way.

The problem, of course, is that you can’t say to a dog, “Just look a little bit over here” or “Stop on this mark” the way you can to an actor.

I started making test films with Indy to figure out how to do even the basics, like shot/reverse shot for an actor who doesn’t know he’s in a movie. What sustained me was that I believed in the idea. Plus, I like a challenge.

To what extent did you storyboard the film?
Like most scripts, we worked on Good Boy for a long time before starting to film. The conceptual challenge was trying to stick to the rules of a canine protagonist. He’s not going to be able to speak. He’s limited to doing what a dog can actually do, so it was about using those limitations as an asset. The discipline meant telling the story from the point of view of what Indy sees, smells or hears.

Storyboards were super-important, and I created them on an iPad. I’m not a very good illustrator. They were stick figures, but the most important thing I got from doing it was the idea of how to use shot size, what angle the camera should be in relation to the line of action, and lens choice. Plus, Indy has a very neutral but intense expression, so can I use that to tell the story?

How did you solve the challenge of getting repeatable takes with Indy?
I would spend the day setting up the shot, doing everything from rearranging the props to doing the electrics. Sometimes, since this is an old house, I literally had to create outlets in places where none existed before. In the time I had left, I’d look at the previous day’s footage.

As unusual as the film is, we applied the fundamentals of filmmaking quite practically. We would approach a scene logically: you’d start with the widest coverage, then work your way in to a close-up. That’s the conventional approach to shooting, lighting and managing props, but with Indy, it was also an opportunity to set his blocking as we moved in.

Let’s say there’s a scene where Indy walks into a new space. I’d have a wide-angle shot of the room, then he would walk in and freeze because he hears a strange noise. We might shoot this 40 times, from which there might be eight usable takes. In each of those eight takes, he is hitting very different marks, so I have to pick one and then adapt the rest of the shots with lighting design, props and so on to match.

Every day, it was like making a bespoke custom setup that was in relation to what we had done either the day before or, in some cases, weeks or years before. In addition to that unusual way of making the film, I would often roll the camera and then run around to get into the shot with Indy because I was also training him and standing in as the body of the human actor. That was another level of complexity added on top.

At what point did you decide that Red was the right camera for this film?
When I got into commercials, I had used the Red One for years and knew it well. It was a camera I had worked on in the equipment room in my grad program at Columbia, so while I had experience with a lot of different cameras, I completely knew the Red ecosystem and workflow.

I’d filmed tests with Indy on a Red One, and one of the things I realized was that it was going to be extremely beneficial to shoot at a higher resolution than our ultimate delivery. To get the best framing for Indy, I would want to have the ability to crop and reframe in post.

As mentioned, he can’t hit exact marks. I was almost approaching every setup a little bit wider and a little bit further back through the lens or the camera placement so I could then reframe to account for Indy’s variability. That’s when the Red Dragon came into the equation. We started out with a Red Dragon-X 5K and then upgraded the firmware so it could shoot 6K, which was perfect for us. I was already comfortable using the camera, and the extra resolution enabled us to reframe in post. That was one of the most important reasons to shoot on this camera.
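To make that reframing headroom concrete, here is a minimal sketch of the arithmetic. The resolutions are assumptions for illustration only (roughly 6144 x 3160 photosites for the 6K capture and a UHD finish); the film’s actual delivery spec isn’t stated in the interview.

# Illustrative only: how much repositioning room an oversampled capture leaves.
# Assumed resolutions; the production's real delivery raster is not given above.
SOURCE_W, SOURCE_H = 6144, 3160      # approx. 6K capture
DELIVERY_W, DELIVERY_H = 3840, 2160  # assumed UHD finish

headroom_w = SOURCE_W / DELIVERY_W - 1.0
headroom_h = SOURCE_H / DELIVERY_H - 1.0
print(f"~{headroom_w:.0%} spare width, ~{headroom_h:.0%} spare height for reframing a missed mark")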

What was your lens choice?
The Red has a very color-accurate, clinical representation of the world as a baseline, which you can push against using older glass. I wanted to marry the bold color you can get from the high dynamic range of the camera with a more textured, handmade look through vintage lenses. I tried several different options, including Leica and Canon FD, but I really liked the Nikon AI lenses, both for how they looked and for their focal range.

The hero lens of the film is a 15mm specialty wide-angle lens. It’s got a lot of quirks. It can’t focus to infinity until you get to f8 or above, but because it’s a real wide-angle lens that isn’t full of fisheye distortion, it’s perfect for a canine face. Normally, if you were to use that kind of a lens for a close-up on a person, it would not be super-flattering, but for Indy, it produces a beautiful shot because he has a big, long nose and big ears that stick out to the side. It’s a close-up, but you have a beautiful, deep background behind him that you wouldn’t otherwise get if you were shooting on the standard 35mm, 50mm or 75mm lenses that get used for close-ups of human actors.

What was your editing package?
I edited in Adobe Premiere partly because the Red workflow with Adobe is so streamlined. It’s fast and nimble, especially with the way we were shooting. Being able to add to my DIT log every single day, logging shots, tracking what was working and numbering shots, was practical. These mundane but important administrative functions were super-critical in making it all work.

How did you store and manage the high-resolution files?
The short answer is: with a lot of storage! I must credit my post supervisor, Michael Cacioppo Belantara [of NY boutique Alchemist Post], and my colorist, Jeff Sousa. From other projects I’ve done, I know how much Red RAW R3D can bring things to life. Jeff and I were very much aligned in the look we wanted to achieve, embracing what was already great about the Red footage and taking into account our aesthetic and lighting choices. I edited in Premiere using proxy files and then reconformed for Jeff to grade from the R3Ds.

Across 400-plus shooting days, I accumulated a lot of unusable footage, and I didn’t throw things out as I went. I’m sure I could have saved hard drive space if I had, but it felt like bad practice to delete potentially usable footage. It’s around 73 terabytes of R3D footage. Also, I’m a DIT purist, so I had it backed up in triplicate. We spent a lot on hard drives.

Sound is a big part of any horror movie. How did you approach sound on Good Boy?
From the very beginning, my co-writer and I were thinking about how sound would play in this horror. There are scenes where Indy is at the top of the stairs, looking down at an empty space, and we tried to figure out how long we could sustain those pauses and beats of tense silence. We knew sound was going to be really important.

Brian Goodheart [co-producer and re-recording engineer] marshalled the whole post sound team and was involved from the start. He wasn’t on-set, but he was always seeing cuts and getting the raw production audio as well, which was not usable. It was almost all thrown out and then rebuilt in post.

Brian was responsible for the rebuild of the natural soundscape — the things that should be there diegetically. He worked with mixers and designer Kelly Oostman to add supernatural textures that accentuate tone and tension. Then, with composer Sam Boase-Miller, they each took a pass at the film. We’d get a pass with all natural sounds, then another version of the movie with just the supernatural sound design, then a version with just musical swatches, then final music as we got further along. As the director, it was great to be able to isolate the sounds and music and see how we could blur them to create tension or elevate some scenes. I’m passionate about sound, and it was a huge part of the odyssey to make this film.

Are you fighting off other point-of-view pet pictures, or do you want to do something completely different?
I’m very excited for my next film. I’m committed to a project that will have human actors who know they’re in a movie. I have gotten a few animal scripts sent my way, which is fun, but I don’t think I’ll make a pet movie for movie No. 2.

What I certainly will do is continue to use perspective in a unique and novel way. Not to chase a gimmick — the camera’s not always going to be on the ceiling for the next movie — but to see how I can use perspective, subtle lens choices and technology that backs it up to tell a story that, even though it might seem like it has familiar beats, looks very new and fresh because of the way it’s told.

Grammy-winning artists set gold standard for stadium shows with RED Cine-Broadcast

my interview & words for RED Digital Cinema

article here

2025 was a banner year for Fuse Technical Group, highlighted by high‑profile projects including 48 dates of Grammy‑winning, multi‑Platinum R&B artist Chris Brown’s Breezy Bowl XX Tour 2025 and the record‑setting closing concert of Zach Bryan’s Quittin’ Time Tour in Michigan. Fuse developed a unified technical blueprint that performed seamlessly across both productions, designing and building two custom camera systems to support the shows, which were staged in major stadium venues throughout the United States and Europe.

“Our key differentiator is the ability to reinvent the industry with custom solutions,” says Ben Johnson, project manager for Fuse. “When you call us, you are connecting with the most skilled, creative collection of brain power in the staging industry. RED Cine-Broadcast is our flagship system now. When someone wants our best option, it is RED.”

Johnson notes that Fuse has been receiving a lot of requests over the past few years for a cine-style broadcast solution. “We had tried a couple of systems, but they simply were not as seamless as we needed them to be,” he explains. “We wanted a solution that would integrate fluently with our standard touring systems. When Justin Collie, live event production designer at Nimblist, requested a cinema option for Chris Brown around the same time that RED announced their Cine-Broadcast system, the timing worked out perfectly. We didn’t want to go down any other route.”

Fuse designed and built one camera system based around a Ross Ultrix media processing platform and another around a Grass Valley K-Frame. Fuse purchased 17 V-RAPTOR XL cameras with Cine-Broadcast modules and base stations to integrate seamlessly with its infrastructure, which already included 360 degrees of ROE Visual CB5 LED screens, a Disguise GX3 media server package and all lights, video and rigging for Brown.

“We had multiple RED cameras on the show plus all the SMPTE fiber connectors,” explains Josiah Battles, video director on Brown’s Breezy Bowl XX Tour. “There have definitely been times with other camera products where I’ve not had as much support from the camera company, but RED was different, and I felt very comfortable knowing they were there. They were extremely helpful in prepping the show and I was excited by the final result.”

The camera package comprised two V-RAPTORs at front of house with Fujinon 24-300mm Duvo zooms; another in the concourse with a 25-1000mm lens; one or two wireless handheld V-RAPTORs on stage with 14-100mm zooms; three cameras mounted on remotely operated dollies (supplied by Luna Remote Systems); and two tower cams 26 feet above the stage, also with 24-300mm Duvos. Battles augmented the REDs with Sony PTZs, which he was able to color match by applying a RED LUT.

“The goal was to try to keep the zoom lenses at an aperture between 2.8 and 5.0,” he explains. “We were able to see that shallow depth of field many times during the show, especially when Chris would sing to the tower cameras.”

Beautifully cinematic sharp foregrounds against blurred backgrounds are just one advantage of using larger-sensor cine cameras in a broadcast environment.

“The dynamic range was a big step forward,” Battles says. “Even the design team at front of house, who aren’t necessarily video technicians, were immediately able to recognize the difference with these cameras. Previously, when Chris was on stage, you couldn’t see his face clearly all the time on the IMAGs, especially in the dark. With RED they could. That alone was enough for them to appreciate what a cine camera can do.”

The Log workflow to capture the extra detail in the bright and dark areas was new to the production team. Shooting Log gives the video files a higher dynamic range than a standard gamma curve. The camera Log files were fed to a central disguise media server and converted, with a change of color space, to the final Rec.2020 HDR image.
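As a rough illustration of why a Log curve holds more dynamic range than a standard gamma curve, the sketch below compares the Rec.709 transfer function with a toy log encode. The log constants are invented purely for illustration; they are not RED’s actual Log3G10 curve or any real camera’s.

import math

def rec709_oetf(linear):
    # ITU-R BT.709 transfer function: scene-linear 0..1 -> code value 0..1
    if linear < 0.018:
        return 4.5 * linear
    return 1.099 * linear ** 0.45 - 0.099

def toy_log_encode(linear, mid_grey=0.18, stops_below=8.0, stops_above=6.0):
    # Invented log curve (NOT a real camera curve): spreads ~14 stops
    # around mid-grey across the 0..1 code-value range.
    stops = math.log2(max(linear, 1e-6) / mid_grey)
    return min(max((stops + stops_below) / (stops_below + stops_above), 0.0), 1.0)

for linear in (0.18, 1.0, 4.0):  # mid-grey, diffuse white, a bright highlight
    gamma_cv = min(rec709_oetf(linear), 1.0)  # standard gamma clips above white
    print(f"linear {linear:4.2f} -> gamma {gamma_cv:.2f}, log {toy_log_encode(linear):.2f}")

With standard gamma, anything brighter than diffuse white clips to the same code value, while the log encode still assigns distinct values to highlights, which is the extra detail graded back in downstream.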

“The biggest draw for Fuse with RED’s Cine-Broadcast system was being able to bring a cinema camera into our established infrastructure of Ross and Grass Valley switchers for remote color grading. All the LUTs are applied in the server so the camera operators don’t need to worry about shading,” Johnson says. “It's definitely a different workflow than a standard broadcast show or even a standard touring show.”

Battles adds, “After that, it wasn’t hard at all to sell them on keeping these cameras. Even the lighting designer told me that now he doesn’t have to light for the camera as much as before. He can design lighting for the whole show knowing that RED is going to capture all the detail, whether in deep contrast or extreme brightness, and display that on screen.”

The global shutter of the V-RAPTOR crisply captured all the laser lights and pyrotechnics without the strobing or motion artifacts familiar from cine cameras with rolling shutters, a prerequisite in live event production.

“Whatever the camera was showing on screen was true to the effect in the venue,” Battles reports. “The lighting team began experimenting with the lasers from show to show, adjusting the refresh rates of the lasers depending on the kind of look they were going for.”

When multi-Platinum, Grammy-winning country singer-songwriter Zach Bryan set the record for the largest ticketed U.S. concert in history last September, he did so with 112,408 fans enjoying the full scale, energy and atmosphere of the historic night in astounding cinematic imaging.

“Their key request was to shoot a cinematic version of the show,” recalls Johnson. “However, they also wanted to minimize the loss of seats, which meant we couldn’t introduce a whole set of additional cameras. Instead, we converted the touring camera system to a cine-broadcast system. And that’s where RED came in.”

The set-up was a spectacular finale to the Quittin’ Time Tour which began in 2023. Fuse was proud to support the milestone event at Michigan’s famed “Big House,” integrating a fleet of RED Digital Cinema V-RAPTOR XL cameras with Cine-Broadcast Modules into a live multi-camera broadcast workflow.

Another exciting creative advantage for artists with RED Cine-Broadcast is the ability to create different looks for songs in their set.

Battles explains, “You can do a black-and-white grade for one song or really enhance the pinks and reds for another or bring down all the contrast in the next. You could even do this verse by verse because all of the looks can be time-coded, programmed and played back via the server. It means the design team has more control over the final look. The look can be more cohesive and the show even more dynamic.”

A challenge with any concert lighting is capturing the true vibrancy of colors like reds and purples in camera. “With RED we can grade the live output in DaVinci Resolve and really bring out those reds and purples for certain songs,” Battles says. “Being able to grade is the money right there.”

Many artists also tour with their own content creation team, capturing material for social media. Now, that content can be captured and produced cinematically.

“The content lead for Chris Brown came to me and said he wished we had footage in Log so he could grade it,” Battles recalls. “This year, I said that’s possible now that we have our RED workflow. We have hours of 12G raw footage ready to grade. They also shoot their own behind-the-scenes footage using KOMODO, shooting raw for the grade, and that material goes viral all the time on social media. Now that the RED concert cameras blend with their RED cameras, it boosts the whole production of those social media posts.”

The RED Cine-Broadcast Module allows live event and music productions to leverage world class image quality for their shows while slotting seamlessly into standard broadcast workflows. It’s a package that Fuse Technical Group will continue to deliver for artists and venues of the highest caliber.

“RED Cine-Broadcast is the first camera system Fuse has owned that has been able to integrate with SMPTE fiber, the cable stock we've been using for a long time in broadcast and touring,” Johnson concludes. “It’s exciting for us to have an infrastructure that we're already very familiar with alongside a cinema broadcast camera that brings amazing images. We’ve also built flypacks for touring and remote live event work and it was great to be able to integrate RED Cine-Broadcast directly into those. It's flyable. It’s travelable. It’s Go Global. Anywhere.”

 

Monday, 26 January 2026

MovieLabs builds on two decades of achievement in film and TV to become the guiding voice for the future of media creation

written for MovieLabs

article here 

At a time when media and entertainment faces changes from everywhere all at once, MovieLabs offers a clearer vision and path for the future. For two decades it has built practical solutions to real-world problems by working closely with industry stakeholders. It’s an approach that has produced tangible results. Whether lifting media out of proprietary silos, mapping a path toward collaboration in the cloud or guiding the secure, interoperable use of artificial intelligence, the primary focus is never technology for technology’s sake. It is to empower creative teams to achieve more. If workflows can adapt to new situations and technologies, then creativity can be more flexible. If production processes are automated and sped up without losing creative control, there’s simply more of the most precious resource – time.

Though MovieLabs was founded by Hollywood studios and carries ‘Movie’ in its name, these ideas have resonated far beyond film and TV. In January 2025, the independent research lab launched the MovieLabs Industry Forum to embrace new members, with companies spanning technology to talent, from global industry leaders to start-ups. The collective aim is to enable creativity with greater efficiency and flexibility, in the knowledge that no one company can do so alone. “The MovieLabs 2030 Vision is now the industry’s vision for the future of media creation,” says Richard Berger, MovieLabs’ CEO.

The MovieLabs origin story

At the time of its founding in 2006, the film and TV industry was beginning a generational transition away from physical media to digital delivery. This required the entire infrastructure for distribution and protection to be standardized and upgraded with more efficient, more secure workflows using the latest technologies and software systems. For 15 years the organization’s efforts were central to a number of standards, common specifications and best practices that streamlined and automated distribution chains and secured creative assets. The goal was always to deliver new experiences to viewing audiences worldwide. Its achievements include a suite of specifications for online distribution (the MovieLabs Digital Distribution Framework, or MDDF) and the Entertainment ID Registry (EIDR), a universal unique identifier for content that automated digital distribution of film and TV titles just as UPC codes had revolutionized traditional retail. Both technologies won technical Emmys for their contribution to the industry. Further work devising the Enhanced Content Protection (ECP) scheme helped secure digital content for consumer distribution of new formats including UltraHD and HDR. Widely implemented since 2013, the latest updates to ECP were published in August 2024 in response to evolving threats.

Launching the 2030 Vision

In 2019, while continuing to innovate in distribution, MovieLabs opened a parallel track in media creation. Building on its heritage as a forum for cross-company cooperation, MovieLabs engaged its studio members and, crucially, the wider production, postproduction and technology community. It quickly found alignment around a bold vision that extended production, post and VFX into the cloud, commonly referred to as the MovieLabs 2030 Vision. It helped that MovieLabs is an independent non-profit. Since it doesn’t make products or services, it is not in competition with any of the companies it works with, clearing the way to focus on a common agenda. “We could bring in market competitors to sit side by side in our meetings and on our panels,” says Berger. “We have competing cloud companies and creative application companies talking together. This is essential to achieve interoperability. Our formula is to be very transparent about where we’re going and what we’re doing.” The outcome was a blueprint for the evolution of media creation. This 10-year plan for a more efficient media pipeline established principles for moving all assets to the cloud, for a security and access methodology based on Zero Trust and for standardized deployment of software-defined workflows.

Additionally, MovieLabs has released the Ontology for Media Creation (OMC) and continues to extend its functionality. The OMC is a set of defined terms and a common data model enabling interoperability between people, organizations and software. “Creative enablement is at the heart of what we’re doing,” explains Berger. “We want to facilitate more secure, efficient and interoperable media creation workflows where creators can choose whichever tools and services they want and just know they’ll work seamlessly together. We’re enabling friction-free collaboration from wherever you are and whatever tools you’re on.” Fortunately, MovieLabs developed the concept before COVID hit, when the entire world had to pivot to remote, distributed connections overnight. In a post-pandemic world everyone understands there are many good reasons to keep working this way. Eddie Drake, SVP/CTO of Disney Studio Technology, says, “The economic landscape has changed, but the Vision is still extremely relevant. While we have to be more efficient, we also have to enable the best experiences we can for the creative community.”

 

Practical action, tangible benefits

The 2030 Vision was never a prediction or a prescription; instead, it is the ‘North Star’ and blueprint to guide the industry. “The reason why the 2030 mission is still relevant is because it is a set of principles for making the future what we want it to be,” says Drake. Since every cog in the machine is moving at a different pace, it was always likely that the transition in some parts of the industry would extend into the next decade. At the same time, dozens of companies have already implemented parts of the Vision. MovieLabs has been collecting some of these case studies as public reference points under the 2030 Showcase Program. This series of case studies recognizes an array of organizations, including Lionsgate, Riot Games, Marvel Studios, the Royal Opera House and Accenture, that are applying emerging cloud and production technologies in accordance with 2030 Vision principles.

MovieLabs is now working on the next phase of implementation. The 2030 Greenlight program matches technology companies with service providers and creatives to build and deploy solutions to everyday challenges and inefficiencies using the 2030 Vision as a template. According to Berger, “This process highlights gaps in the 2030 Vision, providing an honest assessment of what went well, where the industry needs to improve, and how the vendor community can help in solving issues.” “While we’ve made meaningful progress, there’s still important work ahead for the industry,” says Drake. “I’m excited to see the solutions we’ll build together.”

 

Dealing with the security challenge

Perhaps the biggest hurdle in the 2030 roadmap is production security. Swapping decades of ingrained thinking about locking down a physical facility for a Zero Trust approach to data on a network is a monumental piece of change management. Berger explains, “Productions are naturally very risk-averse, so changes to any aspect are very challenging. Most security today isn’t security by design. It is security as an add-on after the workflow has been designed. There’s a perception that better security will get in the way of the creative process, but that doesn’t need to be the case.”

MovieLabs has prioritized a Zero Trust education program and has also partnered with the Trusted Partner Network (TPN), which writes and maintains Motion Picture Association content security best practices. One of MovieLabs’ key messages is the principle of ‘least privilege’. This fundamental concept in information security states that a user, process or program should have access to only the specific data, resources and applications necessary to perform its intended function. Least privilege aims to minimize the risk of unauthorized access and misuse of sensitive information. “Security requires a lot of planning,” says Drake. “Applying Zero Trust to legacy infrastructure is tough. It’s easier when we can look at greenfield opportunities and design Zero Trust from scratch without legacy facilities.”
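As a purely illustrative sketch of the least-privilege idea (not a MovieLabs, TPN or studio specification, and with made-up principal and scope names), each workflow participant is granted only the scopes its task requires, and everything else is denied by default:

# Toy least-privilege check; all names and scopes are hypothetical examples.
GRANTS = {
    "dailies-transcoder": {"read:camera-originals", "write:proxies"},
    "editor-workstation": {"read:proxies", "write:timelines"},
}

def is_allowed(principal, scope):
    # A request succeeds only if the scope was explicitly granted: default deny.
    return scope in GRANTS.get(principal, set())

print(is_allowed("editor-workstation", "read:proxies"))           # True
print(is_allowed("editor-workstation", "read:camera-originals"))  # False: not needed for the task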

 

Industry Forum for dialogue

Work on developing and implementing the 2030 Vision is grounded in the MovieLabs Industry Forum. This provides a safe space for vendors and their clients to come together in frank discussions about interoperability, cloud workflows and metadata exchange. “Forum members can be there to tell us of a solution or advise on what we need to do to make the Vision a reality,” says Drake. “For us, the Forum is invaluable because we can provide insights that inform vendor roadmaps. We can talk about the technology challenges that we’re seeing with vendors, who can then bring that information back to their dev teams and react to it.” He adds, “We also hear a lot from vendors that it’s much easier to enhance their products to reach the goals of the Vision when studios are all aligned. At the end of the day, we need to be working together to move the industry forward.” Yoshikazu Takashima, SVP Advanced Technology at Sony Pictures Entertainment, agrees: “MovieLabs has access to a wide community of creatives, technologists and academics who can collectively test ideas far quicker than we could alone. We appreciate the honest, direct feedback we get at the Forum.”

 

Future media creation through 2030

As technology has converged and video as a communications tool has become ubiquitous, a far wider community of technology companies and creative businesses has coalesced around the 2030 Vision. The MovieLabs Industry Forum has expanded to provide common ground for any company that is actively re-inventing and re-tooling its supply chain in alignment with the 2030 Vision. Nearly 50 organizations as diverse as Final Draft, United Talent Agency, Prime Focus Technologies and Bria.ai have joined forces with the Forum’s Leadership Council (Adobe, AWS, Avid, Dolby, DreamWorks Animation, Microsoft, Paramount Global, Skywalker Sound, Slalom, Sony Pictures Entertainment, The Walt Disney Studios, Universal Pictures and Warner Bros. Discovery) to shape the future. The door is open to technology and creative service providers, application developers, production companies and infrastructure providers. “Only by embracing expertise across the entire digital media value chain will the industry be able to align on, collaborate on and solve issues common to all,” affirms Leon Silverman, Chair of the MovieLabs Industry Forum. Where some see only uncertainty and fragmentation, the MovieLabs Industry Forum points toward the future of media production.

 

Artificial Intelligence and 2030 Vision

No issue is more urgent than assessing the impact of artificial intelligence. There are many dimensions to the technology, so it’s worth stressing that MovieLabs’ focus is on how AI can be applied in the context of helping achieve the 2030 Vision. “We coined the term IA (Intelligent Automation) as a powerful combination with AI,” says Berger. “That pairing can be very effective in taking some of the mundane tasks out of the workflow. Using AI for more creative tasks is a choice for creative teams, like any other creative technology.” Pertinent questions for the MovieLabs Industry Forum include ‘Can AI enhance interoperability?’, ‘Is there a common approach to improving GenAI outputs?’, ‘Would a standard vocabulary for training AI models be beneficial?’ and ‘How do you track the provenance of creations from both GenAI and humans within a workflow?’ Since AI introduces new threats and security considerations – as well as potential solutions for defense – the risks and merits for content protection are another key consideration.

At a crucial point in the industry’s evolution MovieLabs stands as a beacon for collaboration for a future that honors the past while embracing innovation. “It is essential to continue to embrace emerging technologies in ways that empower storytellers and the entire creative community,” Silverman says. “While much work lies ahead, we are gathering the right companies and voices to realize the 2030 Vision future that has inspired so many.” James Crossland, EVP, Head of Global Content Operations at Warner Bros. Discovery, concludes, “We’ve got a huge change management challenge in front of us, but if there is a through line, we will find it in the MovieLabs Industry Forum. We will do our part to continue the mission to apply the 2030 Vision and as we find more and more use cases you will see an accelerating groundswell of adoption.”

 


Wednesday, 21 January 2026

Hanging by a Wire: Nail-biting documentary delicately balances archive with fresh footage

 my interview and words written for RED

article here

The life-and-death rescue of a group of boys and two adults, set against the ticking clock of a single frayed cable, is the subject of a hair-raising feature documentary premiering at the 2026 Sundance Film Festival.

Hanging by a Wire is the edge-of-the-seat account of a true-life incident from 2023 that began when a group of teenagers set off on their routine journey to school by cable car across a mountain pass in northwestern Pakistan. When two of the system’s cables snapped, the passengers were left in mortal danger nearly a mile above the ground, with time running out before their lifeline, the third cable, would give way.

Multi-Emmy-nominated Pakistani filmmaker Mo Naqvi (The Accused: Damned or Devoted; Turning Point: 9/11 and the War on Terror) and producer Bilal Sami (David Blaine – Do Not Attempt) conceived Hanging by a Wire as a real-time thriller built from footage shot moment-by-moment as it unfolded. The hours and hours of growing tension and multiple failed rescue attempts were captured from inside the cable car on the boys’ phones, from drones, a military helicopter, and from the crowd below.

“I was immediately hooked by Mo’s vision,” recalls cinematographer Brendan McGinty (Secrets of the Neanderthals; Welcome to Earth) who was invited by Naqvi to shoot the film. “The archive footage of the event was mind-blowing, a group of schoolboys hanging a mile high from a damaged cable car in the remote mountain region of Battagram in Pakistan. We all understood from the get-go that we needed to hold true to the inherent drama of this real situation, and that we wanted to meet this documentary reality with all of the cinematic flair of a Hollywood action thriller.”

Their visual approach was based in part on their cinematic aspirations for the film, largely founded on the incredible archive footage of the event but also on the wealth of rich recce material of the region and of the real-life key protagonists. There followed two shoots in 2025: one more documentary-focused at the start of the year and a second, more drama-focused, towards the end.

“The daytime sequences come almost entirely from what people filmed on their phones,” McGinty explains. “Our own shooting focused on interviews and observational material—traditional documentary work—used to understand the characters and their relationships.

“But the most extraordinary part of the rescue happens at night, and there is almost no footage of it. Phones stop working. There is no light. Yet the rescue unfolds over many terrifying stages, nearly failing multiple times, with lives repeatedly at risk. The challenge then becomes an ethical one: how do you tell this story truthfully without inventing anything?”

The solution they arrived at was to involve the real protagonists in the retelling. The boys in the cable car and the rescuers shared authorship of their own story. They physically reenacted what happened, placing themselves in a replica of the cable car (all under studio-safe conditions) and guiding the filmmakers through each moment.

“Nothing was scripted. They weren’t performing lines; they were remembering. That level of truth made the work extraordinarily difficult but also incredibly powerful. As a cinematographer, shooting drama under those constraints—where reality is the benchmark and ethics are paramount—is far harder than shooting fiction. You are constantly checking yourself against the truth of what happened.”

From inception, an IMAX presentation was part of the discussion. It was one of several reasons why McGinty selected RED V-RAPTOR for the production.

“To be honest, I don't think there was any other choice for me. The documentaries I’ve seen in IMAX have been wonderful immersive experiences and our story would be perfect for this. The minute those discussions began I'm thinking about protecting the resolution.”

Importance of Optical Low-Pass Filters

He elaborates, “Shooting more resolution than you actually need is always the way I would want to go. Shooting 8K on the V-RAPTOR for a 4K finish means you can reframe and push into the image, but because the resolution is so high, you don’t need false detail.

“A lot of lower-end cameras, which barely reach 4K, employ automatic sharpening techniques, so the picture bristles with more detail than is actually there. It’s false detail. When you’re shooting 8K, you really don’t need any of that. Shooting 8K with RED feels very much like how my eye sees the world. It’s a very soft, naturalistic rendition of texture.”

Texture is important to McGinty, who shot this film in very high-contrast environments. “There are brittle trees and bushes in hard light and shadows. You end up with footage that really tests the fine detail. If your camera doesn’t have optical low-pass filters (OLPFs), it can produce false, high-frequency detail, which is actually a bit of a nightmare to manage in post. You can end up with these very shimmering images where you’re not looking at real detail but at a computer’s version of what detail might look like.”
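A simple way to see the “false detail” McGinty describes is classic aliasing: sample detail finer than the sampling grid can represent without low-pass filtering it first, and it reappears as detail at a lower frequency that was never in the scene. The one-dimensional sketch below is purely illustrative, not a simulation of any camera.

import numpy as np

# Illustrative aliasing demo: a 70-cycle pattern sampled at 100 samples/unit
# (Nyquist limit 50) masquerades as a ~30-cycle pattern after sampling.
fs = 100
x = np.arange(0, 1, 1 / fs)
fine_detail = np.sin(2 * np.pi * 70 * x)   # detail beyond the Nyquist limit

spectrum = np.abs(np.fft.rfft(fine_detail))
print("strongest frequency after sampling:", int(np.argmax(spectrum)), "cycles")
# An optical low-pass filter removes such detail before sampling, so it cannot alias.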

Protecting the digital negative

McGinty began his career nearly 30 years ago shooting shorts and independent features on film, which is why he prefers to shoot raw. “It’s very important to me that I protect the digital negative. Raw is the only way to shoot digital, and among high-end cine cameras, RED owns compressed raw.”

However, shooting uncompressed raw for docu-style footage is hardly practical, especially at 8K and in a remote region. “You would record terabytes and terabytes of data,” he says. “The files are too big for field work. So, one of the real geniuses of RED from the beginning is REDCODE RAW. I want a raw image to go back to which always protects and holds highlights and where I know there's more in the shadows. So, shooting on location in high contrast situations and in low light with R3D was a big deal for me.”

McGinty shot a lot of handheld and appreciates the ergonomics of the camera. “V-RAPTOR does everything I need it to do in a very simple way. It has very high frame rates if I need them. It has a very simple interface which is ideal for documentary work. You don't want a camera with infinite complexity and menus. You just want something to help you be responsive to a situation.”

Majesty of Vista Vision

The final reason behind his choice was V-RAPTOR’s VV sensor, which frames for an aspect ratio of 17:9. “We’re in this exquisite mountainous region of northern Pakistan, a mountain-climbing mecca, so I knew that to capture some of that monumentality and majesty I wanted to be shooting VistaVision. Also, in terms of telling our story, I felt the absolute heroism of everyone concerned. These are real heroes, people who will step into danger to save the lives of others. I thought an appropriate canvas for us to work on would be large-format VistaVision. The RAPTOR just ticked all those boxes for me.”

He paired the V-RAPTOR with Angenieux EZ zooms (22-60mm and 45-135mm), occasionally dipping into some macro and fast-aperture prime options. “I particularly like the close focus on the EZs, and at the 135mm end of the lens this produces very intimate moments and details. With these on the V-RAPTOR, we could move from the wide-angle majesty of the rugged Pakistani mountain landscapes to the close-up detail of an eye without losing the moment to a lens change.”

For stabilized work, McGinty used the DJI Ronin 4D 8K, specifically because it includes an OLPF. This was fitted with a set of cine-modified Nikon Nikkor AIS primes. They also used GoPros, drones and some Sony cameras, including FX3s and FX6s, mostly for second-unit work.

He was aided by AC and long-time collaborator Charlie Perera, and by Taseer Ali, who operated B-camera on the first shoot.

Pushing the picture in post

McGinty’s camera choice was further vindicated in post at Molinare in London, where, working with colorist Jake Davies, he found he was able to push the RED raw files further than he thought possible.

“I would happily shoot 2000 ISO on V-RAPTOR, maybe 3200, and know that I’m still going to have a comfortable picture without too much noise. But what was interesting in the grade was how much noise there was in both the 4D footage and a lot of the Sony footage.

“Jake and I found we were able to regularly push the RED to higher ISOs than I was getting on the Sony and the 4D with—importantly—less noise. The moment we began to push the pictures to try to lift the 4D footage, we couldn’t. It was pretty maxed out. I was shooting 3200 ISO to ProRes RAW on the 4D on some of the night stuff, but the picture fell apart quite rapidly. There wasn’t anywhere near as much there as there was in the R3D files.

“What shooting RED raw means when you’re shooting in low light is the lack of noise in highlights and shadows and the ability to push the picture way beyond my exposure on the day while still not falling apart. Our ability to grade the R3D footage was head and shoulders above any of the other camera formats that were in there.”

He and Davies explored film emulation and lens emulation ideas as a way to tie the footage together. “There is a vast archive at the heart of the film, along with second-unit work and extensive RED material. A unifying aesthetic, such as film emulation, offers an effective way to bring it all together.”

Editor William Grayburn meticulously reconstructed the timeline from multilingual interviews, archive, and new footage. That process revealed gaps in the visual storytelling, which guided the second production shoot. “Without that editorial clarity, the film simply wouldn’t exist in its current form,” McGinty says.

“Hand on heart, this is a phenomenal film. It’s gripping, emotionally honest, and visually powerful. It has the pull of a thriller but the integrity of a documentary. I’m incredibly proud to have been part of it.”

The documentary is produced by Naqvi alongside EverWonder Studio and Mindhouse Productions and premieres in Park City, Utah, on January 22.

Producer Sean Sforza on making The Beast in Me

interview and words for Sohonet

article here

There’s a potential murderer on Long Island in the psychological thriller The Beast in Me, as new neighbors played by Claire Danes and Matthew Rhys face off in a deadly game of cat and mouse. Netflix’s latest hit miniseries is created, written and executive produced by Gabe Rotter (The X-Files), executive produced and written by Daniel Pearle, and led by showrunner Howard Gordon (24; Homeland). Executive producers also include Jodie Foster and Conan O’Brien. Production ran from September 2024 through February 2025, and the program debuted on Netflix in November to critical acclaim. Running the show’s extensive postproduction operation was producer Sean Sforza, whose credits include Julia (HBO Max), Empire (Fox) and Bull (CBS).

While Sforza says his goal when entering the industry some 25 years ago was to work on features, he instantly thrived on the unique pressures of fast-turnaround episodic television.

“They were shooting two features in the same building where we were making a series I was on. Over the course of ten months, we completed 24 episodes, and they were still only a quarter of the way through their movie. I remember thinking, ‘I don’t know if I have the patience for this.’

“I realized I love the pace of television. We don’t have enough time to endlessly go over things. We have to look immediately at what’s important to the story. Some of my favorite showrunners ask, ‘Are we fooling ourselves by being in love with our work or is it actually moving the story forward?’ That question sticks with me.

“Howard Gordon and I have worked together on multiple projects. I’m very fortunate: I’m in post, but I’ve been able to contribute to so many aspects—main titles, visual effects, color, and even time in the edit room. I get to have my hands involved in much of the process from start to finish.”

Sohonet: Regarding The Beast in Me, what were your responsibilities, and what were the key components—people, facilities, technology—that you brought on board?

Sean Sforza: A bit of everything. I set up workflow pipelines. Production was in New Jersey, and the rest of editorial and post were in New York. We had four editors working on blocks of two episodes each, one of whom was located in Los Angeles. We also had to connect our composers (Sean Callery, along with Sara Barone and Tim Callobre). During production, Howard was constantly traveling—in Canada for Accused, then later Spain and Cuba for another project—so setting up a workflow that kept everyone connected was crucial.

We coordinated dailies from set in New Jersey to be processed and edited in New York and Los Angeles. Our cinematographer, Lyle Vincent, and director, Antonio Campos (who was also one of the executive producers), originally envisioned the series being shot on film, but when budgetary constraints arose, we faced the challenge of transitioning to digital without compromising the intended look and depth of film. ClearView played a crucial role in sharing files of our look tests, including daylight, night, interior and exterior shots, to maintain the feel and look of our world. This can be tricky because these large files get very compressed in the editorial process, and ClearView gave us the confidence that everyone in their different locations was viewing the same look and color.

I also hire the editorial team—visual effects, sound, post staff and vendors that best suit the project. It’s a very collaborative effort between the creatives and the studio and network to make sure we have all our players on board.

Sean Sforza, Post Producer

How did you use ClearView to connect your production staff and executives on The Beast in Me?

I’d used ClearView countless times on several projects, including on Julia, a wonderful series about Julia Child that debuted in 2022. On The Beast in Me, ClearView was an essential part of our team.

When dailies come in, the high-definition files go onto a viewing platform holding terabytes of data. We compress the files into a smaller QuickTime format to ensure easy accessibility for all team members, giving each department a thorough review, allowing us to verify that all necessary material was captured during the previous day’s work and to maintain continuity when returning to a scene that may be shot several weeks later. But those files don’t show the true look or sound.

ClearView was an essential tool. If a problem came up on the day, or there was something the next morning the production had to verify, we could all jump on immediately—the director on set during lunch, the editor and EP in their office, sometimes even in their cars. Howard was traveling constantly and would jump on from his car or at the airport. ClearView let us all see the same high-quality image at the same time and exhale: ‘Okay, that color looks right, we have the coverage we need to complete the sequence. We’re good.’

Being able to send a link and suddenly be ‘in the same room’ despite being scattered everywhere—that’s huge in modern filmmaking.

Do you calibrate devices so everyone sees the same thing?

Yes. iPad Pros are the best—absolutely worth the premium. Their resolution is reliable, and I trust them more than laptops, whose screens deteriorate over time. Editors work on calibrated monitors in their offices. Most of us review on our laptops for the size of the screen and the extra convenience of having them on hand, but when in doubt, we switch to iPads to double-check the look.

You mentioned using ClearView during production. What about during review and approval in editorial?

We couldn’t do what we do without it. For example, if a director shot one half of a scene and needed to return days later, or a shot that was never intended to be VFX now needed the Manhattan skyline added, the editor and director could meet on ClearView to review assembled footage and confirm which pieces were needed to complete the scene.

Once they’ve wrapped one show, directors often move on to prep their next project and aren’t always available to come into editorial. We’d send them a cut overnight as an email link they could watch; they’d provide notes, and the next day they could jump on ClearView with the editor. They might say, ‘Let’s go to five minutes in,’ and the two of them can work through the notes together in real time. It’s also not uncommon for the director and editor to talk through alt takes and try a different approach to a scene. Once the editor understands what the director has in mind, the director can step away to allow the editor time to assemble the sequence and then jump back on seamlessly once it’s ready.

With producers, getting everyone physically in the same room at the same time is harder than ever. Once we’re in the producer cut, ClearView makes it simple: anyone can join from any location at any step of the process, allowing the most efficient use of everyone’s time.

We don’t use it as heavily with studios unless a note requires reviewing footage in deep detail, but for our internal process—and for problem-solving tough edits or visual effects—it’s invaluable. Once we’re in the edit phase, ClearView is used as often as a keyboard; easily 60% of our day is spent collaborating on ClearView.

Did you also use it with the sound?

Yes. In spotting sessions for sound, music and VFX, sometimes due to time constraints or deadlines, we would all be in separate locations watching the same feed. If a music cue wasn’t working, we’d play the scene once with the temp cue, then again muted in real time so the composer could focus on getting the pacing and rhythm.

Before this technology, we’d have to send files back and forth and manually try to sync playback on both ends—not ideal.

We also used Sohonet extensively for our sound mixes. Mixes were done in New York. Most EPs, including myself, were on the dubbing stage, but when that wasn’t the case, we’d send the ClearView feed out for them to remote into the session. While we can’t control their listening environment, good headphones or proper monitors ensured they were hearing the mix accurately, and ClearView ensured the signal we sent reached them without losing high- or low-end frequencies along the way.

Each room varies, of course—sound reflects differently—but with ClearView I’m confident the feed can be delivered just about anywhere in the world and heard as intended.

Presumably, the Netflix deliverables were HDR 4K?

Yes. Netflix deserves a lot of credit—they care deeply about giving filmmakers a chance to review the final product after it goes through their pipeline and make any changes necessary to ensure the look and sound go to air as intended. We deliver HDR picture and Atmos sound.

It seems like you welcome the challenge of high-end TV production and the creative problem-solving it requires. Would that be fair?

Absolutely. It’s like a bullet train heading toward a track switch. You have to make decisions—do we omit this scene or keep it? You must stay completely in tune with how the story is playing. Once it’s on air, there are no redos and no explanations like, ‘Why didn’t they ever show what the note said?’ Sometimes we wish we had, but in the moment we have to choose what best serves the story and moves it forward.

Capturing data for autonomous vehicles

IEC E-Tech

article here 

Sensors replace human vision in autonomous cars, and the tech is rapidly evolving as data informs R&D teams the world over. But what are the standards?

As vehicles become more autonomous, the amount of data needed to ensure passenger safety has steadily increased. While early debates focused on the number and type of sensors required, attention has now shifted towards how data is processed, stored and leveraged to achieve higher levels of autonomy.

“Autonomous driving is fundamentally a data-driven development process,” says Oussama Ben Moussa, Global Automotive Industry Architect at an international IT and consulting group. “Mastery of data — both physical and synthetic — will determine the pace of innovation and competitiveness in the industry.”

Sensors reach maturity for AVs

A new autonomous taxi van from a major German automotive manufacturer integrates 27 sensing devices into its advanced driver-assistance systems (ADAS). It has been tested to Level 4, which means that the vehicle is capable of operating without human intervention within designated areas.

The ADAS requires precise information about what's happening inside and outside the vehicle. While an array of technology combines to sense the natural environment and detect objects around a vehicle, applications inside the car monitor driver behaviour and machine diagnostics.

“Sensors have reached the required maturity to be able to support most automated driving scenarios, and they are also two to three orders of magnitude better than a human driver,” says Nir Goren, Chief Innovation Officer at an Israel-based developer of light detection and ranging (LiDAR) technologies and perception software. “We have the sensor technology, the range, the resolution and the multi-modalities. It’s not only that sensors are scanning and updating all sides of the vehicle all of the time – which a human driver cannot do – but they also have superhuman vision way beyond what we can see with our eyes.”

The optimum combination of sensors

The market for autonomous driving passenger cars is estimated to generate USD 400 billion within a decade, according to a 2023 report by McKinsey. The market for autonomous driving sensors is expected to skyrocket accordingly, from USD 11.8 billion in 2023 to over USD 40 billion by 2030, with some predictions estimating that 95% of all cars on the road will be connected.

The exact mix of sensors varies by car maker. One manufacturer, for example, has concentrated development on “vision-only” information culled from an array of eight cameras spanning the car’s entire field of view augmented by artificial intelligence (AI).

“Sensors are a strategic choice for original equipment manufacturers (OEMs), impacting both features and safety,” says Ben Moussa. “One well-known autonomous vehicle (AV) manufacturer relies on cameras only, while others insist on active LiDAR sensors – which work by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver – to handle cases such as foggy nights or poorly marked roads.”

A key test case is being able to identify debris, such as a tyre, on the road ahead. “Even during daylight, this is hard to spot from 200 metres away in order to take action (brake or change lanes),” says Goren. “On a dark road, it is beyond the capabilities of human vision and computer vision, but accurate information is clearly necessary for safe driving. This is why many experts are of the view that AVs require LiDAR sensors as well as cameras.”

Other types include ultrasonic sensors, which emit high-frequency sound waves that hit an object and bounce back, allowing the distance between sensor and object to be calculated. Since ultrasonic sensors work best at close range, they tend to be complemented by sensors that are more proficient at detecting objects at a distance, such as LiDAR, or at measuring their velocity, which is what radar does best.
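Both LiDAR and ultrasonic ranging rest on the same time-of-flight arithmetic: distance equals propagation speed times round-trip time, divided by two; only the speed changes. Below is a minimal Python sketch of that calculation, using invented echo times purely for illustration.

```python
# Illustrative time-of-flight ranging: distance = (propagation speed x round-trip time) / 2.
# The speeds are physical constants; the echo times are made-up example values.

SPEED_OF_LIGHT_M_S = 299_792_458   # LiDAR pulses travel at the speed of light
SPEED_OF_SOUND_M_S = 343           # ultrasonic pulses travel at ~343 m/s in air at 20 °C

def time_of_flight_distance_m(round_trip_s: float, speed_m_s: float) -> float:
    """Distance to the target given the round-trip echo time."""
    return speed_m_s * round_trip_s / 2

# A LiDAR return after ~1.33 microseconds corresponds to an object ~200 m ahead,
# the debris-detection range mentioned above.
print(time_of_flight_distance_m(1.334e-6, SPEED_OF_LIGHT_M_S))   # ~200 m

# An ultrasonic echo after ~12 ms corresponds to an obstacle ~2 m away,
# which is why ultrasonic sensors are reserved for close-range tasks such as parking.
print(time_of_flight_distance_m(0.0117, SPEED_OF_SOUND_M_S))     # ~2 m
```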

In addition, inertial measurement units, like gyroscopes and accelerometers, support the overall navigation system. Infrared cameras inside the car record images of the driver’s eyes and blend this with real-time data about road conditions to detect if a driver is paying attention at potentially hazardous moments.

“In one semi-autonomous architecture I’ve worked on, there are 12 cameras (front, corners, rear, mirrors, cockpit for driver monitoring and sometimes thermal cameras), plus more than four radars, one LiDAR and at least eight ultrasonic sensors. Altogether, the minimum number of sensing devices is around 24,” says Ben Moussa.

The five levels of autonomy

Autonomous driving levels are defined by the Society of Automotive Engineers (SAE). Level 1 covers assistive driving systems like adaptive cruise control. Level 2 is where ADAS kicks in: the vehicle can control steering and acceleration/deceleration, for example automatically adjusting the steering to keep in lane, but the driver remains in charge.

“There’s a huge gap between Level 2 and Level 3,” says Goren. “Level 3 is ‘hands off, eyes off’, which means that you can push a button and the car drives, leaving you free to read the newspaper. If anything goes wrong, then it's the responsibility of the car.”

Level 4 applies to passenger vehicles but today is commercialized only in robotaxis and robo-trucks, where the car is capable of full automation, and some vehicles no longer have a steering wheel. Level 4 restricts operation to designated geofenced zones, whereas Level 5 vehicles will theoretically be able to travel anywhere with no human driver required.
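For readers who want the ladder side by side, here is a minimal Python sketch that encodes the levels as paraphrased in this article; it is not the normative SAE J3016 wording.

```python
# SAE driving-automation levels as summarised in this article (paraphrased, not the
# normative SAE J3016 text). Level 0 (no automation) is omitted, as in the article.

SAE_LEVELS = {
    1: "Assistive driving systems such as adaptive cruise control",
    2: "ADAS: vehicle steers and accelerates/brakes (e.g. lane keeping), driver stays in charge",
    3: "'Hands off, eyes off' within limits; responsibility shifts to the car when engaged",
    4: "Full automation inside designated geofenced zones (today's robotaxis and robo-trucks)",
    5: "Full automation anywhere, no human driver required (not yet commercialised)",
}

def requires_human_supervision(level: int) -> bool:
    """Levels 1-2 keep the driver responsible; from Level 3 up the system takes over."""
    return level <= 2

for level, description in SAE_LEVELS.items():
    role = "driver responsible" if requires_human_supervision(level) else "system responsible"
    print(f"Level {level} ({role}): {description}")
```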

Data generation and management

AVs generate vast amounts of data based on the number of sensors and the level of autonomy. Goren calculates that a single high-definition camera generates hundreds of megabytes of data per second, while a single LiDAR sensor generates one gigabyte (GB) of data per second.

In day-to-day operations, however, vehicles store only a fraction of this potential data. For every five hours of driving, only around 30 seconds can be stored, because of the cost of storage and the delay in routing data from the car to the cloud and back again. Vast amounts of data are, however, collected during the engineering and development phase.

Ben Moussa explains, “During R&D, OEMs run fleets across many countries with different geographies and conditions to collect diverse data. This data, estimated at up to 22 terabytes (TB) per vehicle per day, is used to build universal software that will operate across the fleet when vehicles are in service. In the engineering phase, we are storing most of the data because we need to capture all of the specificities about road, weather conditions and so on.”
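A rough back-of-envelope sketch puts those rates in proportion; the sensor counts and the per-camera rate chosen within the quoted range are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope data-volume estimate using the rates quoted above.
# Sensor counts and the per-camera rate are illustrative assumptions.

CAMERA_RATE_GB_S = 0.5     # "hundreds of megabytes per second" per HD camera
LIDAR_RATE_GB_S = 1.0      # ~1 GB per second per LiDAR sensor
NUM_CAMERAS = 12           # illustrative, in line with the architecture described earlier
NUM_LIDARS = 1

per_second_gb = NUM_CAMERAS * CAMERA_RATE_GB_S + NUM_LIDARS * LIDAR_RATE_GB_S
per_hour_tb = per_second_gb * 3600 / 1000

print(f"Raw sensor output: ~{per_second_gb:.0f} GB/s, or ~{per_hour_tb:.1f} TB per hour")
# At ~7 GB/s that is roughly 25 TB of raw output per hour of driving, which makes clear
# why in-service vehicles keep only short, anomaly-triggered clips, and why even R&D
# fleets that retain "most" of their data (about 22 TB per vehicle per day, as quoted
# above) are still filtering or compressing the raw stream heavily.
```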

For some projects, OEMs operate hundreds of cars driving in more than 50 countries and over millions of kilometres to collect data for use in autonomous driving development. In daily operations, powerful chipsets running AI algorithms enable data to be processed onboard the vehicles (at the network edge) with response times in milliseconds. This includes the aggregation and analysis of raw data from multiple sensors (a process known as sensor fusion) to obtain a detailed and probabilistic understanding of the surrounding environment and automate response in real time.
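As a purely illustrative sketch of the fusion idea, the snippet below combines two noisy range estimates by inverse-variance weighting, a standard textbook approach; the article does not specify which algorithms any OEM actually uses, and real stacks fuse far richer data.

```python
# Minimal illustration of sensor fusion: combine two independent, noisy range estimates
# (e.g. radar and LiDAR) by inverse-variance weighting. Real AV stacks fuse far richer
# data (point clouds, images, object tracks); the numbers here are invented.

def fuse_estimates(measurements: list[tuple[float, float]]) -> tuple[float, float]:
    """Each measurement is (value, variance). Returns the fused (value, variance)."""
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * value for (value, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Radar says the obstacle is 48.0 m ahead (variance 4.0 m^2, i.e. noisier);
# LiDAR says 50.0 m (variance 0.25 m^2, i.e. much more precise).
distance, variance = fuse_estimates([(48.0, 4.0), (50.0, 0.25)])
print(f"Fused distance: {distance:.2f} m (variance {variance:.2f} m^2)")
# The fused estimate lands close to the more trustworthy LiDAR reading: ~49.88 m.
```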

Selected data is uploaded to the OEM’s cloud during EV charging or over a Wi-Fi connection. These uploads tend to be triggered by anomalies (e.g. animals crossing the road), and the data is used to train, refine and update the OEM’s universal platform.
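That trigger-and-defer pattern can be sketched very simply; the buffer length, frame rate, event check and helper functions below are assumptions for illustration, not a description of any OEM's actual pipeline.

```python
# Illustrative sketch of anomaly-triggered data capture: keep a short rolling buffer of
# sensor frames, snapshot it when an anomaly is detected, and upload the saved clips
# only when the vehicle is charging or on Wi-Fi. All names and thresholds are invented.

from collections import deque

BUFFER_SECONDS = 30          # mirrors the "~30 seconds" retained per trip in the article
FRAMES_PER_SECOND = 10

rolling_buffer = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)
pending_clips = []

def on_new_frame(frame, anomaly_detected: bool) -> None:
    """Called for every fused sensor frame while driving."""
    rolling_buffer.append(frame)
    if anomaly_detected:
        # Freeze a copy of the last ~30 seconds around the event for later upload.
        pending_clips.append(list(rolling_buffer))

def upload_to_cloud(clip) -> None:
    # Hypothetical stand-in for the OEM's upload path, used for retraining the platform.
    print(f"Uploading clip of {len(clip)} frames for model retraining")

def on_charging_or_wifi() -> None:
    """Deferred upload: only push clips to the cloud when connectivity is cheap."""
    while pending_clips:
        upload_to_cloud(pending_clips.pop())

# Simulate a short drive with one anomaly, then a charging session.
for i in range(100):
    on_new_frame(frame=f"frame-{i}", anomaly_detected=(i == 50))
on_charging_or_wifi()
```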

In order for autonomous driving to scale, a key challenge is to decrease the dependency on physical, real-world data. Development is focusing on distributed or hybrid databases, using virtual information.

“Hybrid means a mix between physical data gathered from sensors in the real environment plus virtual or synthetic data from digital twins,” explains Ben Moussa. “For example, we are building digital twins of cities based on a simulation platform in which we drive virtual cars and collect synthetic data from sensors as if we were driving in the real world. This will accelerate autonomous driving development.”

The value of standards

Automated vehicles require the highest levels of safety and failsafe testing, and these objectives lie at the core of the international standards developed and published by the technical committees of the IEC. IEC TC 47 is the committee developing international standards for semiconductor devices. In addition to its dozens of existing publications, it is working on the first edition of IEC 63551-6, which addresses chip-scale testing of semiconductor devices used in AVs.

When it comes to the safety of cameras for AVs, IEC TC 100 publishes several documents which can prove useful. One of its publications is IEC 63033-1, which specifies a model for generating the surrounding visual image of the drive monitoring system, which creates a composite 360° image from external cameras. This enables the correct positioning of a vehicle in relation to its surroundings, using input from a rear-view monitor for parking assistance as well as blind corner and bird’s eye monitors.

The recently published IEC 60730-2-23 outlines the particular requirements for electrical sensors and electronic sensing elements. As this IEC article points out, the standard is intended to help manufacturers ensure that sensors perform safely, reliably and accurately under normal and abnormal conditions, and that any embedded electronics deliver a dependable output signal. Conditioning circuits that are inseparable from the control on which the sensing element relies to perform its function are evaluated under the requirements of the relevant Part 2 standard and/or IEC 60730-1.

These standards are published by IEC TC 72, the IEC technical committee responsible for automatic electrical controls. Its work supports global harmonization and enhances the safety and performance of devices used in everyday life.

The joint IEC and ISO committee on the Internet of Things (IoT) and digital twin, ISO/IEC JTC 1/SC 41, sets standards ensuring the safety, reliability and compatibility of connected devices across various applications. Another subcommittee of JTC 1, SC 38, prepares standards for cloud computing, including distributed cloud systems or edge computing.

Conformity assessment (CA) is also key for industry stakeholders to be able to trust that the parts used to make AVs follow the appropriate standards. The IEC Quality Assessment System, IECQ, proposes an approved components certification, which is applicable to various electronic components, including sensors that adhere to technical standards or client specifications accepted within the IECQ System.

As the industry continues to grow, standards and CA are increasingly indispensable for it to mature safely and efficiently.

 

Tuesday, 20 January 2026

WBD debuts technology platform for Winter Olympics and beyond

Streaming Media

article here

Warner Bros. Discovery (WBD) is deploying a purpose-built broadcast platform and a large physical presence to deliver fully immersive coverage to audiences for the upcoming Winter Olympic Games.

Scott Young, EVP at WBD Sports Europe, said the company’s approach reflects its ambition to make “every moment of the Games discoverable and viewable” across 47 markets and 21 languages, including on HBO Max.

“This is not a passive production,” Young said. “It’s a fully immersive, hands-on operation. We want every moment the host broadcaster produces, and we curate that across our platforms in a way that best suits each local market.”

New technology platform

Central to WBD’s strategy is the debut of a purpose-built technology platform known as iBuild, which is being deployed for the first time at the International Broadcast Centre (IBC) in Milan.

“A couple of years ago, our technology team decided it was time to build our own platform,” Young explained. “iBuild receives multiple inbound feeds from OBS and allows us to manipulate and distribute them in ways that best suit our linear, streaming, web, app and social platforms.”

Unlike previous Games, where similar systems were rented, iBuild is owned outright by WBD and physically installed at the IBC.

“This is a physical build in Milan,” he said. “It was assembled and tested for months in a warehouse in the UK and contains around 22 kilometres of cabling. It’s a highly advanced, purpose-built piece of technology designed specifically for our business. We’re able to start the manipulation of content on the ground in Milan rather than just feeding signals back from OBS.”

Key suppliers include Riedel intercoms at the front end, an Arista switch network for manipulation, and Appear for onward distribution of signals. The advantages are cost savings and control, not just for Milan Cortina but across a series of major live events.

“We’ve always had to rent this equipment, so when you look forward into our business, knowing we have the Olympic rights until Brisbane 2032, hopefully beyond, and also other events like Roland Garros and the tennis Grand Slams - anywhere we’re on site receiving a large number of feeds and distributing them across multiple markets, this technology becomes a real superpower for us.”

 

On-site presence across multiple clusters

WBD will have around 150 staff on the ground in Milan, managing operations at the IBC, alongside an extensive studio and production footprint spread across several locations.

The broadcaster will operate two major on-site studio hubs. One is a bespoke “snow dome” studio in Livigno, created in partnership with the local city authorities.

“We’re building an igloo that will act as a broadcast centre,” Young said. “It’s perfect for leaning into the culture and energy of the snow sports.”

A second, more traditional multi-storey studio complex has been built in Cortina, featuring three studios capable of serving any market.

“The backdrop is the Cortina mountain range,” he said. “It’s an incredible location, close to the alpine venues and sliding centre, and flexible enough to host everything from stand-ups to interviews — even for external partners like CNN.”

In addition, each major market will continue to present from its home base, reflecting the increasingly remote nature of winter sports broadcasting.

Managing a geographically complex Games

The distributed nature of the Games — spread across multiple clusters — has influenced staffing and logistics decisions.

“Our initial reaction was: don’t move people,” Young said. “The travel time, the weather conditions, and the complexity make that risky. Instead, teams are dedicated to specific locations and sports.”

He said the model mirrors lessons learned from Paris, with every venue connected back to the IBC. “From a technical standpoint, the philosophy is the same — minimise movement, maximise connectivity, and keep people safe.”

Close collaboration with OBS

Young said WBD has worked closely with Olympic Broadcasting Services (OBS) since the earliest planning stages, particularly around connectivity and access to individual feeds.

“We rely on volume,” he said. “We don’t just take a single world feed. We want the individual feeds because our commitment is to broadcast every moment of the Games.”

He added that OBS was fully aligned with that ambition. While OBS continues to innovate with drones, enhanced camera systems and data-rich coverage, Young said WBD’s own innovation is rooted in storytelling.

“OBS delivers world-class coverage,” he said. “Our innovation is what we do with it.”

Storytelling and local relevance

WBD will employ 107 Olympians across its coverage, representing experience from 218 Olympic Games and 109 medals, 41 of which are gold. They include Slovenian skier Tina Maze and Germany’s four-time Olympic luge champion, Natalie Geisenberger. “That’s the real innovation for us,” Young said. “Our philosophy is simple: ‘take me there and make me care.’ Former Olympians can explain what it really means to win — or lose — a medal. That’s how audiences resonate with the athletes.”

WBD will be ingesting all 6,500 hours of content produced by OBS but will use its own experienced editorial teams to tailor content for different markets. Highlights, for instance, will be curated manually by WBD’s editorial teams rather than relying on automated or AI systems. “Our audience is local, not pan-regional, so highlights need to reflect what matters in each market,” Young said.

Social, vertical video and virtual studios

WBD will also place a major emphasis on social and mobile-first content, with around 25 social media staff on site. “The Olympics aren’t a weekend event,” Young said. “Social media keeps the Games front of mind every morning, all day, and into the evening.”

The broadcaster will produce bespoke vertical video content and integrate OBS material where appropriate. “Virtual studios are a key part of our philosophy,” Young said. “Whether it’s AR overlays, green-screen studios, or hybrid physical-virtual sets, nearly every market will use some form of virtual enhancement.

“This is about building for the future,” he said. “Not just these Games, but how we tell Olympic stories for the next decade.”