Wednesday, 21 January 2026

Hanging by a Wire: Nail-biting documentary delicately balances archive with fresh footage

interview and words written for RED

article here

The life-and-death rescue of a group of boys and two adults, set against the ticking clock of a single frayed cable, is the subject of a hair-raising feature documentary premiering at the 2026 Sundance Film Festival.

Hanging by a Wire is the edge-of-the-seat account of a true-life incident from 2023, which began when a group of schoolboys set out on their routine journey to school by cable car across a mountain pass in northwestern Pakistan. When two of the system’s cables snapped, the passengers were left in mortal danger nearly one mile above the ground, with time running out before the third cable, their only remaining lifeline, gave way.

Multi-Emmy-nominated Pakistani filmmaker Mo Naqvi (The Accused: Damned or Devoted; Turning Point: 9/11 and the War on Terror) and producer Bilal Sami (David Blaine – Do Not Attempt) conceived Hanging by a Wire as a real-time thriller built from footage shot moment-by-moment as it unfolded. The hours and hours of growing tension and multiple failed rescue attempts were captured from inside the cable car on the boys’ phones, from drones, a military helicopter, and from the crowd below.

“I was immediately hooked by Mo’s vision,” recalls cinematographer Brendan McGinty (Secrets of the Neanderthals; Welcome to Earth) who was invited by Naqvi to shoot the film. “The archive footage of the event was mind-blowing, a group of schoolboys hanging a mile high from a damaged cable car in the remote mountain region of Battagram in Pakistan. We all understood from the get-go that we needed to hold true to the inherent drama of this real situation, and that we wanted to meet this documentary reality with all of the cinematic flair of a Hollywood action thriller.”

Their visual approach grew in part out of their cinematic aspirations for the film, founded largely on the incredible archive footage of the event but also on a wealth of rich recce material of the region and of the key real-life protagonists. Two shoots followed in 2025: one more documentary-focused at the start of the year and a second, more drama-focused, towards the end.

“The daytime sequences come almost entirely from what people filmed on their phones,” McGinty explains. “Our own shooting focused on interviews and observational material—traditional documentary work—used to understand the characters and their relationships.

“But the most extraordinary part of the rescue happens at night, and there is almost no footage of it. Phones stop working. There is no light. Yet the rescue unfolds over many terrifying stages, nearly failing multiple times, with lives repeatedly at risk. The challenge then becomes an ethical one: how do you tell this story truthfully without inventing anything?”

The solution they arrived at was to involve the real protagonists in the retelling. The boys in the cable car and the rescuers shared authorship of their own story. They physically reenacted what happened, placing themselves in a replica of the cable car (all under studio-safe conditions) and guiding the filmmakers through each moment.

“Nothing was scripted. They weren’t performing lines; they were remembering. That level of truth made the work extraordinarily difficult but also incredibly powerful. As a cinematographer, shooting drama under those constraints—where reality is the benchmark and ethics are paramount—is far harder than shooting fiction. You are constantly checking yourself against the truth of what happened.”

From inception, an IMAX presentation was part of the discussion. It was one of several reasons why McGinty selected RED V-RAPTOR for the production.

“To be honest, I don't think there was any other choice for me. The documentaries I’ve seen in IMAX have been wonderful immersive experiences and our story would be perfect for this. The minute those discussions began I'm thinking about protecting the resolution.”

Importance of Optical Low-Pass Filters

He elaborates, “Shooting more resolution than you actually need is always the way I would want to go. Shooting 8K on the V-RAPTOR for a 4K finish means you can reframe and push into the image, and because the resolution is so high you don't need false detail.

“A lot of lower-end cameras which barely reach 4K employ automatic sharpening techniques so the picture bristles with more detail than is actually there. It's false detail. When you're shooting 8K you really don't need any of that. Shooting 8K with RED feels very much like how my eye sees the world. It's a very soft, naturalistic rendition of texture.”

Texture is important to McGinty who shot this film in very high contrast environments. “There are brittle trees and bushes in hard light and shadows. You end up with footage that really tests the fine detail. If your camera doesn’t have Optical Low-Pass Filters (OLPFs) it can produce false, high-frequency detail, which is actually a bit of a nightmare to manage in post. You can end up with these very shimmering images where you’re not looking at real detail but at a computer's version of what detail might look like.”

Protecting the digital negative

McGinty began his career nearly 30 years ago shooting shorts and independent features on film, which is why he prefers to shoot raw. “It’s very important to me that I protect the digital negative. Raw is the only way to shoot digital and among high-end cine cameras RED owns compressed raw.”

However, shooting uncompressed raw for docu-style footage is hardly practical, especially at 8K and in a remote region. “You would record terabytes and terabytes of data,” he says. “The files are too big for field work. So, one of the real geniuses of RED from the beginning is REDCODE RAW. I want a raw image to go back to which always protects and holds highlights and where I know there's more in the shadows. So, shooting on location in high contrast situations and in low light with R3D was a big deal for me.”

McGinty shot a lot of handheld and appreciates the ergonomics of the camera. “V-RAPTOR does everything I need it to do in a very simple way. It has very high frame rates if I need them. It has a very simple interface which is ideal for documentary work. You don't want a camera with infinite complexity and menus. You just want something to help you be responsive to a situation.”

Majesty of VistaVision

The final reason behind his choice was V-RAPTOR’s VV sensor, which frames for an aspect ratio of 17:9. “We’re in this exquisite mountainous region of North Pakistan, a mountain-climbing Mecca, so I knew that to capture some of that monumentality and majesty I wanted to be shooting VistaVision. Also, in terms of telling our story I felt the absolute heroism of everyone concerned. These are real heroes, people who will step into danger to save the lives of others. I thought an appropriate canvas for us to work on would be large format VistaVision. The RAPTOR just ticked all those boxes for me.”

He paired the V-RAPTOR with Angenieux EZ zooms, the 22-60mm and 45-135mm, occasionally dipping into some macro and fast-aperture prime options. “I particularly like the close focus on the EZs, and at the 135mm end of the lens this produces very intimate moments and details. With these on the V-RAPTOR we could move from the wide-angle majesty of the rugged Pakistani mountain landscapes to the close-up detail of an eye, without losing the moment to a lens change.”

For stabilized work, McGinty used the DJI Ronin 4D 8K, specifically because it includes an OLPF. This was fitted with a cine-modded set of Nikon Nikkor AIS primes. They also used GoPros, drones and some Sony cameras, including FX3s and FX6s, mostly for second-unit work.

He was aided by AC and long-time collaborator Charlie Perera, and by Taseer Ali, who operated B-camera on the first shoot.

Pushing the picture in post

McGinty’s camera choice was further vindicated in post at Molinare, London, working with colorist Jake Davies, where he found he was able to push the RED raw files further than he thought possible.

“I would happily shoot 2000 ISO on V-RAPTOR, maybe 3200, and know that I'm still going to have a comfortable picture with not too much noise. But what was interesting in the grade was how much noise there was in both the 4D footage and a lot of the Sony footage.

“Jake and I found we were able to regularly push the RED to higher ISOs than I was getting on the Sony and the 4D with—importantly—less noise. The moment we began to push the pictures to try to lift the 4D footage, we couldn't. It was pretty maxed out. I was shooting 3200 ISO to ProRes RAW on the 4D on some of the night stuff but the picture fell apart quite rapidly. There wasn't anywhere near as much there as there was in the R3D files.

“What shooting RED raw means when you’re shooting in low light is the lack of noise in highlights and shadows, and the ability to push the picture way beyond my exposure on the day without it falling apart. Our ability to grade the R3D footage was head and shoulders above any of the other camera formats that were in there.”

He and Davies explored film emulation and lens emulation ideas as a way to tie the footage together. “There is a vast archive at the heart of the film, along with second-unit work and extensive RED material. A unifying aesthetic, such as film emulation, offers an effective way to bring it all together.”

Editor William Grayburn meticulously reconstructed the timeline from multilingual interviews, archive, and new footage. That process revealed gaps in the visual storytelling, which guided the second production shoot. “Without that editorial clarity, the film simply wouldn’t exist in its current form,” McGinty says.

“Hand on heart, this is a phenomenal film. It’s gripping, emotionally honest, and visually powerful. It has the pull of a thriller but the integrity of a documentary. I’m incredibly proud to have been part of it.”

The documentary is produced by Naqvi alongside EverWonder Studio and Mindhouse Productions and premieres in Park City, Utah, on January 22.

Producer Sean Sforza on making The Beast in Me

interview and words for Sohonet

article here

There’s a potential murderer on Long Island in psychological thriller The Beast in Me, as new neighbors played by Claire Danes and Matthew Rhys face off in a deadly game of cat and mouse. Netflix’s latest hit miniseries is created, written and executive produced by Gabe Rotter (The X-Files), executive produced and written by Daniel Pearle, and led by showrunner Howard Gordon (24; Homeland). Executive producers also include Jodie Foster and Conan O'Brien. Production ran from September 2024 through February 2025 and the program debuted on Netflix in November to critical acclaim. Running the show’s extensive postproduction operation was producer Sean Sforza, whose credits include Julia (HBO Max), Empire (Fox) and Bull (CBS).

While Sforza says his goal when entering the industry some 25 years ago was to work on features, he instantly thrived on the unique pressures of fast-turnaround episodic television.

“They were shooting two features in the same building where we were making a series I was on. Over the course of ten months, we completed 24 episodes, and they were still only a quarter of the way through their movie. I remember thinking, ‘I don’t know if I have the patience for this.’

“I realized I love the pace of television. We don’t have enough time to endlessly go over things. We have to look immediately at what’s important to the story. Some of my favorite showrunners ask, ‘Are we fooling ourselves by being in love with our work or is it actually moving the story forward?’ That question sticks with me.

“Howard Gordon and I have worked together on multiple projects. I’m very fortunate: I’m in post, but I’ve been able to contribute to so many aspects—main titles, visual effects, color, and even time in the edit room. I get to have my hands involved in much of the process from start to finish.”

Sohonet: Regarding The Beast in Me, what were your responsibilities, and what were the key components—people, facilities, technology—that you brought on board?

Sean Sforza: A bit of everything. I set up workflow pipelines. Production was in New Jersey, and the rest of editorial and post were in New York. We had four editors working on blocks of two episodes each, one of whom was located in Los Angeles. We also had to connect our composers (Sean Callery, along with Sara Barone and Tim Callobre). During production, Howard was constantly traveling—in Canada for Accused, then later Spain and Cuba for another project—so setting up a workflow that kept everyone connected was crucial.

We coordinated dailies from set in New Jersey to be processed and edited in New York and Los Angeles. Our cinematographer, Lyle Vincent, and director, Antonio Campos (who was also one of the executive producers), originally envisioned the series being shot on film, but when budgetary constraints arose, we faced the challenge of transitioning to digital without compromising the intended look and depth of film. ClearView played a crucial role in sharing files of our look tests, including daylight, night, interior, and exterior shots, to maintain the feel and look of our world. However, this can be tricky because these large files get very compressed in the editorial process, and ClearView helped us have the confidence that everyone in their different locations was viewing the same look and color.

I also hire the editorial team—visual effects, sound, post staff and vendors that best suit the project. It’s a very collaborative effort between the creatives and the studio and network to make sure we have all our players on board.

Sean Sforza, Post Producer

How did you use ClearView to connect your production staff and executives on The Beast in Me?

I’d used ClearView countless times on several projects, including on Julia, a wonderful series about Julia Child that debuted in 2022. On The Beast in Me, ClearView was an essential part of our team.

When dailies come in, the high-definition files go onto a viewing platform holding terabytes of data. We compress the files into a smaller QuickTime format to ensure easy accessibility for all team members, giving each department a thorough review—allowing us to verify that all necessary material was captured during the previous day's work and to maintain continuity when returning to a scene that may be shot up to several weeks later. But those files don’t show the true look or sound.

ClearView was an essential tool. If a problem came up on the day, or the next morning when production had to verify something, we could all jump on immediately—the director on set during lunch, the editor and EP in their office, sometimes even in their cars. Howard was traveling constantly and would jump on from his car or at the airport. ClearView let us all see the same high-quality image at the same time and exhale: ‘Okay, that color looks right, we have the coverage we need to complete the sequence. We’re good.’

Being able to send a link and suddenly be ‘in the same room’ despite being scattered everywhere—that’s huge in modern filmmaking.

Do you calibrate devices so everyone sees the same thing?

Yes. iPad Pros are the best—absolutely worth the premium. Their resolution is reliable, and I trust them more than laptops, whose screens deteriorate over time. Editors work on calibrated monitors in their offices. Most of us review on our laptops for the size of the screen and the convenience of having them on hand, but when in doubt, we switch to iPads to double-check the look.

You mentioned using ClearView during production. What about during review and approval in editorial?

We couldn't do what we do without it. For example, if a director shot one half of a scene and needed to return days later, or if a shot that was never intended to be VFX now had to include the Manhattan skyline, the editor and director could meet on ClearView to review assembled footage and confirm which pieces were needed to complete the scene.

Once they’ve wrapped one show, directors often move on to prep their next project and aren’t always available to come into editorial. We’d send them a cut overnight as a link in their email that they could watch; they’d provide notes, and the next day they could jump on ClearView with the editor. They might say, ‘Let’s go to five minutes in,’ and the two can work through the notes together in real time. It’s also not uncommon for the director and editor to talk through alt takes and try a different approach to a scene. Once the editor understands what the director has in mind, the director can step away to allow the editor time to assemble the sequence and jump back on seamlessly once they are ready.

With producers, getting everyone physically in the same room at the same time is harder than ever. Once we’re in the producer cut, ClearView makes it simple: anyone can join from any location at any step of the process, allowing the most efficient use of everyone's time.

We don’t use it as heavily with studios unless a note requires deep detail in reviewing footage, but for our internal process—and problem-solving tough edits or visual effects—it’s invaluable. Once we’re in the edit phase, ClearView is used as often as a keyboard—easily 60% of our day is spent collaborating on ClearView.

Did you also use it with the sound?

Yes. In spotting sessions for sound, music, and VFX, sometimes due to time constraints or deadlines we would all be in separate locations watching the same feed. If a music cue wasn’t working, we’d play the scene once with the temp cue, then again muted in real time so the composer could focus on getting the pacing and rhythm.

Before this technology, we’d have to send files back and forth and manually try to sync playback on both ends—not ideal.

We also used Sohonet extensively for our sound mixes. Mixes were done in New York. Most EPs, including myself, were on the dubbing stage, but when that wasn't the case we’d send the ClearView feed out for them to remote into the session. While we can’t control their listening environment, good headphones or proper monitors ensured they were hearing the mix accurately, and ClearView ensured the signal we sent reached them without losing high- or low-end frequencies along the way.

Each room varies, of course—sound reflects differently—but with ClearView I’m confident that the feed delivered to just about anywhere in the world is being heard as intended.

Presumably, the Netflix deliverables were HDR 4K?

Yes. Netflix deserves a lot of credit—they care deeply about giving filmmakers a chance to review the final product after it goes through their pipeline and make any changes necessary to ensure the look and sound goes to air as intended. We deliver HDR picture and Atmos sound.

It seems like you welcome the challenge of high-end TV production and the creative problem-solving it requires. Would that be fair?

Absolutely. It’s like a bullet train heading toward a track switch. You have to make decisions—do we omit this scene or keep it? You must stay completely in tune with how the story is playing. Once it’s on air, there are no redos and no explanations like, ‘Why didn’t they ever show what the note said?’ Sometimes we wish we had, but in the moment we have to choose what best serves the story and moves it forward.

Capturing data for autonomous vehicles

IEC E-Tech

article here 

Sensors replace human vision in autonomous cars, and the tech is rapidly evolving as data informs R&D teams the world over. But what are the standards?

As vehicles become more autonomous, the amount of data needed to ensure passenger safety has steadily increased. While early debates focused on the number and type of sensors required, attention has now shifted towards how data is processed, stored and leveraged to achieve higher levels of autonomy.

“Autonomous driving is fundamentally a data-driven development process,” says Oussama Ben Moussa, Global Automotive Industry Architect at an international IT and consulting group. “Mastery of data — both physical and synthetic — will determine the pace of innovation and competitiveness in the industry.”

Sensors reach maturity for AVs

A new autonomous taxi van from a major German automotive manufacturer integrates 27 sensing devices into its advanced driver-assistance systems (ADAS). It has been tested to Level 4, which means that the vehicle is capable of operating without human intervention within designated areas.

The ADAS requires precise information about what's happening inside and outside the vehicle. While an array of technology combines to sense the natural environment and detect objects around a vehicle, applications inside the car monitor driver behaviour and machine diagnostics.

“Sensors have reached the required maturity to be able to support most automated driving scenarios, and they are also two to three orders of magnitude better than a human driver,” says Nir Goren, Chief Innovation Officer at an Israel-based developer of light detection and ranging (LiDAR) technologies and perception software. “We have the sensor technology, the range, the resolution and the multi-modalities. It’s not only that sensors are scanning and updating all sides of the vehicle all of the time – which a human driver cannot do – but they also have superhuman vision way beyond what we can see with our eyes.”

The optimum combination of sensors

The market for autonomous driving passenger cars is estimated to generate USD 400 billion within a decade, according to a 2023 report by McKinsey. The market for autonomous driving sensors is expected to skyrocket accordingly, from USD 11,8 billion in 2023 to over USD 40 billion by 2030, with some predictions estimating that 95% of all cars on the road will be connected.

The exact mix of sensors varies by car maker. One manufacturer, for example, has concentrated development on “vision-only” information culled from an array of eight cameras spanning the car’s entire field of view augmented by artificial intelligence (AI).

“Sensors are a strategic choice for original equipment manufacturers (OEMs), impacting both features and safety,” says Ben Moussa. “One well-known autonomous vehicle (AV) manufacturer relies on cameras only, while others insist on active LiDAR sensors – which work by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver – to handle cases such as foggy nights or poorly marked roads.”

A key test case is being able to identify debris, such as a tyre, on the road ahead. “Even during daylight, this is hard to spot from 200 metres away in order to take action (brake or change lanes),” says Goren. “On a dark road, it is beyond the capabilities of human vision and computer vision, but accurate information is clearly necessary for safe driving. This is why many experts are of the view that AVs require LiDAR sensors as well as cameras.”

Other types include ultrasonic sensors, which emit high-frequency sound waves that hit an object and bounce back to the sensor, calculating the distance between sensor and object. Since ultrasonic sensors work best at close range, they tend to be complemented by sensors which are more proficient at detecting objects at a distance, such as LiDAR, and their velocity, which is what radars do best.
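
Both LiDAR and ultrasonic ranging rest on the same time-of-flight principle: distance is half the propagation speed multiplied by the round-trip time of the pulse. The short sketch below is purely illustrative; the function name and echo times are invented for the example and are not taken from any sensor SDK.

```python
# Illustrative time-of-flight ranging (not from any specific sensor SDK).
# Both LiDAR and ultrasonic sensors estimate range the same way:
#   distance = (propagation speed * round-trip time) / 2

SPEED_OF_LIGHT_M_S = 3.0e8   # LiDAR uses pulses of laser light
SPEED_OF_SOUND_M_S = 343.0   # ultrasonic sensors use sound in air (~20 degrees C)

def range_from_echo(round_trip_s: float, speed_m_s: float) -> float:
    """Distance to the reflecting object in metres."""
    return speed_m_s * round_trip_s / 2.0

# A LiDAR pulse returning after ~1.33 microseconds puts the debris ~200 m ahead...
print(f"{range_from_echo(1.33e-6, SPEED_OF_LIGHT_M_S):.0f} m")   # ~200 m
# ...while an ultrasonic echo after ~11.7 milliseconds puts a kerb ~2 m away.
print(f"{range_from_echo(11.7e-3, SPEED_OF_SOUND_M_S):.1f} m")   # ~2.0 m
```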

In addition, inertial measurement units, like gyroscopes and accelerometers, support the overall navigation system. Infrared cameras inside the car record images of the driver’s eyes and blend this with real-time data about road conditions to detect if a driver is paying attention at potentially hazardous moments.

“In one semi-autonomous architecture I’ve worked on, there are 12 cameras (front, corners, rear, mirrors, cockpit for driver monitoring and sometimes thermal cameras), plus more than four radars, one LiDAR and at least eight ultrasonic sensors. Altogether, the minimum number of sensing devices is around 24,” says Ben Moussa.

The five levels of autonomy

Autonomous driving levels are defined by the Society of Automotive Engineers (SAE). Level 1 qualifies vehicles for assistive driving systems like adaptive cruise control. Level 2 is where ADAS kicks in: the vehicle can control steering and accelerating/decelerating or automatically move the steering wheel to keep in lane, but the driver remains in charge.

“There’s a huge gap between Level 2 and Level 3,” says Goren. “Level 3 is ‘hands off, eyes off’, which means that you can push a button and the car drives, leaving you free to read the newspaper. If anything goes wrong, then it's the responsibility of the car.”

Level 4 applies to passenger vehicles but today is commercialized only in robotaxis and robo-trucks, where the car is capable of full automation, and some vehicles no longer have a steering wheel. Level 4 restricts operation to designated geofenced zones, whereas Level 5 vehicles will theoretically be able to travel anywhere with no human driver required.

Data generation and management

AVs generate vast amounts of data based on the number of sensors and the level of autonomy. Goren calculates that a single high-definition camera generates hundreds of megabytes of data per second, while a single LiDAR sensor generates one gigabyte (GB) of data per second.

In day-to-day operations, however, vehicles can store only a fraction of this potential data. For every five hours of driving, only around 30 seconds can be stored, because of the cost of storage and the delay in routing data from the car to the cloud and back again. Vast amounts of data are, however, collected during the engineering and development phase.

Ben Moussa explains, “During R&D, OEMs run fleets across many countries with different geographies and conditions to collect diverse data. This data, estimated at up to 22 terabytes (TB) per vehicle per day, is used to build universal software that will operate across the fleet when vehicles are in service. In the engineering phase, we are storing most of the data because we need to capture all of the specificities about road, weather conditions and so on.”
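
To put those figures in perspective, the rough back-of-envelope sketch below combines the per-sensor rates quoted above. The sensor counts, per-camera rate and driving hours are assumptions chosen only for illustration, not a real vehicle specification; the point is simply that raw output dwarfs what can be stored, which is why data is filtered and selected even during R&D.

```python
# Back-of-envelope raw sensor output for a hypothetical R&D test vehicle.
# Sensor counts and per-sensor rates are illustrative assumptions, not a vehicle spec.

CAMERA_MB_PER_S = 300      # "hundreds of megabytes per second" per HD camera (assumed)
LIDAR_GB_PER_S = 1.0       # ~1 GB per second per LiDAR sensor (figure quoted above)
NUM_CAMERAS = 12           # camera count in the range Ben Moussa describes
NUM_LIDARS = 1

def raw_rate_gb_per_s() -> float:
    """Combined raw output of cameras and LiDAR in GB per second."""
    return NUM_CAMERAS * CAMERA_MB_PER_S / 1000 + NUM_LIDARS * LIDAR_GB_PER_S

def daily_raw_volume_tb(driving_hours: float = 8.0) -> float:
    """Raw data generated over a day of test driving, in terabytes."""
    return raw_rate_gb_per_s() * driving_hours * 3600 / 1000

print(f"{raw_rate_gb_per_s():.1f} GB/s of raw sensor data")        # ~4.6 GB/s
print(f"~{daily_raw_volume_tb():.0f} TB of raw data per 8 h day")  # far above the ~22 TB actually stored
```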

For some projects, OEMs operate hundreds of cars driving in more than 50 countries and over millions of kilometres to collect data for use in autonomous driving development. In daily operations, powerful chipsets running AI algorithms enable data to be processed onboard the vehicles (at the network edge) with response times in milliseconds. This includes the aggregation and analysis of raw data from multiple sensors (a process known as sensor fusion) to obtain a detailed and probabilistic understanding of the surrounding environment and automate response in real time.

Select data is uploaded to the OEM’s cloud during EV charging or Wi-Fi connection. This data tends to be triggered by anomalies (e.g. animals crossing the road) and used to train, refine and update the OEM’s universal platform.

In order for autonomous driving to scale, a key challenge is to decrease the dependency on physical, real-world data. Development is focusing on distributed or hybrid databases, using virtual information.

“Hybrid means a mix between physical data gathered from sensors in the real environment plus virtual or synthetic data from digital twins,” explains Ben Moussa. “For example, we are building digital twins of cities based on a simulation platform in which we drive virtual cars and collect synthetic data from sensors as if we were driving in the real world. This will accelerate autonomous driving development.”

The value of standards

Automated vehicles require the highest levels of safety and failsafe testing, and these objectives lie at the core of the international standards developed and published by the technical committees of the IEC. IEC TC 47 is the committee developing international standards for semiconductor devices. Alongside its dozens of existing publications, it is working on the first edition of IEC 63551-6, which addresses chip-scale testing of semiconductor devices used in AVs.

When it comes to the safety of cameras for AVs, IEC TC 100 publishes several documents which can prove useful. One of its publications is IEC 63033-1, which specifies a model for generating the surrounding visual image of the drive monitoring system, which creates a composite 360° image from external cameras. This enables the correct positioning of a vehicle in relation to its surroundings, using input from a rear-view monitor for parking assistance as well as blind corner and bird’s eye monitors.

The recently published IEC 60730-2-23 outlines the particular requirements for electrical sensors and electronic sensing elements. As is pointed out in this IEC article, this is intended to help manufacturers ensure that sensors perform safely, reliably and accurately under normal and abnormal conditions and that any embedded electronics deliver a dependable output signal. Conditioning circuits that are inseparable from the control, and on which the sensing element relies to perform its function, are evaluated under the requirements of the relevant Part 2 standard and/or IEC 60730-1.

These standards are published by IEC TC 72, the IEC technical committee responsible for automatic electrical controls. Its work supports global harmonization and enhances the safety and performance of devices used in everyday life.

The joint IEC and ISO committee on the Internet of Things (IoT) and digital twin, ISO/IEC JTC 1/SC 41, sets standards ensuring the safety, reliability and compatibility of connected devices across various applications. Another subcommittee of JTC 1, SC 38, prepares standards for cloud computing, including distributed cloud systems or edge computing.

Conformity assessment (CA) is also key for industry stakeholders to be able to trust that the parts used to make AVs follow the appropriate standards. The IEC Quality Assessment System, IECQ, proposes an approved components certification, which is applicable to various electronic components, including sensors that adhere to technical standards or client specifications accepted within the IECQ System.

As the industry continues to grow, standards and CA are increasingly indispensable for it to mature safely and efficiently.

 

Tuesday, 20 January 2026

WBD debuts technology platform for Winter Olympics and beyond

Streaming Media

article here

Warner Bros. Discovery (WBD) is deploying a purpose-built broadcast platform and a large physical presence to deliver fully immersive coverage to audiences for the upcoming Winter Olympic Games.

Scott Young, EVP at WBD Sports Europe, said the company’s approach reflects its ambition to make “every moment of the Games discoverable and viewable” across 47 markets and 21 languages, including on HBO Max.

“This is not a passive production,” Young said. “It’s a fully immersive, hands-on operation. We want every moment the host broadcaster produces, and we curate that across our platforms in a way that best suits each local market.”

New technology platform

Central to WBD’s strategy is the debut of a purpose-built technology platform known as iBuild, which is being deployed for the first time at the International Broadcast Centre (IBC) in Milan.

“A couple of years ago, our technology team decided it was time to build our own platform,” Young explained. “iBuild receives multiple inbound feeds from OBS and allows us to manipulate and distribute them in ways that best suit our linear, streaming, web, app and social platforms.”

Unlike previous Games, where similar systems were rented, iBuild is owned outright by WBD and physically installed at the IBC.

“This is a physical build in Milan,” he said. “It was assembled and tested for months in a warehouse in the UK and contains around 22 kilometres of cabling. It’s a highly advanced, purpose-built piece of technology designed specifically for our business. We’re able to start the manipulation of content on the ground in Milan rather than just feeding signals back from OBS.”

Key suppliers include Riedel intercoms at the front end, an Arista switch network for manipulation, and Appear for onward distribution of signals. The advantages are cost savings and control, not just for Milan Cortina but across successive major live events.

“We’ve always had to rent this equipment so when you look forward into our business, knowing we have the Olympic rights until Brisbane 2032, hopefully beyond, and then also for other events like Roland Garros and tennis Grand Slams - anywhere where we’re on site receiving a large number of feeds and distributing them across multiple markets, this technology becomes a real superpower for us.”

 

On-site presence across multiple clusters

WBD will have around 150 staff on the ground in Milan, managing operations at the IBC, alongside an extensive studio and production footprint spread across several locations.

The broadcaster will operate two major on-site studio hubs. One is a bespoke “snow dome” studio in Livigno, created in partnership with the local city authorities.

“We’re building an igloo that will act as a broadcast centre,” Young said. “It’s perfect for leaning into the culture and energy of the snow sports.”

A second, more traditional multi-storey studio complex has been built in Cortina, featuring three studios capable of serving any market.

“The backdrop is the Cortina mountain range,” he said. “It’s an incredible location, close to the alpine venues and sliding centre, and flexible enough to host everything from stand-ups to interviews — even for external partners like CNN.”

In addition, each major market will continue to present from its home base, reflecting the increasingly remote nature of winter sports broadcasting.

Managing a geographically complex Games

The distributed nature of the Games — spread across multiple clusters — has influenced staffing and logistics decisions.

“Our initial reaction was: don’t move people,” Young said. “The travel time, the weather conditions, and the complexity make that risky. Instead, teams are dedicated to specific locations and sports.”

He said the model mirrors lessons learned from Paris, with every venue connected back to the IBC. “From a technical standpoint, the philosophy is the same — minimise movement, maximise connectivity, and keep people safe.”

Close collaboration with OBS

Young said WBD has worked closely with Olympic Broadcasting Services (OBS) since the earliest planning stages, particularly around connectivity and access to individual feeds.

“We rely on volume,” he said. “We don’t just take a single world feed. We want the individual feeds because our commitment is to broadcast every moment of the Games.”

He added that OBS was fully aligned with that ambition. While OBS continues to innovate with drones, enhanced camera systems and data-rich coverage, Young said WBD’s own innovation is rooted in storytelling.

“OBS delivers world-class coverage,” he said. “Our innovation is what we do with it.”

Storytelling and local relevance

WBD will employ 107 Olympians across its coverage, representing experience from 218 Olympic Games and 109 medals, 41 of which are gold. They include Slovenian skier Tina Maze and Germany’s four-time Olympic luge champion, Natalie Geisenberger. “That’s the real innovation for us,” Young said. “Our philosophy is simple: ‘take me there and make me care.’ Former Olympians can explain what it really means to win — or lose — a medal. That’s how audiences resonate with the athletes.”

WBD will be ingesting all 6500 hours of content produced by OBS but will use its own experienced editorial teams to tailor content for different markets. Highlights, for instance, will be curated manually by WBD’s editorial teams rather than relying on automated or AI systems. “Our audience is local, not pan-regional, so highlights need to reflect what matters in each market,” Young said.

Social, vertical video and virtual studios

WBD will also place a major emphasis on social and mobile-first content, with around 25 social media staff on site. “The Olympics aren’t a weekend event,” Young said. “Social media keeps the Games front of mind every morning, all day, and into the evening.”

The broadcaster will produce bespoke vertical video content and integrate OBS material where appropriate. “Virtual studios are a key part of our philosophy,” Young said. “Whether it’s AR overlays, green-screen studios, or hybrid physical-virtual sets, nearly every market will use some form of virtual enhancement.

“This is about building for the future,” he said. “Not just these Games, but how we tell Olympic stories for the next decade.”

 


Monday, 19 January 2026

Milano Cortina 2026: Winter Olympics Host Broadcaster Rules Out Remote Production

Streaming Media

article here

Is the Olympic Committee's host broadcaster warming to the notion of transitioning to virtual or REMI production for this winter's fast-approaching Milan Cortina Games?

Yiannis Exarchos, CEO of Olympic Broadcasting Services (OBS), insists that virtualizing broadcast and streaming workflows on such a concentrated scale is simply not "realistic."

“Being realistic, I don't think that we can say that we will stop having an International Broadcasting Center (IBC) and everything can be remote or virtualized,” he said.

Speaking as part of a round table event three weeks out from the opening ceremony of the XXV Winter Games in Northern Italy, Exarchos explained that the sheer scale of an Olympics, and the current technological limitations on the part of rights-holding broadcasters, meant that eliminating a physical IBC was not likely soon, or even desirable.

“We need to understand the sheer physical realities of an Olympic Games,” he said. “In Milan Cortina we will have 6000 broadcasters present. Why? Because they want to be close to the action and this is entirely legitimate. Most of them need to do that. We should remember, not all broadcasters are at the same level of development as NBC [which has a huge remote operation for the Olympics out of its home base in Connecticut]. Even if we imagine that all broadcasters could receive all the signals remotely at home, they simply don't have the capacity in their existing facilities to deal with it all.”
For this Winter Games OBS will produce 1000 hours of competition coverage and another 5000 hours of additional support material for rights holders to tailor their coverage across multiple platforms. In the Summer Olympics, that volume doubles.

“Even a big TV network would only produce this amount of content over three years,” he said. “No broadcaster is set up to be handling this volume in such a short period. This is why remote working in the case of the Olympics is only half the solution. The ultimate solution is virtualized broadcasting, meaning that you actually do not really need a proper physical facility to transport media.”

He contrasted the unique Olympic production to that of the next largest global sports event, the FIFA World Cup. “You might have four football matches every day but can do that completely remotely, including with commentators and reporters, but here you have, at times, 27 events going on at the same time.”
At the Paris Games, over 24,000 media were present, more than double the number of athletes.

“It is too much, but it's not like we can make things to have only one thousand media representatives,” he said. “Actually, this would mean that there is no interest in the Games. What we don't want is that they are forced to bring people here to do something that they could be doing on the other side of the world. We do not want them to come if their role can be easily replicated remotely.”

The Virtual OB van (VOB) model introduced in Tokyo and then Paris continues to advance with a virtualised, private COTS cloud-based infrastructure. The VOB delivers over 50% savings in compound space, up to 50% lower power use and reduced costs by replacing bespoke OB vans with industry-standard cabins, enabling remote production for curling, sliding sports, and speed skating.
Technology is not the goal – it is the enabler

“If you had a completely virtualized Olympics, this would not be an Olympics, because things that are very human are incredibly important for what we do and for our values and for the creation of emotion. Even if technology allowed for a complete virtualization of broadcasting there needs to be a measured approach.

“Technology is not the goal – it is the enabler,” Exarchos insisted. “Every innovation we adopt is driven by the purpose of elevating the Olympic experience for audiences around the world. For Milano Cortina 2026, our focus is not on a single innovation, but on the ability to integrate and scale multiple new technologies to Olympic level. This approach will enable us to deliver the most immersive and dynamic Olympic Winter Games broadcast to date.”
A decade ago, at the Rio Olympics, OBS began to switch out hardware broadcast technologies for IP systems running on COTS. It began this journey long before most broadcasters and is now reaping the dividends: in Milan the IBC is 25% smaller than at Beijing 2022, with a 33% reduction in power.

In Milan OBS is piloting a fully cloud-based Master Control Room and virtualized technical operations centers. These will also be used at the Youth Games in Dakar later this year, where they are claimed to deliver 75% less rack space, 65% lower power use and a 50% faster IBC rollout.

For LA28, further gains of around 40% in space and 30% in power are expected, with earlier system testing ahead of venue readiness.
 
Cloud expanded with Alibaba

Media functions are no longer confined to the IBC, but extend across both the IBC and competition venues, enabling more efficient workflows for OBS and Media Rights Holders (MRHs). This enables MRHs to manage their own content selection via cloud-based switching, reducing reliance on on-site infrastructure. The hybrid model, combining private and public cloud, reduces the size of the IBC and the associated power consumption, contributing to lower carbon costs. This approach has become the industry standard for global content distribution, particularly for UHD and HDR live transmission (the master production format for MC26 is 4K HDR).
Working with tech partners

Samsung is another tech giant at Milan Cortina, backing live feeds to mobile phones for the first time.

“The good thing is that, because of the importance of the Olympics, a lot of the companies in the industry feel that it's beneficial for them to be investing in the Olympic Games to move themselves forward in adopting new technologies,” Exarchos said.

Internet traffic out of the IBC during the Games, feeding broadcasters across the world, will represent 70 percent of the city of Milan's normal internet connectivity.
“The capacity that is used for the Games would allow you to download a full 4K feature film in half a second,” he said. “This is serious infrastructure and takes time to be developed.”

Key to local connectivity, especially in the remote and under-connected mountain regions, is Telecom Italia. “Because of the distributed geography of Milano Cortina you will be losing essentially a working day moving from one location to another, and also there are limitations in terms of accommodation availability. The [3 venues outside of Milan] are small places, not massive, industrialized skiing areas, and this is part of their charm and their beauty. From day one, and we insisted on that, we have a system that enables broadcasters to do anything from everywhere, to be able to work very, very easily and cover other venues, making remote connections and remote interviews. This capacity did not exist in the mountains until now. This technological capacity is a very significant legacy of the Games, delivered by Telecom Italia.”
New production innovation at MC26

Drones entered the Olympics as a production tool at Sochi. Now FPV drones will be used across a range of outdoor winter sports, delivering dynamic first-person perspectives that follow athletes through competition courses. For the first time at a Winter Games, FPV drones will also be used in sliding sports, showcasing the speed and intensity of these events.

Exarchos said, “This new generation of technology allows for a very safe use of drones which go very close to the action and offer you a sense of being part of the competition. In many of the sports you will see images that we have not seen before.”

For the first time, and in collaboration with Alibaba, OBS is introducing real-time 360-degree replays, a combination of multi-camera replay systems and stroboscopic analysis delivering multi-angle views, freeze frames, and slow-motion sequences that showcase skill, technique, and moments of precision.

Breaking new ground at the Olympic Winter Games, an advanced tracking system for curling will visualise each stone’s path, speed, rotation and timing in real time. Subtle trajectory graphics and live data reveal strategic elements as play unfolds, complemented by a new overhead rail camera and ice-level views that enhance storytelling and viewer understanding.
OBS is testing an Automatic Media Description (AMD) platform to help teams manage the huge volume of live video from the Games. AI breaks broadcasts into searchable clips, suggests shot descriptions and keywords, and helps users quickly find key moments or highlights, making storytelling faster and easier.

For the first time at an Olympic Winter Games, automated highlights will transform the journey from moment to media, delivering ready-to-publish clips to every platform within minutes, “all while preserving the highest editorial standards” it is claimed. For comparison, more than 100,000 highlights were generated during the Paris 2024 Games.

Raquel Rozados, OBS Head of Broadcaster Services, says, “Our focus is enabling MRHs to deliver the Games anytime, anywhere, on any device. For Milano Cortina 2026, we’re scaling personalised highlights, expanding behind-the-scenes content, offering immersive VR experiences, and using flexible cloud distribution workflows, along with a richer offering of short-form social media clips, providing MRHs with greater agility to tailor their coverage for diverse audiences.”
Olympic GPT

MC26 will see the public debut of an AI-driven search of content on Olympics.com. This solution will deliver, for the first time, real-time results during the Games and will have the capacity to answer questions about the current state of events. AI-powered article summaries on Olympics.com will also give fans an overview at the top of select stories.

“As with everything in the Olympics the challenge is accuracy, quality and control,” said Exarchos. “Most LLMs rely on the information that's out there on the internet, but we know that this information can be biased. Even leaving aside the ethical consideration, if you want to train an AI model on certain sports like football, tennis or basketball you can, but you will not find this amount of data for every single Olympic sport. This is where it was very important, in collaboration with Alibaba and with the technology team of the IOC and OBS, that we brought in the experience, the terminology, the videos, and specific information that relates to the Olympics.

“We have the biggest repository of correct, quality-controlled data around the Olympics in the world. Every single statistic for every single competition and athlete exists there and it's correct. This is what is feeding this system, and not what randomly may have been mentioned somewhere on the internet. We are also learning out of it and we are humble. There may be mistakes but we are pretty confident about what is being delivered. We have been testing intensely and it seems to be working very well.”

With winter sports federations, the IOC is launching an analytics data project for MC26, where data will be exchanged. The IOC will gain deeper insights into digital sports consumption, trends and behaviours from partners’ data. Federations will access advanced data visualizations that combine sources to support their data-driven decision-making.

United in solidarity

The Winter Games begin on 4 February and run until 22 February. “I must admit on a personal level, that following the news every day makes me want this opening ceremony to come as early as possible,” Exarchos said. “Because I think these games can really help us recalibrate a little bit how we feel about the world and how we feel about the relations between people. What is uniquely part of the Olympics is its ability to bring people together. I feel that it's one of these times in human history where we need it so much.”

Wednesday, 14 January 2026

Is ultra-low power the way forward?

IEC e-Tech

article here

This emerging field is particularly energy-efficient and promising. While some standards for it already exist, more will be required.

The number of devices connected to the internet is projected to reach 40 billion worldwide as we move into the era of the Internet of Things (IoT) and, even more so, what some pundits call the Intelligence of Things. Forecasts suggest this could put a strain on global energy use, accounting for as much as 25% of all energy consumption as soon as 2030. Currently, connected devices do not compare to power-hungry sectors such as transport or buildings, at an estimated five to seven percent of total energy consumption. But with the huge predicted growth, energy requirements are expected to escalate.

This increased demand was anticipated by some researchers theorizing about the IoT a couple of decades ago. They foresaw that the exponential rise of the IoT and the concomitant increase in energy used to power billions of microelectronic devices would require a new approach. One of the results of this new approach is the burgeoning field of ultra-low power microcontroller, or ULP MCU, systems.

“It became apparent that for the IoT to be realized, we couldn’t continue to use traditional and in some cases energy-intensive approaches for electronic devices,” says IEC expert Leszek A. Majewski, who chairs the technical committee which prepares standards for printed electronics, IEC TC 119. He is also a lecturer in electrical and electronic engineering at the University of Manchester and has a PhD in the development of low-voltage organic field-effect transistors. “Although lithium-ion batteries and lithium-polymer batteries are currently being developed with smaller form factors and greater energy storage capacity for gadgets like smart home sensors, the increase in demand for new microelectronic applications in industrial monitoring, healthcare and space exploration requires approaches based on new materials and structures where batteries are not necessarily the answer,” he confirms.

ULP MCU systems fit in with this new approach. They are deemed essential to facilitate the growth of the IoT and the success of new applications which require extended operation without frequent charging or battery replacement and are often in discrete form factors.

A booming market and a hot topic

The value of the global ULP MCU market is forecast to hit USD 15,27 billion in 2030, driven by the rising adoption of devices such as consumer wearables, medical monitors and IoT sensors. “It’s a very hot topic and also a pretty wide field in terms of the new materials and techniques being explored,” adds Majewski. “In order for microelectronic devices to work, they need to be on pretty much all the time. Depending on the function, always-on devices would have to draw a minimum amount of power to stay on as long as possible. You don't want to have to change them in the field. You want to minimize maintenance.”

Consequently, ULP MCU devices need to be operated with very low supply voltages of 1 V or lower and consume minimal power, typically measured in milli- or microwatts. This significantly decreases power consumption amid rising energy costs.
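
As a rough illustration of what operating in the milli- or microwatt range implies, the sketch below applies P = V × I to a hypothetical always-on sensor node running from a small coin cell. All of the figures (supply voltage, current draw, battery capacity) are assumptions for the example rather than specifications of any real microcontroller.

```python
# What "milli- or microwatt" operation means in practice for a hypothetical always-on node.
# Voltage, current and battery figures are assumptions, not specs of a real device.

SUPPLY_V = 0.9            # ULP parts often run at or below 1 V
CURRENT_UA = 50.0         # assumed steady draw of 50 microamps
BATTERY_MAH = 220         # a small coin cell in the ~220 mAh class

power_uw = SUPPLY_V * CURRENT_UA             # P = V * I  -> 45 microwatts
runtime_h = BATTERY_MAH * 1000 / CURRENT_UA  # capacity (uAh) / draw (uA)

print(f"~{power_uw:.0f} uW continuous draw")
print(f"~{runtime_h / 24:.0f} days on a single coin cell")  # roughly half a year
```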

Engineering challenges for ULP design

Since a lower-power device cannot generate, store or transmit vast amounts of data, the principal limitation is that its functionality needs to be simple. According to Majewski, depending on use, the design of a ULP device will often have to balance size and reliability against energy efficiency and performance.

“The range of the device is limited since the signals cannot be sent far, but you accept that this is an inherent property of a low-power device that you design for. You do not expect it to deliver a huge output. So, it must be something really basic with one or two parameters. Many of these devices are for new use cases that we would never imagine without low power.”

Ingestible devices are one of the applications

One example is the ingestible electrochemical sensor, which can be swallowed to monitor health and detect disease as it passes through the body. Electronics can now also be directly integrated into moulded plastic objects and devices. In-mold electronics (IME) is driven by the automotive industry because it “significantly reduces the cost, weight, waste and energy required to produce vehicle interior parts,” according to the group which has standardized its development.

IMEs can include all the surface-mounted devices included in traditional electronics to increase functionality, such as sensors, LEDs and microcontrollers. “You could integrate an array of micro low-power devices, like a matrix of transistor-based sensors, which can increase an application’s sophistication and capacity but will also increase power consumption,” Majewski says.

One avenue of enquiry proving particularly beneficial to wearables is that of e-textiles. Consisting of woven networks of flexible fibres, e-textiles can be readily deformed into stretchable, flexible form factors. This makes them ideal for use in wearables like smart watches, which require motion tracking of the human body's physical or mechanical movements.

Rather than using hard substances like silicon to make transistors, organic soft matter is an emerging field of research. “Skin-like soft electronics offer conformal, stable interfaces with biological tissues – including skin, heart, brain, muscle and gut – enabling health monitoring, disease diagnosis and closed-loop therapeutic interventions,” researchers explain.

Energy harvesting works for ULP devices

Energy harvesting is the process of capturing and converting energy from the environment into electrical power, in principle as a perpetual and sustainable power source. It is a particularly efficient source of energy, as it can even be derived from body movement or heat. “There are a variety of methods to achieve it, and each one will convert the source power into usable energy in a different way,” explains Majewski. “Energy can be harvested from radio waves via a radio frequency (RF) antenna or from heat via an infrared (IR) optical rectenna, for example.”

Thermal sensors on vehicles can harvest the radiant heat from the road surface. Other sensors on moving vehicles could obtain power from the motion energy “if placed in high-vibration locations, such as near the wheels or engine components.”

Similarly, energy for wearable devices can be powered by the kinetic movement of the body (piezoelectric) or from body heat or body fluids. Research shows that body-powered kinetic motion could add 10 mW to the primary power source for ULP MCUs.

“In the design of e-textiles you would take account of energy generation from a variety of mechanisms, including thermoelectric generators that harvest body heat, or you could use materials that incorporate solar cells. The use of piezoelectric mechanisms shows particular potential for wearables, since just a basic squeeze of the material will generate energy,” Majewski explains.

“However, all of these technologies are currently limited in terms of the amount of energy they are able to generate and in the consistency of energy generation. Consequently, energy harvesting for ULP MCUs is currently for use in limited applications, such as augmenting [extending] battery life,” he adds.
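
A simple calculation shows why harvesting currently augments rather than replaces a battery. In the hypothetical sketch below, the wearable's average draw, the harvested power and the fraction of the day the wearer is in motion are all assumed figures chosen only to illustrate the arithmetic.

```python
# How an intermittent 10 mW harvested contribution "augments" battery life.
# Device draw, harvested power and activity fraction are illustrative assumptions.

DEVICE_DRAW_MW = 15.0    # assumed average draw of a wearable
HARVEST_MW = 10.0        # kinetic harvesting while the wearer is moving (figure cited above)
ACTIVE_FRACTION = 0.2    # assume the wearer is in motion ~20% of the day

def battery_life_gain() -> float:
    """Factor by which battery life is extended by the harvested contribution."""
    net_draw_mw = DEVICE_DRAW_MW - HARVEST_MW * ACTIVE_FRACTION  # average net draw from the battery
    return DEVICE_DRAW_MW / net_draw_mw

print(f"~{(battery_life_gain() - 1) * 100:.0f}% longer between charges")  # ~15%
```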

Wireless charging and data transmission

A number of wireless solutions for power delivery are gaining market adoption. These include the Wireless Power Consortium’s Qi standard (in its Qi2 and Qi2 25W versions) for wireless charging of mobile, handheld electronic devices, and NFC Wireless Charging (NFC WLC), which supports lower-power 1 W applications over a distance of 2 cm. Backed by the NFC Forum, the latest NFC wireless charging specification supports the Qi induction charging platform, which delivers up to 15 W over a distance of 4 cm.

“Devices can be powered using near field communication (NFC) via an RF type of antenna,” explains Majewski. “The device remains dormant until it is activated; after a short period of time, it turns itself off again.” Technologies like Bluetooth Low Energy (BLE), Wi-Fi 6 and Zigbee are already designed to minimize radio-on time and therefore keep power needs to a minimum.
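
As a rough illustration of why duty cycling matters, the sketch below estimates the average current and battery life of a radio that wakes briefly once a second. The current, timing and battery figures are generic assumptions, not the characteristics of any particular BLE, Zigbee or NFC chipset.

```python
# Rough average-current estimate for a duty-cycled low-power radio.
# All current, timing and battery figures are illustrative assumptions.

SLEEP_UA = 1.5        # deep-sleep current in microamps
ACTIVE_MA = 6.0       # current (mA) while the radio is transmitting/receiving
EVENT_MS = 3.0        # radio-on time per advertising/polling event
INTERVAL_MS = 1000.0  # one event per second

duty = EVENT_MS / INTERVAL_MS
# Convert active current to microamps so both terms share the same unit
avg_ua = duty * ACTIVE_MA * 1000 + (1 - duty) * SLEEP_UA
print(f"duty cycle: {duty:.1%}, average current: {avg_ua:.1f} uA")

# A hypothetical 230 mAh coin cell at that average current
print(f"estimated lifetime: {230_000 / avg_ua / 24 / 365:.1f} years")
```

The arithmetic shows why keeping the radio off for all but a few milliseconds per second turns a milliamp-class radio into a microamp-class average load.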

Existing IEC Standards and new ones required

The IEC is also paving the way for this new technology and has embarked on work in this field within several of its technical committees. TC 119 standardizes materials, processes, equipment and products for printed electronics. The TC is working on the first edition of IEC 62899-202-13, which contains measures for the conductive layer in IME and tests of printed thin-film transistor-based pressure sensors.

Printed electronics technology is not only low-cost but sustainable, says Majewski. “It is generating a lot of interest in manufacturing circles. TC 119 is therefore a key source of safety and performance standards for such technologies.”

IEC TC 124 publications relate to wearable applications, and there is ongoing work on the standardization of low-power electronics, according to Majewski. Further, IEC TS 60747-19-2 provides a guideline for the specifications of a low-power sensor allowing autonomous power supply operation, as well as for the specifications of the power supply used to drive smart sensors in a smart sensing unit. It is published by IEC TC 47 which, among other things, standardizes discrete semiconductors and sensors. The committee also publishes the IEC 62830-1 series, which includes methods for evaluating the performance of vibration-based piezoelectric energy harvesting devices.

Standards for piezoelectric technology are also developed by IEC TC 49, which addresses piezoelectric, dielectric and electrostatic devices. They include IEC TS 61994-5, which gives the terms and definitions for sensors intended for manufacturing piezoelectric elements, cells, modules and systems.

ISO/IEC JTC 1/SC 41 is a joint subcommittee established between ISO and the IEC to standardize all aspects relating to the IoT, and therefore offers guidelines on the testing of IoT devices, including networks of sensors. “Standardization at the IEC tends to focus on test methods so that we can ensure a particular device behaves as intended,” Majewski says. “But we need further research to consider the low-power options of new materials including, for example, human body tissues,” he concludes.

Energy – especially ultra-low-power forms of it – can indeed be sourced anywhere and everywhere. This opens up new opportunities for standardization bodies, and new options for meeting our future energy requirements in the race to reach net-zero targets.

 


The potential of AI for urban intelligence

IEC E-tech

article here

Artificial intelligence (AI) can supercharge urban intelligence with the help of digital twins and sensors, as long as international standards are part of the equation. The citiverse is on the horizon!

Rome won the title of Smart City of 2025 at the Smart City Expo World Congress in Barcelona, which took place in November last year. The award recognized a data-driven initiative that improves governance and public services, with the Italian capital rolling out 1 800 Internet of Things (IoT) sensors and more than 2 000 cameras connected by Wi-Fi, 250 km of fibre-optic cables and 5G networks across its public squares and busiest metro stations.

The concept of “smart cities” emerged in the early 1970s, when civic authorities began to collect and analyze data about services to help them make decisions about planning and policy. As the volume and capability of digital sensors and communications networks have advanced, so too has the scope of smart cities. Projects now range from managing more efficient transport and traffic systems to security surveillance, faster emergency response, enhanced medical care and sustainable energy use.

AI and digital twins

The rapid development of AI is viewed as a means of accelerating progress towards those goals. “Cities globally are sprinting to adopt AI,” says consultancy Deloitte in its report AI Powered Cities of the Future. AI is seen “as a driver of greater productivity and efficiency and, ultimately, economic growth and competitiveness”.

A key part of smart city development is digital twin technology – a digital replica of the city – which makes it possible to see how decisions affect urban life before they are deployed in the real world. This is now evolving into the citiverse, an AI-driven virtual city which is expected to be more efficient, more interactive and more transparent than a smart city, ultimately supporting more participatory governance.

Sensors are everywhere

Fortune Business Insights projects that the IoT sensor market will surpass USD 4 trillion by 2032, growing at a remarkable 24% a year. IoT sensors can be added to digital devices like cameras and to non-digital street furniture such as waste bins, measuring how full garbage containers are, while air quality meters measure pollution and light levels.

There is clear overlap between the markets for the IoT and for smart cities. The value of the latter is estimated to top USD 3,7 trillion globally by 2030, growing at a compound annual growth rate (CAGR) of 29,4%. Rapid urbanization, which according to some figures will see 68% of the world’s population living in cities by 2050, is putting governments and municipalities “under immense pressure” to improve infrastructure and adopt sustainable and efficient city planning solutions.

Such solutions include smart LED lights with motion detectors to save electricity; interoperable smart home technologies to reduce carbon emissions and electrotechnical waste; more accessible and personalized healthcare; and connected transport systems which monitor traffic conditions in real time with safety and traffic management benefits.

Virtual simulation

The holistic data-centric and real-time view of a city is increasingly being visualized virtually. By creating 3D digital twins of cities, “planners can simulate and test the impact of new developments, identify potential issues, optimize city services and proactively create policies to avoid future impact,” explain analysts at Capgemini.

Singapore is widely held to have launched the first virtual model for smarter urban planning. Dozens of other cities have followed, including Helsinki, with its virtual rendering of the city’s environment, operations and changing circumstances, and Rotterdam, which debuted its Open Urban Platform last January and in which not just the city authority but also companies, schools and residents are encouraged to participate and “exchange all kinds of data”.

Big data requires AI and machine learning

In order for projects like these to scale, smart cities must be able to manage the “unprecedented surge in data generation and flows” from diverse datasets of public and private infrastructure. Urban Data Platforms – described as the “central nervous system” of a smart city – perform the role of aggregating, processing and analyzing data from across the urban area, captured by sensors. The application of artificial intelligence and machine learning (AI/ML) to this platform layer (and to the IoT sensors themselves) is claimed to supercharge performance by enabling cities to “move beyond reactive management to proactive, data-driven governance”.

For example, AI/ML algorithms can analyze historical and real-time data to predict traffic congestion or potential crime hotspots. Beyond prediction, AI tools can recommend actions, such as optimizing waste collection routes, and identify data anomalies that might signal water leaks so that preventative measures can be taken before the situation worsens.
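
A minimal sketch of that kind of anomaly detection is shown below: a rolling statistical check over a stream of water-flow readings flags values that jump far outside the recent norm. The sensor values, window size and threshold are invented for illustration and are far simpler than the AI/ML models a real urban data platform would use.

```python
# Toy anomaly detector for a stream of water-flow readings (litres/min).
# It flags values that deviate sharply from the recent rolling average,
# the kind of signal a platform might treat as a possible leak.
from collections import deque
from statistics import mean, stdev

def leak_alerts(readings, window=12, threshold=3.0):
    """Yield (index, value) for readings that look anomalous."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        recent.append(value)

# Hypothetical overnight readings with a sudden jump that might indicate a burst pipe
night_flow = [4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.0, 4.1, 3.8, 4.0, 12.5]
print(list(leak_alerts(night_flow)))  # -> [(12, 12.5)]
```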

According to one report, it is the application of AI that “turns connected devices into smart devices”. Calling this shift “urban intelligence”, a developer of AI-powered sensors says that cities in Australia suffered “persistent problems” in managing urban environments before the introduction of AI. Issues related to “manual data collection, unstructured data and limited feedback loops with residents” led to delayed or inaccurate insights and missed opportunities.

Devices and software augmented with AI and fed into an urban platform can unify disparate data sets, spot patterns and apply analysis rapidly to deliver insights or automate responses. One result, given by the Australian developer, is the automatic optimization of traffic signals to ease congestion, which in turn leads to “smoother commutes, reduced emissions and increased public transport reliability”.
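
The kind of automated response the developer describes can be pictured, in grossly simplified form, as a feedback rule that gives more green time to the busiest approach at a junction. The sketch below is a toy allocation, not an actual traffic-engineering algorithm, and every number in it is hypothetical.

```python
# Toy signal-timing adjustment: allocate green time in proportion to
# detected queue lengths, within a fixed cycle and minimum green time.

MIN_GREEN_S, CYCLE_S = 10, 90  # assumed minimum green and total cycle length

def split_green(queues):
    """queues: mapping of approach name -> detected queue length (vehicles)."""
    spare = CYCLE_S - MIN_GREEN_S * len(queues)
    total = sum(queues.values()) or 1  # avoid division by zero when all queues are empty
    return {a: MIN_GREEN_S + round(spare * q / total) for a, q in queues.items()}

# Example: a long queue detected on the northern approach gets the longest green phase
print(split_green({"north": 18, "south": 6, "east": 3, "west": 3}))
# -> {'north': 40, 'south': 20, 'east': 15, 'west': 15}
```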

Challenges and standards

The caveat is that while there is strong interest (from 96% of city mayors in one survey) in using AI to augment smart city infrastructure, today there is still little practical implementation of the technology. Questions remain around the impact of AI on city services and its ethical, legal and social implications.

That is where international standards can play a role: their use and implementation can reassure decision-makers that these aspects have been taken into consideration. Standardization in AI is carried out by the joint technical committee formed between the IEC and ISO, ISO/IEC JTC 1/SC 42, which considers the entire ecosystem in which AI systems are developed and deployed. The committee develops horizontal standards that provide a foundation for creating AI solutions across diverse industries. It is increasingly addressing societal and ethical issues, such as how to avoid bias or how to protect human rights.

The three main standardization organizations, the IEC, ISO and ITU, have just published a joint statement on the governance of AI at the International AI Standards Summit, which took place in Seoul in December 2025. The statement sets out a joint vision and commitments from the three organizations for how international standards will support the development and deployment of trustworthy AI systems that benefit society, drive innovation and uphold fundamental rights.

From smart cities to citiverse

The first meeting of an initiative designed to shape the future of cities in relation to AI-powered virtual worlds convened in November 2025. Led by ITU and the United Nations International Computing Centre (UNICC), the meeting touched upon the concept of the citiverse, which is partly viewed as the ideal future of the smart city.

“The citiverse can be seen as the next digital frontier for cities,” explains Cristina Bueti, the Counsellor on Smart Sustainable Cities, Citiverse & Virtual Worlds at ITU. “It goes beyond the smart city by creating a trusted, immersive digital environment where cities can leverage enabling technologies such as artificial intelligence, virtual reality and extended reality.

“In the citiverse, AI acts as the backbone, allowing cities to offer citizens new immersive experiences and to enhance the city itself through digital layers that are interactive, predictive and participatory. This creates a shared digital space where leaders can test decisions in advance, improve services and design better solutions before implementation. It allows cities to anticipate drawbacks and understand the real impact of policies from the citizens’ perspective”, she adds.

Interoperability is the key requirement

A shared citiverse is being built under the European Union’s Digital Decade Programme 2030 where different cities can interoperate and collaborate. “Interoperability is the first challenge to address,” Bueti says. “Today, many cities cannot even share data with their neighbouring cities because they rely on vendor-specific platforms. A shared citiverse would allow cities to share data, procurement solutions and services, achieving efficiency gains in cost, time and service delivery – all with the goal of better serving citizens.”

Learning from neighbouring cities and sharing data improves the quality of decision-making by allowing cities to base policies on broader datasets rather than isolated local information. “In Europe, this aligns well with an existing human-centric framework that emphasizes accessibility, interoperability, safety and the protection of fundamental rights,” Bueti stresses.

Fourteen European countries have declared their participation in the project, including the Netherlands, where Rotterdam recently promoted the head of its digital city programme to become the world’s first Chief Citiverse Officer. As Bueti notes, the citiverse is a nascent concept and its success depends on the building blocks of regional data, service interoperability and metrics to evaluate progress. It also depends on standards for interoperability and trustworthiness: the more widely standards are used, the less likely it is that solutions will be proprietary and unable to talk to each other.

In December 2023, the Standardization Landscape for CitiVerse was published by European standardization experts, listing around 350 standards and other deliverables. It includes standards such as ISO/IEC 30141 for an IoT reference architecture, and the future ISO/IEC 3018 for a digital twin architecture, both published by ISO/IEC JTC 1/SC 41, as well as IEC 63205 for a smart cities reference architecture, published by the IEC Systems Committee for Smart Cities.

In addition, the IEC has recently joined forces with ISO to create a new joint technical committee, ISO/IEC JTC 4, which prepares standards for smart and sustainable cities and communities. The committee aims to build on the existing work of the IEC and ISO to foster the development of standards in fields such as sustainability, community infrastructure, digitalization, and more.

“We strongly value the long-standing collaboration between international standards organizations such as the IEC and ISO. Our goal is not to reinvent the wheel, but to build on the excellent work already done. Frameworks being developed will help cities think from the start about the technical structures they need to adopt the citiverse effectively,” Bueti concludes.