Thursday, 12 March 2026

How sports broadcasters are tackling scale, monetisation and engagement

SVG Europe

article here

The pressure to deliver more live and on-demand content across multiple platforms – often simultaneously – and with exceptional operational efficiency has become a complex, evolving but, let’s face it, exciting driver of the media industry. The most successful sports broadcasters and streamers will be adept at using technology to complete their mission.

SVG Europe took the temperature of the business from sports tech solutions providers Synamedia, Levira Media Services and Amagi. 

“Sport is producing more moments than ever, but most of it still never reaches a screen,” says Martti Kinkar, CEO, Levira Media Services.  “The core challenge is making production scalable, reducing cost and complexity so coverage isn’t limited to premium events.” 

Then comes the platform reality: rights holders need to publish across linear, streaming, social and direct-to-consumer channels, and that can quickly become operationally heavy. On top of that, monetisation is still evolving. So what’s the right mix of paywalls, advertising, sponsorship and free distribution to build audiences?

Kinkar also points to a “capability gap”, explaining that teams often have deep broadcast experience, “but not always the digital, multi-platform skills needed to operate efficiently”.

Delivering quality at scale

Delivering sport at scale may come down to two main challenges: growing fan engagement and monetising those audiences effectively.

Simon Brydon, head of sport (video network), Synamedia, says: “At the most fundamental level, attracting viewers depends on delivering high-quality streams with low latency and reliable performance, even during events with millions of concurrent viewers. Any issues (buffering, delays or poor picture quality) can quickly undermine the viewing experience.”

But proficient technical delivery alone is no longer enough. Streaming platforms are increasingly expected to enhance the live experience with additional features that deepen fan engagement. These include rapid highlight creation, live-to-social clipping, cloud DVR functionality, and multi-view capabilities.

“These features help replicate and extend the traditional broadcast experience while giving digital audiences greater flexibility and control,” Brydon notes.

Additional complexities which OTT platforms and broadcasters must address include regional rights management, rapid highlight creation, and the need for real-time monetisation. 

“At the same time, revenue models are shifting towards streaming and CTV, requiring more data-driven, measurable advertising approaches,” says Srividhya Srinivasan, CTO, Amagi. “Balancing scale, speed, cost control, and monetisation across fragmented platforms is the core challenge.”

Monetisation pressures

Traditional television viewing is still largely advertising-supported, but streaming services have struggled to replicate the same level of ad monetisation at scale.

“With subscription fatigue increasing, platforms need to maximise advertising revenue without harming the viewer experience,” says Brydon.

He points to dynamic ad insertion (DAI) as a way to enable targeted advertising within live streams. However, delivering ads reliably across hundreds of thousands, let alone millions, of concurrent viewers presents significant technical challenges.

“Many platforms are unable to fully utilise their available ad inventory during large-scale live events, leaving potential revenue untapped. To address this, broadcasters are exploring approaches such as server-side ad insertion (SSAI), AI-driven optimisation, and ad pre-fetching to improve reliability and efficiency.”

New formats are emerging. Some broadcasters (ITV’s Six Nations 2026 coverage for example) have begun introducing ‘squeeze-back’ ads during natural breaks in play, allowing advertising to run alongside the live feed without fully interrupting the viewing experience.

Around three quarters of TV viewing is now ad-supported (as reported by Nielsen in Q3 2025, driven by the American football season), which underlines the role of SSAI and “advanced CTV ad technologies” in enabling personalised, measurable ad experiences. “This is making digital sports distribution commercially viable at scale,” he says.

This is where cloud-based broadcast infrastructure is playing a major role. “Migrating playout, packaging, and distribution to the cloud enables broadcasters to scale dynamically around major events without heavy fixed infrastructure costs,” Srinivasan says. “AI-driven workflows are improving metadata enrichment, contextual ad targeting, and quality control.”

Amagi advocates the adoption of “unified, cloud-based workflows” rather than operating separate silos for broadcast and streaming. 

Explains Srinivasan: “A single operating layer that supports live production, channel origination, distribution, and monetisation allows sports broadcasters to launch linear and pop-up channels, distribute and monetise content seamlessly across platforms, and create near-real-time highlights. 

“Combined with advanced analytics and targeted CTV advertising, this approach helps attract streaming-first audiences, increase engagement through personalisation, and unlock incremental revenue from FAST channels and new digital ad formats.”

CDN strategy and infrastructure

Another key challenge lies in content delivery infrastructure. While public content delivery networks (CDNs) support many streaming services, high-profile live sports events can push them to their limits. As a result, some platforms are adopting hybrid delivery models, combining public CDNs with private networks deployed deeper within ISP infrastructure. 

Explains Brydon: “These private CDNs can provide more reliable performance during peak demand and help ensure consistent video quality.”

Encoding efficiency is also critical. Brydon says: “More efficient compression enables higher-quality streams at lower bitrates, improving viewer experience while reducing distribution costs.”

Adapting to changing audiences

Audience expectations continue to evolve, particularly among younger viewers. Many expect richer digital experiences, including real-time statistics, personalised feeds, vertical video, and highlights optimised for social platforms.

“Rights holders must balance these approaches carefully,” warns Brydon. “Short-form content can help attract new audiences, but excessive free distribution risks undermining the value of premium live broadcasts.”

He points out that long-form storytelling, such as behind-the-scenes documentaries, has also proven effective at building deeper fan engagement.

“In this rapidly evolving environment, broadcasters need flexible, scalable technology and partners capable of continuous innovation to keep pace with changing audience behaviour and monetisation models.”

Driving engagement 

AI-driven production tools, remote production workflows and IP-based infrastructure are having a significant impact, and for Kinkar this is positive.

“Automated camera systems and AI-assisted production are reducing cost and complexity, enabling coverage of events that would previously have been economically unviable,” he says. “IP connectivity allows signals to be routed and managed more flexibly, removing the need for heavy, on-site infrastructure.  AI is also improving content discoverability and workflow efficiency. Together, these technologies are lowering barriers to entry and making scalable production a reality.”

He urges organisations to adopt a “multi-platform mindset”, distributing across social, OTT, direct streaming and partnerships simultaneously. “Even lower-tier or grassroots content has value when packaged correctly. This broader exposure drives engagement, attracts sponsors, supports talent development, and creates new commercial models.”

Nonetheless, Kinkar feels that many broadcasters still default to traditional production methodology.  He calls for greater openness to new workflows and partnerships. 

“Embracing innovation, experimenting with new models, and bringing in digitally native expertise will be essential to unlock the full potential of modern sports content distribution.”

Amagi’s Srinivasan agrees that the sports broadcast industry needs stronger interoperability between platforms and better cross-platform measurement standards. 

He says: “Simplifying multi-platform sports delivery while improving monetisation efficiency will define the next phase of growth.”


 


Oscars 2026: Contender breakdown for cinematography, editing and VFX

IBC

article here

Angst and destruction are central recurring themes of the 98th Academy Awards, with multiple nominees using fire as a symbol of humanity’s fatal disregard for the planet.

The Lost Bus is a high-octane docudrama from Paul Greengrass about the wildfire that destroyed Paradise in Northern California in 2018, serendipitously releasing months after the fire that ravaged the LA metro area. The film calls out failed maintenance by electrical companies, as well as drawing attention to changing climate conditions, as typified by Fire Chief Martinez (Yul Vazquez) who states that ‘every year the fires get bigger, and there's more of them. We're being damn fools; that's the truth.’

In melancholic frontier drama Train Dreams, the central character’s family is wiped out by wildfire, and he is tortured by the guilt of being able to do nothing about it.

In Avatar: Fire and Ash the clue is in the title. Varang, the leader of the Ash clan, teams up with the military industrial complex embodied by Colonel Quaritch who says, “If you want to spread your fire across the world, you need me.”

Other Oscar nominees including Frankenstein and Sinners feature scenes in which fire is used to purge and destroy. If you want to look for it, F1: The Movie has a pivotal fireball crash. Even Marty Supreme has an explosive moment involving a gas station and a dog.

Greengrass has said: “The enormity of a wildfire speaks to what we all feel, which is that our world is burning. Everywhere you look our world is burning, and people know it and it troubles us all.”

Best Cinematography

Remarkably, One Battle After Another is Michael Bauman’s first film as solo Director of Photography (DoP), yet he has already collected the 2026 BAFTA Award for Best Cinematography and the American Society of Cinematographers (ASC) 2026 Theatrical Feature Film award for his work on it. Paul Thomas Anderson’s former gaffer was previously co-credited as cinematographer with the writer-director on Phantom Thread and Licorice Pizza before shooting 1.5 million feet of VistaVision over seven months on this sprawling counter-culture comedy.

“These cameras are meant to sit on a tripod for an establishing shot. They’re not designed to be strapped to a car, put on a Steadicam, or dragged through practical locations,” Bauman says. “Their noise is also loud. It’s basically like having a lawnmower on set so we had to design and build a blimp for the camera just to make it usable. That alone was huge.”

Each thousand-foot mag could only shoot about four minutes of footage. “There was all this machinery and process we learned. It was a completely unique experience. I’d absolutely do it again—because it would be easier next time.”

Director Clint Bentley’s Train Dreams wears its debt to visionary director Terrence Malick on its sleeve. This founding fable of America mixes naturalism with magical realism and was almost entirely shot on location across Washington State using available light and weather conditions. DP Adolpho Veloso earns a first Oscar nod for his immersive photography which often frames characters below centre, or with corner framing, to get a sense of their scale in comparison to their environment. One dream-like sequence was shot on a Volume stage with slow shutter speeds while one of two fire scenes was shot practically in a burnt forest.

Danish DP Dan Laustsen would be a worthy winner for his supreme command of colour and light amid the sumptuous production design of Frankenstein. He has been nominated for Guillermo del Toro projects twice before: The Shape of Water and Nightmare Alley. Although destined for Netflix, Laustsen gives the story a cinematic look, composing wide angles to capture icy vistas and grandiose gothic interiors, and shooting on Alexa 65 to produce an image close to a 70mm print. For all the detail in sets and costume, this version of Shelley’s classic succeeds in portraying humanity in the monster.

“One of the scenes I like very much is the first time the creature sits with his father in the lab, and his father is tenderly shaving him,” Laustsen says. “It’s a simple scene with the sunrise reflecting in a broken mirror. You feel the chemistry between the two actors, and you can also see that daddy doesn't understand anything about kids.”

At cinematography festival Camerimage, Autumn Durald Arkapaw ASC revealed that Sinners starts with a different sequence than was scripted. “It was only a few days before schedule when [director Ryan Coogler] decided he wanted to turn that into an IMAX sequence. It's a heavy dialogue scene and we’re shooting IMAX, which is not a sync sound camera, so it presented technical challenges.” It's now one of her favourite scenes of any she’s shot: “I can't see it not being in IMAX, so it was a beautiful decision that he made.”

Technically, this was the first movie to be shot simultaneously on Ultra Panavision 70, incorporating 65mm in its widest aspect ratio, and in IMAX, at the tallest ratio for 65mm.

The standout scene is a hallucinatory dance that transcends its 1930s setting by birthing rock‘n’roll, electric guitar and hip hop from Southern blues. Dubbed the ‘Surreal Montage’, Durald Arkapaw designed the shot in three parts with hidden transitions because the IMAX cameras would spool through 1000ft of film in little more than two minutes.

The kinetic narrative of Marty Supreme may be driven by the intoxicating charm of its title character but it’s the pantheon of indelible supporting characters which brings the film to life.

“There are more than a hundred featured characters in the film — every day on set different actors arrived with these unforgettable faces,” says Darius Khondji, previously nominated for Evita (1996) and Bardo (2022). “The faces look like something out of a Honoré Daumier painting — [and] were incredible to photograph.”

Reuniting with director Josh Safdie after collaborating on Uncut Gems, Khondji shot Marty Supreme on 35mm film using anamorphic lenses and referencing the work of 1950s street photographers and turn of the century painters.

“Every director has their own way of doing things, but Josh has an obsessive, intuitive way of making movies,” says Khondji. “Stylistically speaking, he knows you usually don’t capture wide-angle shots using long lenses — but the rules don’t matter to him.”

Best Editing

Norwegian drama Sentimental Value has gathered a full house of Oscar nominations for its principal actors Renate Reinsve, Stellan Skarsgård, Inga Ibsdotter Lilleaas and Elle Fanning, so it’s interesting to hear long-time editor Olivier Bugge Coutté explain how writer-director Joachim Trier’s approach has evolved over successive films.

“At the beginning he was a little bit more controlled in terms of the latitude of performances,” he explained during an interview with CinemaEditor magazine. “Over time he’s gravitated to what might be called ‘jazz takes’. That’s not to say there’s improvisation but there is much greater freedom for the actors to move around the core of the text. Sentimental Value is the furthest he’s gone in allowing actors to deliver a different emphasis or change words provided it remains in the spirit of the scene.”

This means Coutté received a lot of material with different tones. “Joachim often says that he's looking for a life-like moment, an event to happen that feels representative of a moment of life. So, the edit becomes a meticulous process of stitching together from a huge variety of possibilities.”

The brattish character of Marty Mauser in Marty Supreme may not be to everyone’s taste, but Timothée Chalamet’s infectious performance and the ping-pong pace of the screwball drama gloss over his faults. According to co-writers and co-editors Josh Safdie (who also directs) and Ronald Bronstein (also a producer), the intensity of the script and its convoluted storyline come from an equally combative writing and editorial process.

“Everything gets highly abstract by the time it reaches the screen but every exchange is coming from some lived-in experience,” Bronstein says. “So we're sharing very intimate things with each other. The process is invasive and we're not nice in the sense of not being sensitive to the other's experience. One person throws an idea out and then immediately the other person is tying it to a chair and beating the shit out of it, trying to get it to confess its weaknesses.”

You might think that juggling the huge volume of material for the Grand Prix action scenes was the most difficult task for Stephen Mirrione in F1: The Movie. This included reviewing and selecting takes from 20 cameras of actual broadcast race footage, combined with original material filmed by DP Claudio Miranda and enhanced with layers of VFX.

“You're talking about less than a minute or so of material, versus hours and hours and hours,” he says of the workflow.

Yet keying into the main characters was most important for the editor who won the Oscar for editing Traffic in 2000. “Even in terms of the storytelling style, it took on Sonny's personality — a little bit looser than the world he's in, a little bit crazier, unexpected,” he says, about Brad Pitt’s maverick racer. That held true for the romance between Sonny and his lead engineer (Kerry Condon).

“One of the first scenes that they did together was that scene in the pub where he's asking her about the car and he goes a little bit reckless with her, and she pushes back at him. Once we knew that part of the relationship was dialled in, and that she could really give it back to him, then we knew that the script was working.”

The genre-fluid Sinners switches in and out of supernatural and vampire elements, mixing in comedy, erotica, romance and music. “A lot of what editors do is play with subtext that allows people to engage with the movie on a subconscious level,” says Michael P Shawver, who lands his first Oscar nod. “There's one part when Annie (Wunmi Mosaku) makes the Mojo bag for Smoke (Michael B Jordan) and I realised she lights a match three times as part of her ritual. Then I noticed that at the end of the movie, when Smoke gets the cigarette from Hogwood (Dave Maldonado), he lights the lighter three times. I didn't ask if that was intentional but in the film’s prologue, I took those same match strikes and put them as the first thing you hear in the movie after the music comes in. It’s the rule of three. Having three strikes three times in the movie. Did it do anything? I hope so, but it's stuff like that that I like.”

Such subtlety is in marked contrast to movies like Black Panther: Wakanda Forever, which Shawver also cut for Coogler. “In the Marvel world you have to over-explain. It's very complex. Things are happening. You don't want people to be like doing the math,” he says. “In this movie, because of how good the performances were, the cinematography, the costumes, the writing, even if it wasn't over-explained we trust our audience to absorb that for the actual experience it is without worrying about explaining everything.”

Action scenes are interspersed with slower paced dialogue in One Battle After Another and these peaks and troughs become literal in the mesmeric final car chase dubbed the River of Hills.

Having broadly mapped the sequence out, editor Andy Jurgensen started by making selects of different camera views: in front of and behind Willa’s [Chase Infiniti] car, and the cars in foreground and background shots. “Then I pulled together the best reactions from Willa and the shots where she's looking in the rearview mirror,” he explained to CinemaEditor.  “After that it was a case of experimenting, piecing together, shaving things down. We didn't have Jonny Greenwood’s score at first so we sent him a really long cut, and then he sent something to us with that percussive beat. The sound department elevated it to another level.”

It helped to project the sequence at full VistaVision scale. “I’d sometimes sit right in front of the screen and play it loud and try to get that feeling of motion sickness. It helped me figure out where people's eyes were going to land and to calibrate the rhythm of everything for a theatrical experience because we knew this would be shown in IMAX.”

Best Visual Effects

Having ‘solved’ water in The Way of Water, James Cameron and the team at Wētā FX in New Zealand turn their skills to fire in all its multiple forms for Avatar: Fire and Ash. Of the film’s 3,500 FX shots, more than 1,000 contain digital fire, ranging from flaming arrows and flamethrowers to massive explosions and fire tornadoes.

“Physical fire is really hard to control, so we had to come up with how to bend the physics towards the direction that Jim was giving it,” explained Wētā senior VFX supervisor Joe Letteri to VFX World. “He was very specific where he wanted the fire, what kind of speed, rate, size, how much or how little energy.”

Cameron places as much emphasis as ever on the performance capture in his storytelling, which he distinguishes from genAI actors, a technology he has called “horrifying”. In post, the story is edited first based on the captured performances before Wētā applies facial animation and the CG backgrounds, and the movie is then re-edited all over again.

When you set out to make the most authentic racing film ever made, you're not supposed to notice the visual effects. That was the brief that director Joseph Kosinski gave Framestore supervisor Ryan Tudhope, following on from their partnership on Top Gun: Maverick. For F1: The Movie, Framestore blended shots of Brad Pitt (as faded hero Sonny Hayes) and Damson Idris (as protégé Joshua Pearce) stunt driving with real Formula One broadcast footage, using detailed digital skins reconstructed frame by frame. Six Formula 1 circuits were scanned using an eight-camera array, allowing for millimetre-accurate match animation of racing environments. The shattering of carbon fibre debris, sparks, tyre deformation and engine smoke are all rooted in real racing incidents.

The Lost Bus might have got lost in the public eye given its almost straight-to-Apple TV release, but its docu-dramatisation of a real-life Californian wildfire is everything you’d expect from the director of Captain Phillips. It also vies with Avatar for the VFX realism of its fire, ranging in intensity from crackling bushes to hellscape inferno.

Paul Greengrass wanted authenticity as far as safety would permit for the journey of the yellow school bus loaded with children and driven by Matthew McConaughey. Live-action exteriors, mostly shot on a backlot in New Mexico, were augmented by teams of vendors including ILM, beloFX, Cinesite, Outpost VFX and RISE. Outpost’s main task was 100+ shots of an intense smoke-filled trailer park sequence. Cinesite contributed 200 shots including massive smoke plumes, drifting ashes, heavy dust, and fire-driven atmospherics to show the bus escaping through a burning landscape.

Much of the environment around the bus was digitally enhanced with moving trees, flying debris, shaking power lines and blowing grass, while scenes inside the bus were enhanced with CG backgrounds, digital cars, dust layers, and glowing embers. Everything was matched carefully with live-action plates using compositing, lighting, and tracking work to make the danger feel real and immediate. 

Not an obviously VFX-heavy film, Sinners does rely on a dual performance from Michael B. Jordan as twins Smoke and Stack. “For half of the shots we went with a classic split screen approach, where we shot Michael twice, and then combined the two passes,” explains VFX Supervisor Michael Ralla. “With the other 50 per cent of scenes where there's a lot of physical interaction between the twins we developed what we call the Halo rig.”

This is a carbon fibre harness with a ring of 10-12 cameras that allowed them to capture Jordan’s whole head (not just facial) performance in 360 degrees. Australian VFX shop Rising Sun Pictures took the data to recreate Jordan’s performance replacing a body double’s head.

The film’s 1000 VFX shots, which also include vampire work, were completed by Storm Studios, ILM, Base FX, Light VFX and Outpost VFX.

Back to the future for this franchise, which kickstarted the era of photoreal FX in 1993, landing Industrial Light & Magic the Best Visual Effects Oscar for its work – all 52 shots of it. Jurassic World Rebirth sees ILM delivering 1,500 shots, more than any film in the series’ history.

“There’s a narrative in the press about how everything is done in-camera,” says VFX Supervisor David Vickery. “Well, yeah, everything is shot in camera because you can’t ‘shoot’ visual effects. What you’re trying to capture is as many practical things on set as you can because you can’t go back and get it in post-production.”

Director Gareth Edwards tasked cinematographer John Mathieson with shooting on film, recalling the aesthetic of Steven Spielberg’s original.

“For a while, it was very ‘in’ to be shooting on green screen, or fashionable to use animatronics, and that’s what the public wanted to see,” Vickery adds. “Now there’s a desire to see things filmed on location, and there’s an acceptance of visual effects, so filmmakers respond to that in the way they make their films.”

 

Tuesday, 10 March 2026

Rakuten TV doubles down on ad-supported streaming in Europe

Streaming Media

article here

In a market defined by subscription fatigue and advertising reinvention, Rakuten TV is betting that FAST is not simply an add-on to streaming but one of its defining next chapters.

“The appetite is huge, and it’s growing,” Cedric Dufour, CEO of Rakuten TV, tells Streaming Media. “We are seeing a real shift in consumer behaviour and in advertising budgets. The momentum behind FAST is not just cyclical; it’s structural.”

Rakuten TV was early to the opportunity. Founded in Spain in 2010 as a subscription service before transitioning into transactional VOD, by late 2019/early 2020 it had pivoted to AVOD and FAST, becoming the first platform to roll out those propositions across 42 countries in Europe.

“Developments in the US had signalled that premium content could thrive in a free, ad-supported environment,” Dufour explains. “We realised there was room for free content with ads as a new way of delivering content. So we invested heavily in AVOD and FAST.”

The platform now distributes approximately 500 FAST channels across Europe, including around 120 owned-and-operated channels reaching more than 150 million households. Individual markets typically carry about 100 channels, balancing Rakuten-owned IP with third-party offerings such as CNN and a range of sports, news and lifestyle brands.

“Technically, we could offer 250 channels in each country,” Dufour says. “But consumers already complain about too much content and too many choices. The priority is quality and curation, not quantity.”

Virtuous circle

The early days were not straightforward. European audiences were unfamiliar with FAST channels and often confused them with traditional linear broadcast channels. Studios, too, were cautious.

“There was reluctance,” Dufour admits. “Studios were concerned that if they opened their catalogue to free ad-supported distribution, it would cannibalise subscription or transactional revenues.”

The breakthrough came through monetisation. “We were able to demonstrate that FAST could generate meaningful advertising revenue without eroding other windows. As performance data improved, content supply followed.”

With better monetisation came more catalogue access. “With more qualitative content came larger audiences. And with larger audiences came more advertising revenue.”

That “virtuous circle” is now firmly established. Advertisers are steadily reallocating budgets from traditional linear TV into connected TV (CTV), drawn by targeting precision, measurable performance and access to younger viewers.

Recent internal research shows that 70% of TV viewers watch FAST channels at least once per week. Among those viewers, a significant share — particularly younger demographics — no longer consume traditional linear television.

“If advertisers want to reach younger audiences, CTV is essential,” Dufour says. “If they stay only on traditional TV, they will not reach this population.”

That said, Dufour believes CTV is additive to linear. “They will coexist,” he says. “There is space for both, just as streaming did not eliminate cinema.”

Ad loads on Rakuten TV’s FAST channels are broadly comparable to traditional TV, he says, but user perception differs.

“Better targeting, geolocation capabilities and first-party data (where user consent is granted) allow for more relevant advertising, which improves tolerance.”

Telcos, once sceptical, have also shifted position. He says, “Three or four years ago, many operators questioned the need for FAST alongside hundreds of broadcast channels. Now, they recognise the distinction — and the incremental value.

“The advertising model with FAST on CTV is different. The consumption model is different. It reaches new audiences,” he says.

Movies remain Rakuten TV’s strongest-performing genre, reflecting its origins in film distribution. Last December, for instance, it launched its flagship FAST movie channels (themed around action, romance, comedy and crime), with over 100 hours of curated on-demand films, on French telco provider Free’s Ciné service.

Drama and action also perform strongly. Notably, single-IP channels have exceeded expectations. Dedicated channels built around series such as Alerta Cobra and 21 Jump Street have delivered consistent engagement.

“You might think audiences would tire of watching the same show continuously,” Dufour admits. “But the performance proves otherwise.”

Local nuance matters. Operating across 42 territories gives Rakuten TV a substantial comparative data set. Japanese manga performs particularly well in France and Germany, for example, but less strongly in other markets. In Poland, the platform operates a dedicated local-language movie channel to address domestic demand.

“Local content is very important,” Dufour says. “It must sit alongside global content.”

Partnerships with smart TV manufacturers including Samsung TV+, LG Channels, Hisense VIDAA, TCL Channels, Xiaomi TV+, Free and Netgem have secured branded remote-control buttons, home-screen placements and EPG integrations.

“Our bet from the beginning was on television — because we’re primarily about movies, and movies are best enjoyed on a big screen. Today, around 90% of our viewing still happens on TV screens. However, we recognise growing consumption on mobile and tablets and are adapting accordingly. While TV remains our core, we aim to be present wherever audiences want to access content.”

From B2C to B2B expansion

In recent years, Rakuten TV has expanded beyond its own D2C platform. Through Rakuten TV Enterprise, it now distributes channels and powers VOD stores for partners.

An agreement announced last week with Prime Video will see Amazon’s platform carry Rakuten FAST channels in Spain, Italy and Germany.

“We could have said they are a competitor but we do not decide where users watch content. Therefore, expanding distribution across multiple platforms - smart TVs, telcos, and streamers - is central to our strategy. Our own app remains important, but future growth will primarily come from expanding touchpoints and partnerships across Europe and beyond.”

Telecom partnerships further extend reach. Rakuten TV operates the VOD store for Orange in Spain and works with Germany’s 1&1. Last December its app became available on Virgin TV in the UK, “significantly expanding reach across one of Europe’s most competitive markets”.

It has previously funded content, notably as part of its contractual obligation to operate in Spain, but Dufour says covering production costs through advertising alone proved challenging. “We scaled back original production to focus on channel curation and distribution.”

The company is a division – and a relatively small one at that – of Japanese parent Rakuten Group, which is valued at US$11.05 billion. Formed in 1997, the group has built a plethora of digital services around its core online retail platform, including fintech, travel and mobile. Its scale also supports cross-platform loyalty initiatives. In France, for example, Rakuten e-commerce customers can redeem loyalty points for Rakuten TV rentals or ebooks via the Rakuten Kobo app.

“The strength of the ecosystem is differentiation,” Dufour says.

For now, Rakuten TV remains focused on Europe, but the US market is under consideration. “It is very saturated,” Dufour acknowledges. “We are having discussions around distributing selected channels, potentially leveraging Spanish-language or movie-focused offerings.

“Beyond that, opportunities in the Middle East and parts of Asia are being evaluated, subject to rights agreements. In VOD, scale is everything. If you do not reach scale, content costs are too high.”

 

ends

 

Monday, 9 March 2026

Barbara Ford Grant: “A lot is happening behind closed doors”

IBC

In a world where production capability is ubiquitous and content costs nothing, creative vision is the only thing that matters, according to VFX pioneer and Hollywood consultant Barbara Ford Grant.

article here

Barbara Ford Grant hasn’t wavered in her belief since making an experimental short film from scratch using AI tools a year ago.

“You don’t prompt your way into a movie,” she says. “You build workflows and pipelines. You layer AI into existing processes. What worries me most is that there’s still not enough understanding of how much AI is already integrated into everything we do.”

A creative technologist who began her career as a digital artist before leading award-winning teams and projects including Game of Thrones, Alice in Wonderland and Shrek, Ford Grant is a pioneering creative executive whose advice is sought across Hollywood.

“When I entered the industry, computer graphics were the disruptive technology,” she says. “People were worried then because it displaced old methods — though many still exist, like stop motion and miniatures. But what CGI really unlocked was new storytelling. You couldn’t have made Toy Story, Jurassic Park, or Avatar before CGI. That’s what I hope we see again — entirely new forms of storytelling.”

Movie studios would like to do things better, faster and cheaper - or at least two of the three.  Ford Grant only sees a race toward faster and cheaper. “I’m not seeing enough focus on using these tools to make content more interesting, to truly empower talent or push culture forward.”

Currently a consultant to Paramount and a board member of Sohonet, Ford Grant has been an AI technologies strategy consultant to Marvel Studios and has held key executive leadership roles at HBO, DreamWorks Animation, Sony Pictures Imageworks, Digital Domain and Walt Disney Studios. She was the first woman to chair the Academy’s 95-year-old Scientific and Technical Awards Committee (2018-2024) and is a member of both the Television Academy and the Motion Picture Academy.

“It’s pretty clear that the technical difficulty to produce content - moving images, convincing sound, plausible narratives - is evaporating fast,” she says. “What took weeks can now take minutes. What cost thousands can now cost tens. The industrial complex approach to production, which has been the gatekeeper for determining who gets a seat at the table for many decades, is dissolving. But you cannot train a model on vision and judgement that takes a lifetime to develop.”

This creates an interesting paradox that she believes will define the next era of filmmaking: “As content floods every available channel, the scarcity shifts entirely to the human capacities that determine whether any of it is worth watching.”

The power of creativity

On the plus side, the power of creativity has always been with artists. “The possibilities have never been greater for them to push limits. I would be surprised if studios don’t see enabling artists as morally imperative — and also strategically necessary. Not doing so would be an existential threat because others will step in.”

However, she urges artists to step up and let AI in. “Different parts of the ecosystem have different points of view about AI. Studios have one perspective. Artists have another. Early adopters have their own. It’s not even just about where you sit — it’s about how you visualise the role of these tools.

“Particularly on the artist side, there’s this notion of choosing to resist, or virtue signalling that they’re not participating,” she says. “But the fact is, AI is already embedded in how people consume entertainment, how content is marketed, how studios plan a ten-year slate or pipeline.

“I don’t think creatives should be forced to use AI but it already exists within the infrastructure, and that’s only growing. There are people who aren’t negotiating labour deals or representing unions, who are fully embracing these tools. They see this as a watershed moment — access to capabilities they didn’t have before. And they’re going to move at rapid speed, regardless of what legacy institutions do.”

One AI battle after another

In the US, actors’ union SAG-AFTRA is renegotiating a new deal with the studios less than three years after the strikes that brought production to a halt.

GenAI was a hot topic then and is among the union’s priorities now. In the interim, studios have advanced their AI strategies including training models on in-house content libraries and promoting executives with an AI-first brief.

Lionsgate, for instance, tied the knot with AI firm Runway to develop ‘capital-efficient content creation opportunities’ and recently hired its first chief AI officer Kathleen Grace. She joined from Vermillio, a platform that helped content owners and talent track, authenticate and monetise the use of their work in AI models.

Disney recently inked a $1bn licensing deal with OpenAI allowing users to make content with Disney characters. Its new CEO Josh D’Amaro has vowed to integrate AI into production workflows, albeit doubling down on artist creativity as the company’s strongest selling point.

“After the last strike, a lot of people left the industry and won’t return,” says Ford Grant who is not involved in negotiations. “Some of that work isn’t coming back. Arguably YouTube was the only winner.

“I hope that unions think strategically about their future role in entertainment, rather than trying to claw back leverage that may no longer exist. They need to understand what AI is, where it’s going, what they can control, and what value they uniquely offer.”

Practical lessons in AI

During the 2023 strikes, Ford Grant found herself with extra time and decided to make a short film to explore the possibilities and limitations of AI filmmaking.

“I’d been working on machine learning R&D for about 15 years, but once generative video tools like Midjourney came out, I wanted to play around with them unencumbered by the studio system.”

Under the banner of BFG Productions, she developed, wrote and produced a 22-minute film, Unhoused, on a shoestring $40,000 budget.  The majority of that was used to shoot the production traditionally with real actors on location with a union crew.

“Humans are still the best performers. You’ll still shoot practical photography. But if you want to accelerate how you blend that footage with generative content — matching colour, lensing, camera movement — what does that workflow look like? That’s what I’ve focused on.”

She incorporated different models into different parts of the process, including writing software to connect practical production with generative compositing.

“Foundation models are stabilising. What matters now is what you build on top, such as LoRAs (low-rank adaptations, lightweight fine-tuning layers for models), ControlNets (which give precise control over AI image generation) and custom workflows. I’m also exploring tools like Cursor, Claude and Figma — asking what the ‘new studio’ looks like.”

The immediate future of production won’t be purely generative. “It will be hybrid — traditional VFX combined with generative techniques,” she says. “A lot is happening behind closed doors. AI is being discussed everywhere.  What we see publicly right now is mostly demo material. The most promising work I’ve seen privately involves animation and non-photorealistic character work.”

Future of cinema

She worries that cinema could atrophy unless there’s innovation in its production and presentation.

“Someone once said cinema risks becoming like opera; it will still exist, but for a nostalgic, aging audience. I hope it retains enough revenue to remain a meaningful distribution model, not something supported by benefactors.”

Immersive multi-sensory experiences designed for venues like Sphere or Cosm could be one salvation. Crucially, they are also communal experiences. Cosm calls it Shared Reality.

Ford Grant, who once worked at immersive art project Meow Wolf, says connected physical-digital narrative worlds have always inspired her.

Game of Thrones (on which she also worked) came close with its world-building across series, podcasts, VR and live activations. “The next step is connecting those experiences in real time across locations. Imagine being in Cosm in Los Angeles and feeling connected to someone experiencing it in Barcelona. We’re not fully there yet technologically. But it’s coming.”

As keyed into technology as she is, Ford Grant maintains that storytelling is nothing without human taste and judgement, and it is this curation that she sees as the most rewarding role for creatives.

“Creativity is the accumulated judgment that comes from years of sitting in the back of screenings watching how people respond, understanding why one cut lands emotionally and another dies, recognising when something is technically correct but spiritually dead on arrival.”

Take the craft of the cinematographer. AI tools might help them test 30 filters instantly, emulate different lenses and explore visual ideas rapidly, but it is their trained eye for an image in service of the story that should stand them in good stead in the era of instant image-making.

“The individual impulse for an aesthetic and a sensibility is something that a model can’t predict. Then there’s the alchemy of the group. Filmmaking is a team sport in which each person brings expertise, instinct, and reaction to the material and the world around them. That creative mix is hard to program.

“That’s why I’m excited that the playing field is levelling in a way that has the potential to reward the very thing we’ve always valued most - the quality of the idea, the depth of the vision, the truth of the telling. Not who has the biggest budget or the most expensive equipment, but who has something genuine to say and knows how to say it.”

ends

Friday, 6 March 2026

What standards for drone tech?

IEC E-Tech

article here

As the market for drones rapidly expands, standardization organizations and technical committees need to cooperate to avoid duplication.

Unmanned Aerial Vehicles (UAVs) – drones, in other words – are becoming ubiquitous across multiple industrial and commercial sectors. Some pundits predict that by 2030 they will be as common in the sky as cars are on the road. While estimates of market size vary, researchers agree on the accelerated growth of drones over the next decade. Forecasts of the global UAV market’s value range from USD 57,8 billion by 2030 to USD 68,64 billion by 2034.

Their utility is expanding in applications as diverse as capturing previously unattainable shots for filmmakers, aerial analysis and fertilization of crops, mail delivery and even airborne taxis. Several technological developments are driving this adoption.

Chief among these is the use of artificial intelligence (AI), integrated in a variety of ways to boost performance and capabilities. There are also developments in ways of powering drones that address their main limitation: flight time.

AI improves precision

While drone technology has existed for decades, it has mainly, for reasons of cost, been concentrated in the hands of the military. That changed when lower-cost lightweight drones carrying cameras emerged around 2010. The launch of the Phantom series of drones by a Chinese developer from 2013 onwards catalyzed the consumer and commercial UAV market.

While high-quality image capture from these drones took off – especially in industries like filmmaking – “the maturity for other applications such as mapping, surveying and inspection was very low,” explains Hendrik Bödecker, CFO of a Hamburg-based drone industry market researcher and consultancy firm. “There was very limited software available to analyze this data. For applications such as mapping or surveying, you needed optimization software to extract accurate information.”

This has changed dramatically in recent years, with AI and machine learning (ML) being used to analyze large volumes of data gathered from sensors, for example in mapping, spectral analysis and infrastructure inspections. “If you collect thousands of images during a bridge or power line inspection, traditional methods would require inspectors to manually review everything. Today, ML models can identify anomalies such as corrosion or structural defects based on trained image datasets,” Bödecker explains.

In precision agriculture, for example, the UAV, sensor and software package of one US developer can analyze up to 15 spectral bands over farmland. “AI can dramatically speed up post-processing, identifying where crops need more water, fertilizer  or pest intervention,” explains Bill Irby, the company’s CEO. “Removing humans from the analysis loop allows for faster decision-making, which improves yields and supports global food security.”

Applying AI to video streams for surveillance or analysis “can significantly enhance advanced target recognition, helping operators identify and classify objects on the ground more quickly and efficiently,” Irby adds. Ten years ago, mapping accuracy might have been around ten centimetres. Today, thanks largely to software improvements, accuracy can be one centimetre or better.

An AI-powered drone combining optical and electronic fog-penetration technologies – a claimed world first – is said to boost clarity by up to 50% in rain or fog. However, Bödecker notes that this type of AI “is really software-based data analytics and is independent of whether the data comes from a drone, a crane or another platform”.

The IEC and ISO have formed a joint subcommittee, ISO/IEC JTC 1/SC 42, to standardize multiple AI requirements and applications. One of its standards series, ISO/IEC 5259, deals with the quality of data for analytics and machine learning.

Autonomous flying not ready for take-off

Software development is also a key enabler for programming flights and controlling drones remotely, as long as there is connectivity. For beyond-visual-line-of-sight (BVLOS) operations, users must fully trust the programmed flight path from point A to point B. This is where the first major AI challenge appears: the unmanned aerial system (UAS) that controls the drone in the air. Some companies claim to have intelligent onboard communication or decision-making, but this is often limited to basic computer vision, such as obstacle detection. Even then, a drone may not be able to decide independently whether to move left or right in complex environments.

“Many companies claim to have ‘AI drones,’ but that is not really true,” says Bödecker. “Drones are not yet capable of autonomously flying from A to B beyond visual line of sight without human oversight. If you are inspecting a tower beyond a mountain, for example, you still need connectivity, visual contact or a human operator. The idea of fully pilotless commercial missions is not yet a reality – it remains hype.”

The industry is moving toward autonomous, pilot-free operation, but according to Irby, there will always need to be a human overseeing operations. Even so, AI can help. “Whether in Europe, the US or elsewhere, AI tools can help drones navigate approved flight corridors and remain within permitted parameters such as altitude and speed, while avoiding restricted areas,” Irby says. “That kind of capability would greatly enhance global drone operations.”

One key concept is “one-to-many” control, where a single operator manages multiple drones. AI enables that by handling navigation, compliance and routine decision-making. “From a business perspective, this increases efficiency and reduces costs,” Irby continues. “That said, regulators like the Federal Aviation Administration (FAA) in the US or the European Union Aviation Safety Agency (EASA) will likely continue to require a human pilot ultimately responsible for safety. It will be a long time before pilots are fully removed from the loop.”

Autonomous technology can be confused with automated technology. So-called swarm or teaming technology is one example. In theory, swarm intelligence would involve drones dynamically communicating and coordinating with each other in real time. This is not the same as the pre-programmed coordination of hundreds or thousands of drones used in light shows.

“There are proof-of-concept demonstrations, but no widely deployed, proven systems,” says Bödecker. “Regulation is also a major barrier. For now, swarm technology remains largely hyped outside of limited military research.”

How to solve the flight time challenge

Flight time is widely considered one of the main limitations of drone development. This is closely tied to battery technology, weight and power efficiency. Most drones – over 99% – are battery powered (typically lithium-ion), estimates Bödecker.

The issue with batteries is power density. The more energy you can pack into the same weight, the longer a drone can fly. The challenge is always balancing the added weight of batteries against performance gains.

Irby claims his systems currently fly for up to 90 minutes. If battery density were doubled, flight time could theoretically double as well. Most drones (around 97%, according to Bödecker) use multi-rotors and operate similarly to helicopters. They can take off and land vertically, then move horizontally once airborne, making them a good option for flying in tight spaces. However, “multi-rotors consume a lot of power, especially for hovering and take-off, which limits flight time,” Bödecker says.

With a fixed-wing aircraft, once in forward motion, the wings generate lift through airflow and pressure differentials. According to Irby, whose products use a fixed-wing design, this “dramatically improves efficiency”.

Nonetheless, Bödecker is sceptical of most manufacturer claims. In real-world conditions (when factors like wind drag and payload are taken into account), most drones achieve around 25 to 35 minutes of flight time, he says, even if manufacturers claim up to 50 minutes under ideal conditions. In practice, this limitation is often acceptable because many commercial use cases do not require longer flight times.

For applications such as power line inspections, longer flight times could be useful. In these cases, drones powered by two-stroke combustion engines can fly for up to two hours and carry heavy payloads. However, they are expensive – often costing over EUR 100 000 – require significant maintenance, are very loud and face regulatory challenges due to weight and noise.

Standards for hybrid systems

Hybrid systems combining batteries and/or solar photovoltaic (PV) cells with combustion engines are one solution: battery power reduces noise during take-off and landing before the aircraft switches to a combustion engine in flight.

Recognizing the need for specific guidance documents in this area, the technical committee which prepares standards for solar PV systems has formed a project team, IEC TC 82 PT 600, on vehicle-integrated photovoltaic systems (VIPV) and is planning to develop two new technical reports in this area.

The safety and performance of lithium-ion cells used in batteries are standardized by the IEC. The IEC 62660 series on secondary lithium-ion cells for the propulsion of electric vehicles (EVs) is a three-part series covering performance testing, reliability tests and safety requirements. EV battery packs themselves are standardized by ISO 12405-2.

Hybrid fuel- and electric-powered designs can be more suitable for larger aircraft, especially those operating on conventional aviation fuel since electrical systems can extend flight time. Hybrid power configurations can also benefit applications requiring heavier payloads, such as the Hydra 400, which uses electric rotors and jet turbines for lift and propulsion of payloads up to 400 kg.

Hydrogen is a way forward

Due to the high energy density of hydrogen, hydrogen-powered drones are another emerging technology permitting longer flights with heavier payloads than battery technology. A claimed “world’s first” commercial hydrogen fuel-cell power pack for drones was launched last year in a design which retains batteries for redundancy. Meanwhile, a Ukrainian developer behind a combustion-engine drone capable of flying for 28 hours in military operations recently reworked the design with hydrogen fuelling an electric motor. While this cut flight times in half, its advantages include a “negligible thermal signature”, meaning it avoids being detected by infrared sensors.

The main challenges with hydrogen are infrastructure (for refuelling, for instance), storage weight (think massive gas tanks) and cost. The technology is improving, and more manufacturers are adopting it, but it remains a niche solution for now. Some existing standards for hydrogen applications could contribute to the overall safety of hydrogen-powered drones.

As hydrogen is odourless and invisible, leaks can be difficult to detect and thus potentially dangerous. IEC TC 31 prepares standards for equipment used in explosive and hazardous atmospheres. To ensure global compliance and safety with TC 31 standards, IECEx, the IEC Conformity Assessment (CA) System overseeing hydrogen-related certifications, is expanding its scope for testing and certification in the area of hydrogen technologies. The CA system has partnered with many other international organizations, including ISO, establishing a formal liaison with ISO/TC 197 on hydrogen technologies and, more recently, with IEC TC 105 for fuel cells. This TC has published IEC 62282-4-202, which covers performance test methods for fuel cell power systems for unmanned aircraft, encompassing start-up, shutdown, power output, continuous running time, warning and monitoring, and environmental compatibility, among other areas.

More standards are needed

According to a report by the British Standards Institution (BSI), there are over 650 standards applicable to UAVs/UAS, many of which were originally developed for manned aviation and are either directly transferable or will require adaptation. Further harmonization is needed to fully address drone design, testing and operation.

The IEC has two technical committees, IEC TC 97 and IEC TC 107, specifically dedicated to the aviation sector, which have yet to embark on specific standards for drones. But it is quite probable that many of their standards can be adapted to this rapidly growing market.

However, “the drone value chain is perhaps the most diverse and complicated of any industry or consortium that exists in the world,” warns Kishor Narang, principal design strategist and architect at an India-based consultancy which leads multiple global standardization initiatives at the IEC, ISO, ITU and IETF, notably in the area of smart cities and EVs. “Agreeing on common harmonized standards is a major hurdle for all of us”, he adds.

The principal categories where standards could help shape the sector are related to safety, public acceptance and reinforcing regulations. Specific subtopics that need addressing include operational risk assessment (ORA), maintenance, data capture and processing.

“Some drone stakeholders confuse regulation and standards,” says Narang. “Globally, there is a lack of awareness of existing standards and standards being developed across the aviation and industry sectors. A perceived lack of clarity around safety regulations creates uncertainty around priorities. For instance, there is a potential divergence between the US and Europe on some key technical standards, for example for BVLOS.”

Narang also highlights duplication of standards and best-practice activity across organizations within industry groups such as construction. In addition, some regulations are applicable only to the outdoor use of UAS. He stresses that “a sophisticated management system is desperately needed as drones become more widely used. The most basic functions of a drone management system are creating, editing and dispatching missions, keeping track of the real-time status of each drone, and storing and analyzing data in a structured format.”

In addition to his many roles in the IEC, Narang is an expert within a recently formed IEC System Committee which deals with electrified transport, and some experts think it should take on the job of coordinating standardization in that space. Whatever the outcome, the need for cooperation between standards organizations is paramount.

 


Public transport is going autonomous

IEC E-Tech

Public transport systems are gradually becoming autonomous, despite the many challenges, including for standards developers.

article here

Cities across the world are struggling with increased traffic, “resulting in billions of dollars in lost time, wasted fuel, and excess carbon emissions”, according to one provider of data and insight into worldwide transport systems. One solution could well be autonomous public transport systems, which city planners can organize to reduce traffic with little requirement for added personnel.

Driverless urban rail links such as those in Kobe, Japan; Lille, France; and London Docklands have been operational since the 1980s, with dozens of metro networks in cities from Doha to Lahore and Santiago following suit. In recent years, autonomous trams and driverless bus services have been enabled thanks to advances in the ways computers process data from cameras, sensors and scanners in response to live conditions.

Driverless transport will soon involve air as well as surface and underground systems. Advanced air mobility (AAM) vehicles such as air taxis are not yet mainstream, but developments are happening quickly. For example, there are plans for air taxi services between Heathrow and central London.

The pros and cons of autonomous public transport

There are four major constraints which impact the viability of autonomous public transport systems: the amount of investment required, regulatory hurdles, infrastructure readiness and cultural acceptance.

“These constraints vary, depending on the economy in question,” explains Ripin Kalra, a smart cities specialist inside the IEC and Senior Fellow (Participatory Urban Resilience & Environmental Assessment and Climate Policy) at the University of Westminster. “In labour-intensive economies, job creation may be prioritized. In others, particularly those facing labour shortages, autonomous transport is a way of addressing workforce gaps, in addition to being an answer to traffic jams.”

Human fatigue is also a major safety risk. Urban transport drivers face a lot of stress and abuse. “Many report harassment from passengers,” says Kalra. “Autonomous systems can operate longer, more consistently and without fatigue-related errors. Fleet utilization increases significantly when you remove human constraints like working hours, illness or personal commitments.”

There’s also a shortage of trained drivers in some countries. Germany is forecast to have a shortfall of 80 000 bus drivers by 2030. Trains, trams or buses often operate up to 15 hours a day, seven days a week, but drivers cannot. “Labour is often the most expensive component of any system and removing or reducing labour costs can improve the economic viability of public transport systems, which rely on public funding to operate,” Kalra adds.

Infrastructure is the biggest challenge

That said, building the infrastructure required for such systems remains the most expensive and complex barrier. Automated transport depends on data centres, high-speed connectivity (such as 5G), artificial intelligence (AI) processing and constant connectivity – all of which consume energy and water. Even scaling electric vehicles (EVs) has strained power grids in many countries. Automated transport adds further demands.

“Scaling up remains the major challenge,” says Kalra. “Many cities still face a deficit in basic public transport infrastructure. Some are wealthy and able to invest; others struggle to fund basic services. Birmingham, for instance, has faced severe financial constraints and must prioritize essentials like waste collection and education before considering major innovation projects, such as automated transport.”

While Glasgow, home to the world’s third-oldest metro network, is rolling out driverless underground trains this year, London, which opened the world’s first subway in 1863, has abandoned similar plans, citing cost. The mayor of London said the cost of upgrading just three sections of London’s vast Tube network would be GBP 20 billion. Opposition from the drivers’ trade union, over fears of job losses, was another concern.

Standards can help drive things forward, though, by making sure the tech is interoperable and works safely – reducing costs in the medium term. The IEC and ISO joint committee for information technology, JTC 1, established a subcommittee which publishes many of the foundational standards for the use of AI. SC 42 has published ISO/IEC TR 24030, a technical report which looks at many AI use cases, including transport in smart cities. It notably addresses how AI can be used for enhancing traffic management efficiency and infraction detection accuracy and for traffic signal optimization based on multi-source data fusion.

Public acceptance of automated systems is cultural and varies by country. In wealthier nations, regulatory scrutiny, liability concerns and public expectations around safety are much higher, and that can slow deployment. “In contrast, some developing nations are more open to experimentation and ‘leapfrogging’ to the latest technology, partly as a statement of modernization,” says Kalra.

Perceptions of automated transport can shift quickly. In San Francisco, for example, autonomous vehicles are now seen by the majority of passengers as significantly safer than human drivers sharing the road.

Trials with trams and trains

A section of Oslo’s tram network is hosting a year-long test of a tram fitted with computer vision. Supported by EU-Rail, the tram is operated remotely using technologies developed under the EU-funded research initiative FP2-R2DATO. The EU-Rail initiative has already helped launch Europe’s first passenger-carrying driverless train, operating on a 24 km stretch of open railway in the Czech Republic.

Both developments are significant in shifting the rollout of autonomous train and tram systems away from isolated sections of track, such as airport shuttle services, and onto networks with a greater variety of potential incidents, such as level crossings.

The EU lags behind some other parts of the world when it comes to autonomous trains, notably China, where the Wuhu Rail Transit monorail has been operating at Grade of Automation 4 for five years and where the state-owned rolling stock manufacturer CRRC has debuted what it claims is the world’s first driverless train operating at speeds of 200 km/h. (For more on the various grades of automation defined by the Society of Automotive Engineers (SAE), read Capturing data for autonomous vehicles | IEC e-tech.)

Standards can help

Depots, where trains are stored, maintained and refuelled, are relatively straightforward to automate. Branch lines, which cover about 30% of total track mileage in Europe, have lower maximum speeds, more homogeneous traffic (than main lines) and simpler regulatory requirements, making automation more manageable.

“There is still a ‘not invented here’ mindset in parts of the industry, but there is also growing recognition that modern technology can significantly improve how things are done,” explains Alex Haag, CEO and Co-Founder of an autonomous train systems developer based in Strasbourg and Munich. He draws parallels with the automotive industry, which was widely seen as old and closed, a place where startups couldn’t innovate, until perception changed with the launch of Tesla. Rail, he suggests, is at a similar inflection point.

“Regulation doesn’t prevent innovation, but it does require that things be done carefully and systematically,” says Haag. “At some point, innovation has to move from theory into the real world, which means dealing with hardware, safety certification and regulatory processes.”

That is also where standards come in. IEC TC 9 is the IEC technical committee that develops standards for electric equipment for railways. The different levels of automation are specified in IEC 62290-1, a TC 9 standard that establishes fundamental concepts for urban transport management and control systems. Grade of Automation 4, for instance, applies to trains that run automatically at all times, with door closing, obstacle detection and emergency situation handling all performed by the system. TC 9 also publishes IEC 62267, which specifies the safety requirements for automated urban guided transport. “Many cities simply don’t know where to begin,” says Kalra. “Standards can provide clarity and reduce risk.”
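As a rough illustration (a sketch loosely following IEC 62290-1 terminology, not drawn from the standard’s text itself), the four grades can be modelled as an ordered enumeration, with each grade shifting more responsibility from on-board staff to the system:

```python
from enum import IntEnum

class GradeOfAutomation(IntEnum):
    """Grades of Automation, loosely following IEC 62290-1 terminology."""
    GOA1 = 1  # driver operates the train; automatic train protection supervises
    GOA2 = 2  # automatic driving, but a driver remains in the cab to intervene
    GOA3 = 3  # driverless; an attendant on board handles emergencies
    GOA4 = 4  # unattended operation: door closing, obstacle and emergency
              # situation detection are all handled by the system

def requires_onboard_staff(goa: GradeOfAutomation) -> bool:
    """Only GoA4 runs with no staff on board at all."""
    return goa < GradeOfAutomation.GOA4

# The Wuhu monorail mentioned above operates at GoA4, i.e. fully unattended
assert not requires_onboard_staff(GradeOfAutomation.GOA4)
assert requires_onboard_staff(GradeOfAutomation.GOA2)
```

Using an ordered enum makes the “each grade automates more than the one below” relationship explicit, which mirrors how the standard’s grades are usually compared.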

Autonomous buses are on track

The first self-driving buses are now running to and from Rotterdam The Hague Airport, shuttling passengers on public roads, albeit a short distance, to the Meijersplein metro station. It is one of a number of Society of Automotive Engineers (SAE) Level 4 proofs of concept now live. Others include a 4 km route between two Gothenburg stations; a 5.3 km route connecting a university campus in Michigan; a 2.5 km route with nine stops in Arbon, Switzerland; and a 7 km stretch of a district in Hannover with 13 bus stops, 10 traffic lights and “multiple complex manoeuvres”, including roundabouts, side and parallel parking, and pedestrian crossings.

Research suggests there are more autonomous bus trials on city streets than autonomous train trials. Flexibility and cost are the key reasons. “Once you invest in rail tracks and infrastructure, you’re locked into that route,” says Kalra. “Buses, by contrast, can be redeployed to match demand. If a bus fails, the system continues operating; if a train fails, it can disrupt the entire line.”

Buses also have lower capital costs and shorter build times compared to rail or light rail systems. With technology in this space evolving rapidly, it is easier to upgrade or replace a bus fleet than fixed rail infrastructure. “Rail works extremely well in high-demand corridors, but buses offer adaptability, resilience and lower investment risk,” Kalra says.

Problems of integration and interoperability

“Interoperability is crucial,” says Kalra. “If systems communicate with one another, waiting times can be reduced and capacity adjusted dynamically. Ideally, transport networks should respond intelligently to passenger flows.”

However, full integration increases cyber security risks. “The more connected systems are, the more vulnerable they become to hacking or malicious interference,” he says. “So integration must be balanced with strong security standards.”

There is, however, no overarching framework for autonomous transportation as a complete system. “Today, multimodal transport systems in cities are fragmented,” says Kishor Narang, Vice Chair of the IEC Systems Committee for Smart Cities. “There are many initiatives and good intentions, but very few cities have a truly harmonized, end-to-end transportation system. If that foundation isn’t in place, integrating autonomous vehicles becomes an even bigger challenge.”

The vision may be compelling, and the technology is progressing, but cities are still waiting for individual technologies and systems to mature before attempting large-scale integration into multimodal transport networks.

“Cities are systems of systems,” says Narang, who is also Principal Design Strategist and Architect at an India-based consultancy. “Technology alone can’t solve these challenges, and neither can standards or policy on their own. What’s required is the integration of multiple technologies, transport modes and ecosystems into a coherent whole – and that’s a very complex task.”

 

Wednesday, 4 March 2026

MWC 2026: Telcos confront the hard economics of 5G

IBC

article here

With global 5G coverage now surpassing 50% but consumer willingness to pay barely shifting, operators at MWC argued that the next chapter must be defined by utilisation.
The mobile industry expects to generate US$11.3 trillion worldwide by 2030, but telco executives at Mobile World Congress are still concerned that their networks will end up as dumb pipes.
“Telco networks are the digital backbone of the economy,” said Christel Heydemann, CEO, Orange at the event in Barcelona. “We are everywhere, we are essential, and yet for years we have been called a utility, a commodity. When a network goes down it is never just technical. Public services, hospitals and factories stall. We power the system but we barely shape its value.”
Calling the mobile industry “the nervous system of the digital world”, Vivek Badrinath, Director General of trade body the GSMA, said: “Today more than anything the lack of scale is hindering operators from making the necessary investment. If we want to realise the full promise of 5G and make a healthy foundation for a future 6G we must first and foremost complete the rollout of 5G.”
5G Advanced is the latest and final phase of 5G’s implementation and a critical one if next-generation services built on new spectrum under 6G are to be realised.
However, the gap between countries pressing ahead with it and those with slower rollout is widening. “In China, they have factories and hospitals running on 5G Advanced,” said Badrinath. “These are not just proof of concept. These are real life services in commercial use. But here in Europe we are losing ground. There is much more to be done to unlock the potential of 5G.”
Operators look for 5G payback
Audience polling during an event on monetising 5G highlighted the same tension: strong confidence in the technology, but uncertainty about the business model.
“With 5G standalone networks, speeds of over 1,000 megabits per second are possible under ideal conditions,” said Mani Manimohan, Head of Digital Infrastructure Policy and Regulation at the GSMA. “The mobile industry has achieved extraordinary technological progress, but many operators are still struggling to translate that technological brilliance into incremental revenue.”
In the first phase of 5G, the main targets were spectrum allocation, coverage and faster deployment. Today, 5G covers more than half of the world’s population, and in some regions coverage reaches 80–90%. This progress came at the cost of very high capital expenditure: collectively, operators invested around US$250bn per year in network infrastructure and devices, according to the GSMA.
The question being posed by operators at MWC is whether the industry needs a shift in thinking — one where innovation is based on connectivity and new services, rather than simply squeezing more bits out of radio waves.
“We are moving from a phase defined by rollout to one defined by utilisation,” said Manimohan. “The next chapter of 5G will not be about coverage. It will be about capability, experience and new revenue.”
Ericsson’s November 2025 mobility research suggested that the shift is already underway, with around 65 differentiated 5G connectivity services in the market.
“These broadly fall into two categories: moment-based services and always-on premium services,” explained Marie Hogan, part of Ericsson’s 5G to 6G transition team.
The first is designed for short periods of high demand. “You might be an influencer at a concert and want to optimise your uplink for that moment, or a gamer who wants ultra-low latency for a specific session,” she said. “In those cases, customers may be willing to pay for better performance in a certain location or time window.”
The second category involves permanent or guaranteed service levels, often aimed at enterprises. “Premium services are always on,” she said. “A business may want optimised fixed wireless access, or a critical service may need priority connectivity at all times. It’s not always about speed — sometimes it’s about reliability or latency.”
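One way the two categories Hogan describes play out technically is in network slicing, where 3GPP defines standardised Slice/Service Types (SST) such as eMBB, URLLC and MIoT, and an operator maps a “moment-based” boost or an “always-on” enterprise guarantee to a slice profile. A minimal sketch, with the profile fields, service names and latency values all illustrative assumptions rather than any real operator API:

```python
from dataclasses import dataclass

# Standardised Slice/Service Type values from 3GPP TS 23.501
SST_EMBB, SST_URLLC, SST_MIOT = 1, 2, 3

@dataclass
class SliceProfile:
    sst: int             # Slice/Service Type the service is mapped onto
    always_on: bool      # always-on premium vs a moment-based, session-bound boost
    max_latency_ms: int  # target latency budget (illustrative figure)

def profile_for(service: str) -> SliceProfile:
    """Map the two service categories described above to slice profiles.
    Service names and values here are hypothetical, for illustration only."""
    if service == "cloud_gaming_session":   # moment-based, latency-sensitive
        return SliceProfile(SST_URLLC, always_on=False, max_latency_ms=20)
    if service == "enterprise_fwa":         # always-on premium fixed wireless access
        return SliceProfile(SST_EMBB, always_on=True, max_latency_ms=100)
    raise ValueError(f"unknown service: {service}")

assert profile_for("cloud_gaming_session").sst == SST_URLLC
assert profile_for("enterprise_fwa").always_on
```

The point of the sketch is the split itself: the moment-based boost is requested and torn down per session, while the enterprise profile persists as a standing guarantee.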
The journey from selling raw data to digital services
Jakob Greiner, VP European Affairs at Deutsche Telekom, said the industry had long promised advanced 5G use cases, and some are now becoming reality. “Five or ten years ago we showed slides about remote surgery and ultra-reliable networks,” he said. “Now we are starting to see real applications. That gives me optimism.”
However, he stressed that traditional best-effort internet will remain the foundation. “We are not replacing the normal internet. We are complementing it with specialised connectivity where it makes sense.”
Examples include cloud-gaming offers using network slicing, currently available to some DT customers at no extra cost. “From a European perspective, we are moving more slowly than in the US, but the direction is clear.”
He pointed to high-density environments such as stadiums as a clear future use case. “Thousands of fans, journalists and emergency teams all need reliable connectivity at the same time. In those situations, guaranteed performance has real value, and customers may be willing to pay for it.”
Kaan Terzioğlu, CEO of VEON Group, a digital operator based in Dubai, urged peers to “stop selling SMS and gigabytes of minutes” and instead provide “meaningful connectivity” services to customers.
“It means becoming a banking service and an entertainment service or providing an online consultation with your doctor,” he said. “Do that, and you gain loyalty and people spend more time and money with you.”
The real value now, he said, lay in offering services with “augmented intelligence”. VEON for instance is developing local language models in Bengali, Ukrainian, Punjabi and Uzbek rather than English, French or German because that’s what different populations deserve.
“Augmented Intelligence is going to reshape how humanity will progress,” Terzioğlu said. “The reality is that ‘homo augmentus’ is going to assassinate homo sapiens. In ten years or maybe sooner we will have the ability to organically connect to data [via our brains].”
“AI can augment our skills and make us very successful,” he said, “But the responsibility for shaping it is ours alone. The value systems we have built as humans are our only insurance.”
According to a GSMA Intelligence consumer survey, the average premium consumers say they are willing to pay for 5G is around 5%, and it has remained at that level since 2020. The largest group of respondents isn’t willing to pay anything extra at all.
“Part of the reason lies in the limits of the consumer proposition,” says Matthew Iji, Head of Forecasting and Modelling, GSMA Intelligence. “Many operators tried to monetise 5G through speed-based tiers and premium plans, only to discover willingness to pay for raw performance is capped.”
There was also an expectation 5G would have a single defining killer app. “Among those consumers who are not planning to upgrade, the most common reason given is that the existing network already does everything they need it to,” Iji said.
This leads to a more uncomfortable question: do consumers still care about the generational label at all? For many, the answer is probably no. They care about what they can do – stream, work, connect, consume – rather than which radio technology happens to be underneath.
If the first five years of 5G were about engineering, the next five will be about economics. The answer as to whether 5G was truly worth it will ultimately be written not in speed tests or coverage maps but in balance sheets.
Breaking the habit
Operators may want us to spend more time using their digital services but Aaron Paul, the Breaking Bad actor, argued for less reliance on our devices.
“I’m not anti-technology but I love that some states [in the US] are banning smartphones in schools,” he said in the keynote ‘Technology, Addiction and Balance’. “When I was a kid, they’d never let me bring in a TV and a game console. But now kids bring a device with access to everything. It’s not just unhealthy, it’s unsafe.”
Paul is backing the Light Phone, a new device with much of the functionality of a regular mobile but stripped of adverts, alerts and algorithms that keep us scrolling.
Kaiwei Tang, co‑founder and CEO of LIGHT, explained, “The mobile business model depends on engagement, but it’s hurting people’s well‑being. After 10–15 years of smartphones and social media, we should have spent more time understanding the tools we were creating. Now the same thing is happening with AI. Tools should improve human life – save us time so we can be creative, enjoy our relationships, and live fully.”