Streaming Media
AV Magazine
The fallout from war has impacted the UAE materially, with drone and missile strikes, and economically, with a nosedive in tourism and disruption to ‘business as usual’. Yet no-one expects decades of huge inward investment, technological development and foreign traffic to the region to contract for good.
The Iran conflict caught most people by surprise, not least AV Magazine and our respondents, who were some way down the route of replying to us before war broke out.
It can feel a little surreal, if not inappropriate, to be talking about the ‘wow factor’ of ProAV installations or luxury lifestyles for which the region is known when missiles are being fired just a few miles away.
That said, it is anticipated that this momentum is only temporarily hindered, which is why contributors based in the region are taking a longer-term view of prospects. The ProAV climate is judged resilient as the Emirates continue to evolve beyond oil-dependency.
United Arab Emirates University (UAEU) aspires to join the top 20 academic institutions in Asia – and the top 200 globally – by 2030. To modernise its communication infrastructure and enrich the overall learning experience, UAEU chose a suite of VITEC solutions for live streaming and digital signage across its 80-hectare campus.
Uncertain in the short-term
Calling the outlook for Emirati ProAV “generally positive”, Mahesh Singh, regional sales manager, UAE, Christie, says, “We are observing consistent demand
for integrated AV systems across corporate, education, hospitality, events and
retail. However, it’s important to acknowledge that ongoing political
uncertainty in the region could affect business operations in general.”
“Despite the current uncertainty in the region, the ProAV
climate in the UAE is extremely dynamic,” says Al Tikriti, chief product and
technology officer, Disguise.
As of early 2026, the market continued to see momentum driven by increased
AV-over-IP deployments, more European and U.S.-centric manufacturers and
solution providers enhancing their MEA (Middle East Africa) presence, and
growing expertise within the ProAV system integrator community.
“Post-pandemic recovery, tourism rebound, and continued diversification away from oil have created steady demand for high-end integrated systems across government, corporate, hospitality, education and entertainment sectors,” reports Johnny Hickman, sales account manager, Matrox. “Looking ahead, the UAE’s long-term development strategies, smart city initiatives, control room infrastructure, corporate expansion, and major events are expected to support continued demand for AV.”
Singh identifies the UAE’s commitment to smart cities as leading to increased investment in interactive displays, video walls, AV-over-IP solutions and networked control systems for public buildings and business campuses.
Furthermore, the UAE’s status as a global hub for conferences, corporate events, exhibitions and international tourism is “fuelling demand for immersive AV experiences, large LED walls and projection mapping,” he says.
Ambitious AV design
State funding may be on tap, but this is coupled with a willingness to go above and beyond in using AV to drive development in every sector.
“The bar for what counts as a successful installation is genuinely high here, and that pushes the whole market forward,” says Netgear’s head of sales, Annamalai Ar. “What is changing is the nature of the ambition. We are seeing a clear shift away from purely showpiece installations toward smarter, more integrated and sustainable solutions,” he adds.
He says the market is becoming more mature in planning, technical standards and lifecycle thinking.
“Clients are asking harder questions about system management and long-term supportability, not just headline performance. For Netgear, that shift is significant: it moves the conversation from spec sheets to network architecture, which is where we do our most meaningful work.”
Walid Tabet, Vitec’s regional director, says UAE’s AV culture is fast moving, highly visual, and aspirational; “it is shifting towards integrated, immersive, AI driven experiences across all verticals. What makes it exciting is how aggressively the country uses AV as a canvas for smart city living, and a global creator economy rather than just ‘equipment in a room’.”
Hickman calls local AV “ambitious, fun, luxury-focused and
experience-driven, yet always with an eye on value. [Eye-catching]
installations continually push market trends forward.”
What’s exciting, he says, is the speed of execution. “Projects here don’t just
adopt technology — they create environments that genuinely engage millions of
international visitors every year and enhance the daily experience of people
living in the country.”
Netgear continues to see strong investment across the board in a market that remains “active and genuinely competitive,” according to Ar. “The conversation has matured: clients are more focused on ROI and long-term value than on spec for spec’s sake.”
Singh judges the region’s approach to AV as “technologically progressive, increasingly localised yet deeply connected with global trends”. He elaborates: “It’s characterised by a strong infrastructure that is fostering the trend toward the use of digital and AI tools as artistic practice, with greater emphasis on regional stories and Arab cultural expression.”
Operating from October to April, Global Village Dubai functions as a hybrid of theme park, cultural fair and shopping destination, serving tourists and local visitors in a high-density, high-traffic environment.
Creative, strategic, and impactful
Technology in UAE is commonly treated as a foundational layer of new
developments, observes Tikriti; “This helps the region to push boundaries and
achieve things we’ve never seen before when it comes to next-generation
experience.”
He thinks the demand for immersive and technically sophisticated environments will only accelerate. “Technology is actively being considered as part of the core infrastructure of buildings and districts from the very beginning of the planning process. What’s exciting is the freedom here to rethink fundamentals – what a museum, public space, or cultural destination can actually be.”
Assessing the UAE as “an opportunity-rich, innovation-driven market”, Karan Kathuria, director of sales & business development at Renkus-Heinz, says: “The role of AV will only become more creative, strategic, and impactful.”
“AV in the UAE has evolved from a hardware-led installation
market into a highly experience-driven, design-centric ecosystem tied closely
to architecture, events, tourism, and national ambition,” she says. “Dreams are
big and time is the only answer for them to be executed, whether these are
large or medium scale.”
Dubai began its ascent in the 1990s, encouraging foreigners to base themselves in a tax-free climate while infrastructure was built around them. The last decade has seen the Kingdom of Saudi Arabia attempt to turn around its own oil tanker of an economy. Specifically, the KSA is rapidly expanding its audio-visual market as part of its national transformation strategy under Saudi Vision 2030.
Masdar City is a $22bn sustainable eco-city project being developed not only to rival California’s Silicon Valley but also to incorporate all aspects of city life. This includes an emphasis on narrow shaded streets, walkways and paths. Its orientation uses the cool night winds to reduce solar gain.
Nation to nation competition
“The UAE hasn’t been overshadowed so much as refocused,” argues Kathuria. “It’s
winning more project awards and doubling down on pragmatic, high impact
developments that create steady, real world demand for AV, immersive tech and
integrated digital experiences — a strategy that appeals strongly to project
owners, investors and tech partners alike.”
Saudi Arabia’s mega projects attract global attention, but the UAE maintains its position “through stability, execution capability and international connectivity,” argues Ar. “Rather than competing on scale alone, the focus here is on quality, innovation and timely delivery — and that is a fundamentally different value proposition.
“The UAE’s advantage has always been execution maturity,” Ar says. “Predictable procurement cycles, a deep and experienced integrator community, and projects that move from specification to commissioning without the delays that characterise more nascent markets. The current regional volatility tests that reputation, and it would be dishonest to say it creates no headwinds. But clients here have enough market experience to distinguish between regional turbulence and structural risk, and our pipeline reflects that.”
A focus on business operations
Dubai and Abu Dhabi continue to grow with a focus on business operations, entertainment experiences and global event readiness. “Dubai in particular is often perceived as ‘shiny’ because of its international scale and agility, and it truly leads in terms of ease of doing business and global tech partnerships,” says Singh.
Hickman suggests that, of the Emirates, Dubai leads in volume and visibility. “Dubai leads because of tourism and events infrastructure,” he says. “Abu Dhabi focuses on cultural prestige and government projects with a very different vibe.”
Netgear completed a full network infrastructure modernisation at Global Village Dubai, for one of the region’s largest cultural and entertainment destinations: a 10G fully redundant backbone, M4350 and M4250 switching across core, distribution and access layers, and managed wireless covering the entire park. “The scale and operational intensity of that environment made it a demanding brief, and one that reflects the complexity of projects increasingly coming through across the UAE,” says Ar.
However, as Ar points out, it would be a mistake to treat the UAE as synonymous with Dubai alone. “Beyond the two main centres, other emirates are steadily developing commercial and mixed-use projects, each at their own pace and with their own priorities. The result is a genuinely balanced national landscape rather than a single-city story.”
There are seven Emirates
Hickman elaborates, “Sharjah is more focused on education and smart-community
developments, and the northern emirates on eco-tourism and hospitality.
Economically, Dubai and Abu Dhabi account for the majority of projects, yet the
smaller emirates benefit from spillover investment and targeted incentives. AV
trends follow the same pattern: flashy, high-profile in Dubai; institutional
and immersive in Abu Dhabi.”
There are a number of hugely exciting projects on the horizon that have technology and visual experience sitting at their heart. Billed as “an oasis of extraordinary Disney entertainment at this crossroads of the world”, Disneyland Abu Dhabi will be “authentically Disney and distinctly Emirati” when it opens sometime between 2030 and 2033. Immersive destinations and experiences creator Miral, based in Abu Dhabi, is helping design the resort on Yas Island.
Echoing the scale of the 20,000-capacity Sphere in Las Vegas, Sphere Abu Dhabi promises to “re-define entertainment in the region” if and when construction begins. The country even plans a Harry Potter theme park from Warner Bros.
Twenty years after its announcement, the world’s largest Guggenheim is still awaiting its official launch party. Frank Gehry, now 96 years old and the original architect of Bilbao’s iconic art museum, has described this project as his “late masterpiece.”
Projects of scale
Abu Dhabi’s government wants to overtake Paris and London as the art capital of the world and has commissioned Zaha Hadid to design a new Performing Arts Centre, located alongside a version of the Louvre and the Guggenheim in the Saadiyat Cultural District. There is also the Zayed National Museum, designed by Foster + Partners with five lightweight steel wings, which opened its doors in December.
“Such plans will make meaningful additions to the UAE’s arts and culture scene, while much-publicised developments such as Disneyland and Sphere will secure the UAE’s spot on the world map when it comes to pioneering next-generation immersive entertainment,” says Tikriti.
“Immersive experiences at museums and cultural destinations are one of the most exciting sectors right now,” he adds. “There’s a strong desire to tell national and regional stories through contemporary immersive techniques, combining cutting-edge technology with deep cultural narratives to connect with younger, digital-native audiences.”
Sports business
The live events and sports technology vertical deserves particular attention in
the UAE context. The region has committed not just to hosting world-class
events but to using them as platforms for genuine technological innovation. The
Abu Dhabi Autonomous Racing League (ADRL) is a case in point. The debut event
at Yas Marina drew over 600,000 online viewers and required a network
infrastructure from Netgear capable of supporting real-time AI decision-making
across multiple autonomous vehicles simultaneously. “Here, the AV network is
not a support system for the event but integral to it,” says Ar.
Despite the cancellation of Grands Prix in neighbouring Bahrain and Saudi Arabia, Tabet notes “substantial investment directed toward the sports industry across the region, including football, e-gaming, and multi-sport arenas.”
A recent disguise highlight was powering the Phygital Games of the Future in Abu Dhabi, created and delivered by bright! Studios. “The event fused esports with physical sports in a live event that was broadcast to millions of fans across the globe, filled with jaw-dropping visual moments,” Tikriti says. “Disguise has further plans for expansion in the region which we will be sharing more on very soon.”
Other projects supported by Disguise in the region over the years range from projection mapping at the Burj Khalifa, Al Wasl Dome and the One&Only One Za’abeel hotel, to immersive digital experiences at Dubai Airport.
Infrastructure and residential build out
The 900km-long Etihad Rail passenger network is expected to launch its
first services this year across Abu Dhabi, Dubai, Sharjah and beyond. This is
the region’s first cross border rail network connecting 11 cities and towns
across the seven emirates. There are also major airport expansions (such as at
Al Maktoum International in Dubai).
Marsa Al Arab is a seafront resort being built on two artificial islands near the Burj Al Arab, focusing on tourism and luxury living, with plans for hotel and entertainment facilities. Masdar City is a $22 billion sustainable eco-city project in Abu Dhabi near the international airport. Debuting next spring, Wynn Al Marjan Island is being developed into an “opulent and entertaining beachside destination” in the port city of Ras Al Khaimah (capital of the RAK emirate).
While the Burj Khalifa remains Dubai’s most recognised symbol, Dubai Creek Tower, due to open this year, is intended to be a landmark on a par with the Eiffel Tower. Designed to surpass the Burj Khalifa in both height and symbolism, the tower is part of the Dubai Creek Harbour development of luxury living and shopping, one part of which was hit by an Iranian drone.
“These are all expected to support continued AV investment in transport, entertainment and smart-city environments,” Hickman says. On a smaller scale he notes the need for increased security control rooms with higher levels of IP decode/encode “as is the need for corporate clients to transmit video at low bandwidths at high quality to reduce networking costs whilst increasing national and international collaboration.”
Overall, the UAE’s successful diversification of economic development, cultivated organically over five decades across various sectors, has helped reduce its dependence on oil.
“This strategic shift positions the UAE favourably,” Singh says. “Consequently, the UAE is not under immediate economic threat from any neighbouring GCC [Gulf Cooperation Council] countries, particularly given its consistent announcement and execution of diverse projects. Its well-established and multifaceted economy offers a robust foundation for growth going forward.”
IBC
If there is a single takeaway from NAB 2026, it’s that the broadcast media industry is rebalancing. The centre of gravity is shifting, the customer base is diversifying, and the definition of “media” is expanding across sectors.
The International Association of Broadcast Manufacturers (IABM) reflected this by changing its five-decades-old brand to IAMT, standing for the International Association of MediaTech.
In its annual ‘State of Media Tech’ report unveiled at the show, the IAMT revealed that up to half of future revenue growth for its members is coming from parallel markets rather than traditional broadcast. These are verticals like banks, retailers and schools: “organisations that want broadcast-quality production capabilities without the infrastructure complexity that has historically come with them,” said Chris Evans, Head of Knowledge and Insight.
Show organisers capitalised on this by programming a series of ‘Enterprise Video Strategies’ sessions with the aim of bringing TV professionals together with heads of AV to learn lessons from each other.
Creator talent pipeline
With these shifts in mind, it was no surprise to see the creator economy out in full force at NAB this year. IAMT CTO Stan Moote describes a stratified ecosystem: from phone-first hobbyists and semi-professional “solopreneurs” to fully professionalised creators with managers, monetisation strategies, and studio grade workflows.
“We spoke with a manager who represents a hundred influencers,” Moote says. “He’s helping them professionalise – buying lights, cameras, tripods, editing tools. It’s like talent management in film or TV, just with a different pipeline.”
One example is the recent phenomenon of microdramas, where platforms hook people into stories with free episodes then attempt to convert them to subscribers to watch the full series. According to Natalie Jarvey, editor of Ankler Media’s Like & Subscribe newsletter, speaking at a panel on the topic, it is creators who are going to own the microdrama space.
Evans frames it as a continuum rather than a separate industry. “The idea that creators and media entertainment are two worlds is a misconception. It’s a talent pipeline. Their storytelling techniques such as hooks, pacing, and audience engagement are now influencing traditional media. And on the enterprise side, we’re seeing companies hire internal storytellers and content creators. It’s a multi-faceted ecosystem.”
AI: From hype to hard ROI
AI is no longer the abstract, speculative proposition it was in 2023. This year, the focus was on tangible, operational value.
“We’re seeing real AI, not just talk,” said Moote. “AI-driven sign language interpretation; AI orchestration and scheduling that saves money; AI repurposing linear news into vertical formats automatically. And on the business side, AI-powered churn management for streaming platforms.”
IAMT data backs this up: 66% of respondents cite AI/ML as a key technology priority, and 34% say it is the single most important. But buyers are demanding clarity.
“They want outcomes, not buzzwords,” said Evans. “How does this reduce power consumption? How does it reduce operating expenses? How does it increase content output with flat staffing? Those are the questions driving investment.”
Show launches include the latest version of editing assistant Eddie AI with Night Shift mode that turns raw footage into a rough cut while you sleep. In the same ballpark, Adobe was showing the Firefly AI Assistant which is a new agentic tool that can orchestrate complex, multi-step workflows across the company's entire Creative Cloud suite (Photoshop, Premiere, Lightroom, Express, Illustrator etc) as well as third party tools like Kling. As ever, the emphasis is on the creative driving the process while the AI chugs away at pulling the idea together from a variety of apps.
The challenge, though, is that competition is intense with rival vendors claiming to do broadly the same thing. There are plenty of companies, for example, using AI for metadata tagging to achieve similar results.
It’s no longer primarily about quality, because AI has effectively standardised that; quality is now almost assumed. Increasingly, differentiation might simply come down to cost and speed.
“AI is a huge topic everywhere right now,” confirms Hossein ZiaShakeri, SVP, Media and Entertainment, Spectra Logic. “From a storage perspective, the focus is really on how to enrich content so it can be better monetised. Ultimately, it comes down to one core question: how quickly can you find the right piece of content? If you can locate assets quickly, you can repurpose them, build new stories, and unlock more value. That’s the key driver behind a lot of the innovation we’re seeing.”
Immersive, cinematic sports broadcasting
The value of sports rights continues to soar, reckoned by S&P Global to be worth north of $67bn in 2026, along with fan interest in the live event. The pulling power of live broadcast will be self-evident this summer during the FIFA World Cup and was a clear trend at NAB as leading cinema camera makers fielded new versions fit for the stadium.
Chief among these, and arguably the headline grabbing deal of the show even though it was announced beforehand, was ARRI’s sale to Riedel Communications.
ARRI had already begun to expand into live production and, under Riedel’s parentage, that development will come on in leaps and bounds.
“The two companies are highly complementary,” explained founder and owner Thomas Riedel at NAB. “ARRI brings world-class camera technology, while we contribute expertise in transmission, both wired and wireless, network control, processing, video servers, and live production solutions.
“We won’t lose sight of ARRI’s legacy,” he added, “but expanding into live sport and entertainment is a natural next step, and it’s an area where we can really add value.”
On cue, ARRI announced new software features for the Alexa 35 Live camera and Live Production System LPS-1. These will be used at the 2026 Eurovision Song Contest in May, which will see 25+ Alexa 35 Live cameras used in partnership with Riedel.
Other cine-cam vendors Sony and RED are also developing their broadcast capabilities. At NAB, RED was showing the latest version of its Cine-Broadcast Module, which includes automatic record-triggering via external tally commands from the video switcher. This means R3D files are captured only for the portions of an event actually directed live, which should significantly cut post-production overhead and make it straightforward for editors to relink to the camera raw source.
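RED does not publicly document the module’s internals, but the tally-gated behaviour can be modelled conceptually: open a new clip when the program tally goes high, close it when the tally drops. The class and method names below are illustrative sketches, not RED’s API.

```python
from dataclasses import dataclass, field

@dataclass
class TallyGatedRecorder:
    """Conceptual model: a clip exists only while the switcher's
    program tally says this camera is taken live."""
    clips: list = field(default_factory=list)
    _recording: bool = False

    def on_tally(self, live: bool):
        if live and not self._recording:
            self.clips.append([])      # tally high: open a new clip
            self._recording = True
        elif not live and self._recording:
            self._recording = False    # tally low: close current clip

    def on_frame(self, frame):
        if self._recording:
            self.clips[-1].append(frame)  # frames land only in live clips

rec = TallyGatedRecorder()
for i, live in enumerate([False, True, True, False, True, False]):
    rec.on_tally(live)
    rec.on_frame(i)
# Two clips result: frames 1-2 from the first live take, frame 4 from the second
```

The payoff mirrored in the article: only the directed portions exist on disk, so editors relink to a small, already-curated set of raw clips.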
CBS Sports and NBC Sports have already used the broadcast module, while Fuse Technical Group has captured several concerts using RED cameras.
What started as a stylistic experiment has become the new visual language of sports broadcasting, where storytelling, emotion and image texture define the fan experience. As ARRI’s Corporate Development Expert Philip Durst put it, “The line between sports and cinema has all but disappeared: every frame now carries emotional purpose, and every production decision – lens choice, camera angle, colour tone – serves the larger narrative of human performance.
“What used to be a race for higher resolution and faster frame rates has transformed into a creative movement driven by Hollywood-style storytelling.”
That race has now gone to the next level: live immersive content.
On the showfloor, RED demonstrated an 8K live streaming basketball demo built around RED Connect and an output to an Apple Vision Pro. Live streamed immersive VR has been promised for a while now, but the pieces are beginning to fall into place.
Blackmagic Design made the biggest waves in this area by debuting the URSA Cine Immersive 100G, an immersive cinema camera with dual 8Kx8K RGBW sensors costing $26,495. A series of live Lakers games have already been captured in the format for viewing on Vision Pro.
“Live immersive production is here, and it’s extraordinary,” said Grant Petty, CEO, Blackmagic Design. “URSA Cine Immersive 100G makes it possible, and the images from this camera are just incredible to watch. It truly feels like you’ve been transported to the middle of the action. From sports to concerts, this opens up an entirely new world in live production.”
Camera tech to watch
Announced last year as a Kickstarter project and shipping from May 1 is the new Spark high-speed camera from Pixboom, claiming 4K at 1,000fps. This compact Super 35 global-shutter camera could give Phantom a run for its money in slow-motion work since, at €12,000, it costs many times less; the global shutter is useful for fast-motion sports or action movies, nature docs and the volumetric camera arrays where RED cameras dominate. A 12-bit RAW sensor that pushes dynamic range to 14 stops was announced at NAB, along with internal ProRes RAW recording that streamlines post into Resolve.
GoPro arrived with a new range of action cameras headlined by the €600 Mission 1 Pro which can shoot 8K resolution at 60fps or 4K at 240fps from a 50MP sensor. A Mission 1 Pro ILS version, due later in the year, will accept interchangeable lenses from a Micro Four Thirds mount. The company claimed this to be one of its most significant product launches ever, as it targets winning back prosumer market share from Chinese competitors like DJI (Osmo Action 5 Pro) and Insta360 (Ace Pro).
A new drone caught the attention. Touted as the world’s first all-in-one 8K 360 drone, the Antigravity A1 (which launched last year and is made by a subsidiary of Insta360) reinvents aerial filming with video-game inspired control. The operator of the A1 ‘views’ a first-person perspective through a pair of goggles and uses either head movements or gestures from a Wii-style hand control. It costs €1280 and weighs less than 250g.
Colour and edit tools are merging fast
Adobe debuted Colour Mode, which is built directly into Premiere’s editing interface, reducing the need for separate specialist (i.e. actual colourist) workflows. In Adobe’s words, this is a “completely new colour grading system” with a new operations system for grade management and dozens of built-in styles and modules. One of those offers complete cinematic looks applied in one click. In linking colour grading with editing, Adobe is competing with Blackmagic’s Resolve, which has offered this for a while. All the plaudits for Adobe’s move seem to come from creators rather than professional video editors, which is where its target market lies, for this initial launch at least.
Until now, the workflow between edit and grade (offline and online) required much back and forth between systems, people and facilities, but developments like Adobe’s are collapsing this. Imagen Video has gone a step further and made its AI-driven editing platform available within the colour grading tools in Premiere and Resolve. It offers AI Profiles trained on professional colour styles, with support for custom LUTs, and can deliver a baseline grade up to “10 times faster” than traditional manual methods, it claimed.
IBC
When cinematographer Blake McClure signed on to shoot HBO’s new comedy Rooster, he wasn’t looking to reinvent the sitcom; he was looking for a way to stretch the visual language of comedy to integrate the intimacy of large format.
In an arena where dialogue often dominates, camera grammar in comedy is usually expected to stay politely out of the way. What emerged on Rooster was a bold, large‑format aesthetic built around the Blackmagic URSA Cine 17K 65, a camera McClure initially doubted but ultimately used to redefine the show’s emotional texture.
“Comedy is driven by dialogue. It’s about timing, tone, the
speed of the joke. There’s not a lot of air between lines so there’s less room
for expressive camera language. You don’t get the lingering, the following of
characters, the visual storytelling you get in drama,” he explains.
McClure has balanced a career shooting comedy (Miracle
Workers, starring Daniel Radcliffe, Apple TV’s Loot and numerous
segments for Saturday Night Live!) with drama (Ryan Murphy horror Grotesquerie, The Dropout
starring Amanda Seyfried and psychological thriller Ratched).
“Rooster finally gave me the chance to merge those
two worlds,” he says.
Rooster stars Steve Carell as an author of pulp fiction who lectures at an Ivy League university where his daughter is a professor. It’s the latest show from creators and showrunners Bill Lawrence and Matt Tarses, whose back catalogue includes Scrubs, Spin City and Ted Lasso.
“While still a comedy at heart, Rooster is more character‑driven, blending humour with more dramatic and personal themes,” McClure says.
The show’s visual identity began with Dream Scenario,
a 2023 satire starring Nicolas Cage and shot in 16mm by DP Benjamin Loeb. McClure
and director Jonathan Krisel initially explored Super 16 to emulate its
contrast and texture but the idea quickly ran into practical and aesthetic
limitations.
“Our sets were tight. We couldn’t throw backdrops out of
focus. Super 16 has a deeper stop, so it wasn’t helping,” McClure recalls. “I
started thinking: what’s the exact opposite? Medium‑format 65mm.”
Large‑format portraiture—epitomised by Hoyte van Hoytema’s
work on Oppenheimer —had long fascinated McClure. “The emotional
closeness, the lack of distortion, the way longer lenses compress space without
pushing the camera back,” he says. “It felt right for Rooster’s
character‑centric storytelling.”
There was just one problem: 65mm is rarely feasible in
television. Multi‑camera setups, tight schedules, and data demands make it a
luxury. ARRI’s Alexa 65 had been used to shoot The Revenant, Barbie and Dune:
Part Two but was not an option for a TV budget.
But in Spring 2025 with Rooster in pre-production,
Blackmagic announced the release of the URSA Cine 65, a camera McClure admits
he initially dismissed.
“They’ve been very consumer‑oriented. I wasn’t sure it would
hold up to a demanding TV schedule,” he says. “But the idea of 65mm portrait
storytelling kept pulling me back.”
Camera placement in large format
The URSA Cine 65 didn’t just change the look; it changed how McClure approached position and framing.
“I did initial tests to see if I even wanted to use the
camera. Then I brought the director in for more tests. Finally, we did hair, makeup
and wardrobe tests representing the show’s tonal range. That’s where we figured
out focal lengths for over‑the‑shoulders. With the wide field of view,
foreground shoulders sometimes looked too close or made actors feel far apart,
so we often eliminated the foreground shoulder entirely and shot just inside
the eyeline.”
They always ran two cameras, and about 80% of the time had a
third. He shot with a full set of Camtec Falcons, mostly using a 55mm T1.3.
“Because the sensor is so big, I didn’t want to just use
longer lenses and push the cameras far back—that defeats the purpose,” he says.
“But being physically close created challenges with shadows and angles.
“We were also physically closer to the actors than they were
used to. They’d look at the camera and say, ‘Why are you shooting an extreme
close‑up?’ But it was actually a medium.”
For example, in scenes set in the college president’s office, he would sometimes put two cameras on the wall or cross‑shoot. “If an actor
stood up, our close cameras couldn’t tilt without shooting up their nostrils,
so we’d have a third camera higher up to maintain continuity. Sometimes that
third camera became a profile shot. We had to rethink placement constantly.”
The 8K workflow
The URSA Cine 65 can shoot 17K, but McClure settled on 8K,
balancing quality with practicality. Convincing HBO required transparency, but
he had an ally: British DP John Brawley had already used Blackmagic’s 12K
camera on Apple TV comedy Shrinking, which was also made by Bill
Lawrence’s production company Doozer Productions.
“It helped that DigitalFilm Tree was the same postproduction
facility [as Shrinking] so they were familiar with Blackmagic RAW.
Working with the URSA Cine 17K 65 wasn’t a huge leap for them. But studios
don’t want to hear ‘8K’ because of data costs,” McClure says.
Even capturing to the camera’s 8-terabyte media mags was off-putting: the hours it could potentially take to offload the footage would not be conducive to fast-turnaround TV schedules.
Instead, Blackmagic advised the team to use CFexpress cards
which hold about 1TB. “We used about 10 per camera and rotated them as the
studio cleared media for deletion.”
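As a rough illustration of why smaller cards suited the schedule, the offload-time gap between the two media options can be sketched. The sustained transfer rate used here is an assumption for illustration, not a figure from the production:

```python
# Offload-time sketch; the ~1 GB/s sustained transfer rate is an
# assumed round number, not a quoted figure from the production.
def offload_hours(capacity_tb: float, rate_gb_per_s: float) -> float:
    """Hours to copy a full card or mag at a sustained transfer rate."""
    return capacity_tb * 1000 / rate_gb_per_s / 3600

print(f"8 TB media mag:  {offload_hours(8, 1.0):.1f} h")    # hours per offload
print(f"1 TB CFexpress:  {offload_hours(1, 1.0) * 60:.0f} min")  # rotated quickly
```

At that assumed rate a full mag ties up a transfer station for over two hours, while a 1 TB card clears in well under half an hour, which is why rotating ten cards per camera keeps pace with a TV schedule.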
Also making an 8K workflow viable was McClure’s choice of
compression options offered by the camera.
“We tested all the compression settings and landed on Q3
which uses a variable bit rate to encode only the moving parts of the frame,”
he explains. “More specifically, Q3 allocates more data to areas of high detail
and less to static parts of the frame, reducing the overall data rate.
You don’t notice it visually, but it meant that our 8K files were actually
smaller than an Alexa 35 show I’d just done. This made it an easier sell to the
studio, since we weren’t asking them to approve a massive data storage
footprint for the DIT or post production. Plus, it held up beautifully in
grading.”
Delivery in 4K gave ample room for reframing in post. “We
shot 8K using the full width of the sensor within our 2:1 aspect ratio. You can
zoom into this camera 350% and it still holds up,” he says. “I’m fine with
punch‑ins if they serve the story.”
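The reframing headroom follows directly from the pixel counts. As a sketch, assuming nominal DCI rasters of 8192 pixels captured and 4096 delivered (actual sensor and delivery widths may differ slightly):

```python
# Nominal widths; assumed DCI rasters, not confirmed production specs.
CAPTURE_W = 8192   # 8K capture width
DELIVERY_W = 4096  # 4K delivery width

def source_width(zoom_pct: float) -> float:
    """Pixels spanning the frame after a punch-in of zoom_pct percent."""
    return CAPTURE_W / (zoom_pct / 100)

print(source_width(200))  # 4096.0 -> matches delivery width, pixel-for-pixel
print(source_width(350))  # ~2341  -> upscaled ~1.75x to fill the 4K frame
```

Up to a 200% punch-in the delivery frame is still pixel-for-pixel; beyond that, as in the 350% reframes McClure describes, the remaining pixels are upscaled, relying on the sensor’s detail to hold up.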
Lighting for naturalism
The script’s first line mentioned ‘New England fall colours’,
so that became the guiding idea for the look of the show, despite shooting exteriors
at The University of the Pacific’s Stockton campus near San Francisco and on
stages in Los Angeles.
“We pushed as much sunlight‑feeling light as possible—20Ks,
Molebeams, anything to create that fall atmosphere,” McClure says. “We embraced
blown [overexposed] windows. That allowed us to use exterior light as bounce
and fill, then augment with smaller sources inside.”
With colourist Josh Bohoskey, he referenced Dream
Scenario for its blown‑highlight, high‑contrast look. “Josh built LUTs that
brought the lifted shadows back down while keeping a slight lift. We also kept
the palette warm, dialling out blue computer‑screen hues, for example.
Production design was fantastic and set the foundation.”
He employed an LED colour contrast filter from Camtec. The Color-Con
consists of a diffusion filter surrounded by LEDs in a filter holder that
occupies two spaces in the mattebox. In addition to controlling colour and
contrast, the LEDs can create a controllable, directional hot spot. McClure
likens it to a modern version of the Lightflex that Freddie Francis BSC used on The
French Lieutenant’s Woman (1981), or the ARRI Varicon.
“With the sensor being so large, every lens naturally has a
bit of shading toward the edges of frame. The Color-Con filter exaggerates
that. I could focus more LED light on the centre of the filter. So it’s a
little bit brighter in the centre and that enhances the shading. Depending on
our stop, some shots almost look like there’s a hole punch while others are
very gentle.”
He explains, “It lifts the highlights and shadows, almost
like shooting through smoke without needing smoke. I’m obsessed with baking-in
as much of the look as possible; colour, tone, LUTs, everything.”
Lessons learned
Even now, finishing colour on the tenth and final Season 1 episode,
McClure is still learning from the format.
“The biggest lesson is: don’t drift back into old habits.
Sometimes I drifted back into lensing in instead of physically moving closer
but then you lose the charm of the format. Staying committed to that closeness
takes more time and sometimes means sacrificing coverage, but everyone agreed
the results were worth it.”
It’s a testament to how a comedy series became a proving
ground for large‑format storytelling.
“I wanted the show to feel like a portrait, intimate and
relationship driven. I’ve always loved medium format photography, and this was
the closest I’ve been to that in motion. The intimacy we got was worth every
bit of problem‑solving.
“It’s not the right camera for every project—just like
anamorphic or 65mm isn’t always right—but it now sits alongside Alexa and
Venice as a legitimate tool. It should be considered seriously.”
Country music to comedy
Born in Nashville, Tennessee, McClure got his first spark of
inspiration from an unlikely source, the Ernest movies (Ernest Goes to
Camp, Ernest Scared Stupid, etc.), because they filmed in and around his
neighbourhood. Seeing crews moving about and setting up big lights revealed the
world of filmmaking to McClure, who went on to attend and graduate from Watkins
Film School.
He worked as a PA on the Coen Brothers’ O Brother, Where Art Thou?, shot by Roger Deakins, and credits those two months on set with teaching him as much about cinematography as he learned in film school.
“When I first moved to Los Angeles, I had a reel full of country music videos. The first people I connected with happened to be comedy filmmakers, including Oz Rodriguez, who directed on Rooster. I started shooting shorts with that group, and as often happens in this business, one connection led to another. That was 15 years ago, and it built from there. I haven’t shot a music video since—but I’m shooting my first one since then this weekend, in Key West.”
IBC
Leveraging generative AI, computer vision, and data from real environments, spatial computing has opened the door for cutting-edge systems that blend the physical and digital worlds into a new frontier of human-technology interaction.
Marketed by Meta boss Mark Zuckerberg as the metaverse, a virtual playground populated by avatars, the next-gen internet is now being reconfigured around spatial computing with applications accelerated by AI.
“The metaverse didn’t die — we simply stopped using that
word,” says Rosemary Lokhorst, CEO and co-founder of XR developer Badass
Studios. “What we’re seeing now is the same idea evolving and becoming more
practical through spatial computing.”
For years, spatial computing – whether labelled VR, AR, MR,
or ‘the metaverse’ – has cycled through waves of hype and recalibration.
Recently something has shifted.
“AI is enabling spatial computing by solving problems that
seemed impossible just a few years ago—scene recognition, environmental
awareness, gesture understanding, natural language processing,” explains Neil
Trevett, president, The Khronos Group, and VP of developer ecosystems, Nvidia.
“These were previously hard research problems. Today, they are increasingly
productised capabilities.”
At the same time, spatial environments are becoming training
grounds for AI. Digital twins allow systems to learn how to interact with
complex, real-world physics and human behaviours.
“The result is a feedback loop. AI enables spatial
computing, and spatial computing enables AI,” says Trevett who describes the
metaverse simply as “spatial computing experiences where users are connected
together.”
Khronos develops open standards for 3D graphics, compute
acceleration, and AI. The technologies now overlap. “AI’s impact on spatial
computing is fundamental,” Trevett says. “In turn, spatial computing is
evolving into a natural user interface for AI, embedding intelligence directly
into the environment rather than confining it to a 2D screen.”
On a technical level spatial computing leverages
technologies like computer vision to create interactive 3D representations of
environments. By analysing visual data, computer vision interprets the geometry
and layout of physical spaces. According to Nvidia, other
technologies, such as Gaussian splats and NeRFs, enable the rapid
reconstruction of 3D scenes for visualisation and analysis. Generative AI can
transform 2D images into 3D animations, enhancing the integration of digital
content with the real world.
Take out the jargon, however, and spatial computing is really
about using technology in a way that mirrors how we experience the real world.
“It’s about creating an environment where you feel connected
to what’s happening around you and able to share that moment with others,” says
Lokhorst. “It’s location-based computing — technology that understands and
interacts with space.”
The idea behind the metaverse was similar: a
three-dimensional environment with depth and space where you can move around
and feel as though you’re actually there. One difference is that instead of
fantastical VR worlds experienced vicariously through animated proxies of ourselves
(the Ready Player One or Snow Crash version in popular culture,
which Zuckerberg bought into), the spatial internet is grounded in reality.
“What excites me most is how generative AI, computer vision,
machine learning, and AI agents work together,” Patrick Hadley, Sponsored AR
Product Leader at Snapchat told an audience at CES. Snapchat’s AR lenses are
used 8 billion times per day. That scale gives it a live testing ground for
what comes next.
“Think of spatial computing as the canvas, generative AI as
the paint, computer vision as the eyes, and ML as the technique,” he said. “Together,
they’re enabling entirely new experiences.”
Nonetheless, even Meta, which by some estimates has spent
$60 billion on attempting to build the metaverse, has pivoted to talk about
spatial computing.
“We’re building what we see as the next generation of the
internet—the spatial internet—where people can feel presence and togetherness
across devices and locations,” said Anne Hobson, Policy Lead for Metaverse
Products at Meta at CES.
Notably, Hobson is still in charge of ‘Metaverse Products’
like the Quest headset or Ray-Ban Meta glasses. “[These are] devices that blend
the physical and digital worlds,” she said. “They give AI a first-person view
of what you’re seeing in real time, making AI more useful in the moment.”
The global spatial computing market was worth $102.5 billion in
2022 and is projected to reach $469.8 billion by 2030, according
to some estimates.
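Those figures imply a compound annual growth rate of roughly 21% over the eight-year span, as a quick check shows:

```python
# Implied compound annual growth rate from the quoted market estimates.
start, end, years = 102.5, 469.8, 8  # $bn, 2022 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 21% per year
```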
Nonetheless, Meta has scaled back its ambition, focusing on
wearables as the interface to spatial computing rather than building the
metaverse itself. At the start of the year it shed
10% of jobs at Reality Labs with this new strategy in mind.
Other companies are stepping in to furnish the software
building blocks of the spatial internet. They are gathering data from real
environments, parsing that through Large Language Models (LLMs) to create
digital counterparts rendered in some cases using games engines.
Niantic Labs is one. Famous for designing mobile AR game Pokémon
Go and now owned by Saudi Arabian group Savvy Games, it is building a
shared coordinate system of the world for humans and machines. That means
reconstructing and understanding real-world spaces so headsets, drones,
robots—anything with a camera—can interact in real time.
“We’ve scanned over a million places worldwide and for us
that ground truth data is essential,” explained Azad Balabanian, product
Manager, Niantic Spatial at CES. “While generative AI is powerful, we can’t
over-index on fully synthetic outputs. For many enterprise applications you
need millimetre-level accuracy.”
Its geospatial model was showcased at a Super Bowl event in late
February, when Niantic Spatial enabled a physical robot and its digital
twin to share the same reality, viewable in real time on mobile phones. Because
the robot and phones were all localised to the environment, they all had the
exact same understanding of where they were in space.
“This demo demonstrated the next frontier of our work: AI
that understands the physical world,” the
company enthused. “We believe there is a significant, untapped potential
that is realised when AI moves beyond the screen and into our physical reality.
Our mission is to move past the idea of AI as a digital only tool by giving it
a sense of place.”
Another company fusing LLMs with real world physics is World
Labs. The startup is valued at over $5 billion by investors including Autodesk
and Nvidia. Its founder, Fei-Fei Li, talks about how ‘spatial intelligence’
plays a fundamental role in defining how we interact with the physical world
and of the challenge in designing computer simulations that mimic this.
“[We need] a new type of generative model whose capabilities
of understanding, reasoning, generation and interaction with the semantically,
physically, geometrically and dynamically complex worlds - virtual or real -
are far beyond the reach of today’s LLMs,” she believes. “The field is
nascent.”
But this research isn’t a theoretical exercise. Li says, “It
is the core engine for a new class of creative and productivity tools.”
Li is positioning Marble, World Labs’ virtual world building
tool, as integral to new immersive and interactive experiences. Just like the
vision for the metaverse this is conceived as a fully mapped 3D digital world
in which we all share.
“We’re approaching a future where stepping into fully
realised multi-dimensional worlds becomes as natural as opening a book,” she
argues. “Spatial intelligence makes world-building accessible not just to
studios with professional production teams but to individual creators and
anyone with a vision to share.”
Content producers are already busy operating in spatial
computing modes.
British firm Nexus Studios creates XR content for mobile
devices, such as for horror studio Blumhouse, and massive immersive screen
experiences at Las Vegas Sphere. It also creates multi-sensory experiences for
theme park rides, museums and gallery installations.
“We’re well-versed in both cinematic storytelling and what
we call spatial storytelling,” says Chris O'Reilly, co-founder and chief creative
officer. “These huge new screens are architectural-scale storytelling
environments. They’re not just screens you watch — they’re spaces you inhabit.”
The canvas of spaces like MSG Sphere allows creators like
Nexus to describe what they do as world-building. “You can render them as
planets, or be inside someone’s bloodstream. The challenge is ensuring your
artists don’t think of the space as just a large rectangle. Instead of framing
shots, you’re sculpting environments. Instead of showing people a story, you’re
letting them inhabit it.”
Badass Studios is already
building digital twins of sports like E1 racing and MMA, repurposing the data
into live AR overlays on the broadcast or virtual game simulations.
“Imagine watching tennis or
football in virtual reality,” Lokhorst says. “You could enter the stadium
virtually, choose your seat, and watch the match from anywhere. You might even
stand on the pitch during a penalty.”
Similar applications were
promised several years ago during the first metaverse hype and arrival of 5G.
“A lot has changed
technologically since then,” she says. “Compute power has increased, rendering
engines like Unreal Engine have improved dramatically, and
high-resolution environments are easier to transmit over the internet.
“AI has also accelerated
development. Where building a game environment once took about a year, we can
now do it in two to six weeks. For example, recreating a city like
Monaco or Miami might take two or three weeks.
“Today it’s becoming more
industrial and practical. Sectors like military training and healthcare
simulations have helped improve the underlying technology and infrastructure.”
Miniaturisation and comfort
Previous waves of XR were defined by bulky headsets and
niche gaming use cases, but the current phase is characterised by
miniaturisation and distribution.
Ziad Asghar, GM for XR and Personal AI at semiconductor
giant Qualcomm, said at CES, “We’re in the middle of a major transition—from
personal computing to mobile computing, and now to spatial computing. The
convergence of XR and AI is unlocking use cases that simply weren’t possible
before.”
Smart glasses, smartwatches, even earbuds with cameras “can
understand and interact with the world around you in ways a device in your
pocket cannot,” he said.
“But there are real challenges. You need incredible AI
processing on-device. You can’t send everything to the cloud. That means
best-in-class performance per watt, excellent connectivity, low power
consumption—and all in a tiny form factor. A smartphone battery might be 20
times larger than what fits in smart glasses, yet users expect the same experience.”
A solution is emerging out of stealth mode in Dubai. Xpanceo
is developing a smart contact lens designed to integrate XR, night vision and
optical zoom. A small companion device worn on the body handles processing and
wireless power transfer. The company describes the concept as an “invisible
computing platform” designed to replace screens altogether and also as a
“habitat for intelligence” where data, sensors, and human perception converge.
Founders Roman Axelrod and Dr. Valentyn Volkov will wear the
prototype at its first public demonstration at the beginning of 2027 (the
timing suggests CES).
Axelrod and Volkov call it the “after-glasses” era, telling Forbes that, if their team succeeds, the computer will no longer be a device we hold or wear. It will be something we look through, a living interface between biology and the digital world.