Wednesday, 15 April 2026

Spatial computing: “Instead of showing people a story, you’re letting them inhabit it”

IBC


Leveraging generative AI, computer vision, and data from real environments, spatial computing has opened the door to cutting-edge systems that blend the physical and digital worlds into a new frontier of human-technology interaction.

Marketed by Meta boss Mark Zuckerberg as the metaverse, a virtual playground populated by avatars, the next-gen internet is now being reconfigured around spatial computing with applications accelerated by AI. 

“The metaverse didn’t die — we simply stopped using that word,” says Rosemary Lokhorst, CEO and co-founder of XR developer Badass Studios. “What we’re seeing now is the same idea evolving and becoming more practical through spatial computing.”

For years, spatial computing – whether labelled VR, AR, MR, or ‘the metaverse’ – has cycled through waves of hype and recalibration. Recently, something has shifted.

“AI is enabling spatial computing by solving problems that seemed impossible just a few years ago—scene recognition, environmental awareness, gesture understanding, natural language processing,” explains Neil Trevett, president, The Khronos Group, and VP of developer ecosystems, Nvidia. “These were previously hard research problems. Today, they are increasingly productised capabilities.”

At the same time, spatial environments are becoming training grounds for AI. Digital twins allow systems to learn how to interact with complex, real-world physics and human behaviours.

“The result is a feedback loop. AI enables spatial computing, and spatial computing enables AI,” says Trevett, who describes the metaverse simply as “spatial computing experiences where users are connected together.”

Khronos develops open standards for 3D graphics, compute acceleration, and AI. The technologies now overlap. “AI’s impact on spatial computing is fundamental,” Trevett says. “In turn, spatial computing is evolving into a natural user interface for AI, embedding intelligence directly into the environment rather than confining it to a 2D screen.”

On a technical level, spatial computing leverages technologies like computer vision to create interactive 3D representations of environments. By analysing visual data, computer vision interprets the geometry and layout of physical spaces. According to Nvidia, other technologies, such as Gaussian splatting and neural radiance fields (NeRFs), enable the rapid reconstruction of 3D scenes for visualisation and analysis. Generative AI can transform 2D images into 3D animations, enhancing the integration of digital content with the real world.
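
For the technically curious, the core geometric step is simpler than the jargon suggests: a camera with known optics can turn per-pixel depth into 3D points in space. The minimal Python sketch below assumes an idealised pinhole camera; the intrinsics and depth values are illustrative only, not drawn from any vendor’s pipeline.

```python
# Minimal sketch: back-projecting a depth map into a 3D point cloud --
# the geometric step that lets computer vision recover the layout of a
# physical space from camera data. Assumes an idealised pinhole camera;
# all values below are illustrative.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (metres) into an Nx3 array of 3D points
    in the camera's coordinate frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical 480x640 depth frame with every surface 2 m from the camera.
depth = np.full((480, 640), 2.0)
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (307200, 3): one 3D point per pixel
```

Techniques like Gaussian splatting and NeRFs build on the same idea, inferring geometry and appearance from many overlapping camera views rather than a single depth frame.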

Take out the jargon, however, and spatial computing is really about using technology in a way that mirrors how we experience the real world.

“It’s about creating an environment where you feel connected to what’s happening around you and able to share that moment with others,” says Lokhorst. “It’s location-based computing — technology that understands and interacts with space.”

The idea behind the metaverse was similar: a three-dimensional environment with depth and space where you can move around and feel as though you’re actually there. One difference is that instead of fantastical VR worlds experienced vicariously by animated proxies of ourselves (the Ready Player One or Snow Crash version in popular culture, which Zuckerberg bought into), the spatial internet is grounded in reality.

“What excites me most is how generative AI, computer vision, machine learning, and AI agents work together,” Patrick Hadley, Sponsored AR Product Leader at Snapchat, told an audience at CES. Snapchat’s AR lenses are used 8 billion times per day. That scale gives it a live testing ground for what comes next.

“Think of spatial computing as the canvas, generative AI as the paint, computer vision as the eyes, and ML as the technique,” he said. “Together, they’re enabling entirely new experiences.”

Nonetheless, even Meta, which by some estimates has spent $60 billion attempting to build the metaverse, has pivoted to talk about spatial computing.

“We’re building what we see as the next generation of the internet—the spatial internet—where people can feel presence and togetherness across devices and locations,” said Anne Hobson, Policy Lead for Metaverse Products at Meta, at CES.

Notably, Hobson is still in charge of ‘Metaverse Products’ such as the Quest headset and Ray-Ban Meta glasses. “[These are] devices that blend the physical and digital worlds,” she said. “They give AI a first-person view of what you’re seeing in real time, making AI more useful in the moment.”

The global spatial computing market was worth $102.5 billion in 2022 and is projected to reach $469.8 billion by 2030, according to some estimates.

Even so, Meta has scaled back its ambition, focusing on developing wearables as the interface to spatial computing rather than building the metaverse itself. At the start of the year it shed 10% of jobs at Reality Labs with this new strategy in mind.

Other companies are stepping in to furnish the software building blocks of the spatial internet. They are gathering data from real environments and parsing it through Large Language Models (LLMs) to create digital counterparts, rendered in some cases using game engines.

Niantic Labs is one. Famous for designing the mobile AR game Pokémon Go, and now owned by Saudi Arabian group Savvy Games, it is building a shared coordinate system of the world for humans and machines. That means reconstructing and understanding real-world spaces so headsets, drones, robots—anything with a camera—can interact in real time.

“We’ve scanned over a million places worldwide and for us that ground truth data is essential,” explained Azad Balabanian, Product Manager at Niantic Spatial, at CES. “While generative AI is powerful, we can’t over-index on fully synthetic outputs. For many enterprise applications you need millimetre-level accuracy.”

Its geospatial model was showcased at an event during the Super Bowl in late February, when Niantic Spatial enabled a physical robot and its digital twin to share the same reality, viewable in real time on mobile phones. Because the robot and the phones were all localised to the environment, they had exactly the same understanding of where they were in space.
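
In software terms, that shared understanding boils down to each device resolving its own pose in a common world frame, after which positions can be converted freely between devices. The sketch below illustrates the principle with invented poses and a hypothetical helper; it is not Niantic’s actual API.

```python
# Minimal sketch of a shared coordinate frame: once the robot and each
# phone are localised against the same map, a point expressed in one
# device's local frame can be re-expressed in any other's. The poses
# below are invented for illustration; this is not Niantic's API.
import numpy as np

def pose(rotation_z_deg, translation):
    """Build a 4x4 rigid transform (device-local frame -> shared world frame)."""
    t = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

robot_to_world = pose(90.0, [2.0, 0.0, 0.0])   # robot's localisation result
phone_to_world = pose(-45.0, [0.0, 3.0, 1.5])  # phone's localisation result

# A point 1 m in front of the robot, in the robot's own frame (homogeneous).
p_robot = np.array([1.0, 0.0, 0.0, 1.0])

# Because the world frame is shared, converting via it lands the point
# correctly in the phone's frame.
p_world = robot_to_world @ p_robot
p_phone = np.linalg.inv(phone_to_world) @ p_world
print(p_world[:3], p_phone[:3])
```

In practice each device’s pose comes from visual localisation against a shared map, which is the hard part Niantic’s million-place scanning effort addresses.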

“This demo demonstrated the next frontier of our work: AI that understands the physical world,” the company enthused. “We believe there is a significant, untapped potential that is realised when AI moves beyond the screen and into our physical reality. Our mission is to move past the idea of AI as a digital-only tool by giving it a sense of place.”

Another company fusing LLMs with real-world physics is World Labs. The startup is valued at over $5 billion by investors including Autodesk and Nvidia. Its founder, Fei-Fei Li, talks about how ‘spatial intelligence’ plays a fundamental role in defining how we interact with the physical world, and of the challenge of designing computer simulations that mimic this.

“[We need] a new type of generative model whose capabilities of understanding, reasoning, generation and interaction with the semantically, physically, geometrically and dynamically complex worlds – virtual or real – are far beyond the reach of today’s LLMs,” she believes. “The field is nascent.”

But this research isn’t a theoretical exercise. Li says, “It is the core engine for a new class of creative and productivity tools.”

Li is positioning Marble, World Labs’ virtual world-building tool, as integral to new immersive and interactive experiences. Just like the vision for the metaverse, this is conceived as a fully mapped 3D digital world that we all share.

“We’re approaching a future where stepping into fully realised multi-dimensional worlds becomes as natural as opening a book,” she argues. “Spatial intelligence makes world-building accessible not just to studios with professional production teams but to individual creators and anyone with a vision to share.”

Content producers are already busy operating in spatial computing modes.

British firm Nexus Studios creates XR content for mobile devices, such as for horror studio Blumhouse, and massive immersive screen experiences at the Las Vegas Sphere. It also creates multi-sensory experiences for theme park rides, museums and gallery installations.

“We’re well-versed in both cinematic storytelling and what we call spatial storytelling,” says Chris O'Reilly, co-founder and chief creative officer. “These huge new screens are architectural-scale storytelling environments. They’re not just screens you watch — they’re spaces you inhabit.”

The canvas of spaces like MSG Sphere allows creators like Nexus to describe what they do as world-building. “You can render them as planets, or be inside someone’s bloodstream. The challenge is ensuring your artists don’t think of the space as just a large rectangle. Instead of framing shots, you’re sculpting environments. Instead of showing people a story, you’re letting them inhabit it.”

Badass Studios is already building digital twins of sports like E1 racing and MMA, repurposing the data into live AR overlays on the broadcast or virtual game simulations.

“Imagine watching tennis or football in virtual reality,” Lokhorst says. “You could enter the stadium virtually, choose your seat, and watch the match from anywhere. You might even stand on the pitch during a penalty.”

Similar applications were promised several years ago during the first wave of metaverse hype and the arrival of 5G.

“A lot has changed technologically since then,” she says. “Compute power has increased, rendering engines like Unreal Engine have improved dramatically, and high-resolution environments are easier to transmit over the internet.

“AI has also accelerated development. Where building a game environment once took about a year, we can now do it in two to six weeks. For example, recreating a city like Monaco or Miami might take two or three weeks.

“Today it’s becoming more industrial and practical. Sectors like military training and healthcare simulations have helped improve the underlying technology and infrastructure.”

Miniaturisation and comfort

Previous waves of XR were defined by bulky headsets and niche gaming use cases, but the current phase is characterised by miniaturisation and distribution.

Ziad Asghar, GM for XR and Personal AI at semiconductor giant Qualcomm, said at CES, “We’re in the middle of a major transition—from personal computing to mobile computing, and now to spatial computing. The convergence of XR and AI is unlocking use cases that simply weren’t possible before.”

Smart glasses, smartwatches, even earbuds with cameras “can understand and interact with the world around you in ways a device in your pocket cannot,” he said.

“But there are real challenges. You need incredible AI processing on-device. You can’t send everything to the cloud. That means best-in-class performance per watt, excellent connectivity, low power consumption—and all in a tiny form factor. A smartphone battery might be 20 times larger than what fits in smart glasses, yet users expect the same experience.”
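
To put rough numbers on that constraint, here is a back-of-envelope sketch of the performance-per-watt gap Asghar describes. The battery figures are illustrative assumptions, not Qualcomm specifications.

```python
# Back-of-envelope sketch of the smart-glasses power budget.
# All figures are illustrative assumptions, not Qualcomm specifications.
PHONE_BATTERY_WH = 19.0                        # ~5,000 mAh at 3.8 V
GLASSES_BATTERY_WH = PHONE_BATTERY_WH / 20     # the "20 times larger" claim
TARGET_HOURS = 8.0                             # all-day wear expectation

phone_power_w = PHONE_BATTERY_WH / TARGET_HOURS      # ~2.4 W sustained
glasses_power_w = GLASSES_BATTERY_WH / TARGET_HOURS  # ~0.12 W sustained

# To deliver the same experience, on-device AI must be roughly 20x more
# efficient per watt (ignoring display, radios and thermal limits).
print(f"phone budget: {phone_power_w:.2f} W, glasses budget: {glasses_power_w:.2f} W")
print(f"required efficiency gain: {phone_power_w / glasses_power_w:.0f}x")
```

That twenty-fold gap, rather than raw compute alone, is why silicon vendors frame the problem as performance per watt in a tiny form factor.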

A solution is emerging out of stealth mode in Dubai. Xpanceo is developing a smart contact lens designed to integrate XR, night vision and optical zoom. A small companion device worn on the body handles processing and wireless power transfer. The company describes the concept as an “invisible computing platform” designed to replace screens altogether and also as a “habitat for intelligence” where data, sensors, and human perception converge.

Founders Roman Axelrod and Dr. Valentyn Volkov will wear the prototype at its first public demonstration at the beginning of 2027 (the timing suggests CES).

Axelrod and Volkov call it the “after-glasses” era, telling Forbes that, if their team succeeds, the computer will no longer be a device we hold or wear. It will be something we look through, a living interface between biology and the digital world.

 
