Friday 30 August 2024

Gaming the system with photoreal CGI

AV Magazine


The ability to fuse real objects, people and environments with photoreal CGI is supercharging almost every part of every industry. Among the key technologies driving this are game engines, and the leader, by virtue of its creation of online games like Fortnite and its engine’s use in virtual production for film and TV, is Epic Games.

“We’re more than just a games company,” declared Epic’s business director for broadcast and live events BK Johannessen (who keynoted ISE 2023). “Unreal has broad adoption across industries from architecture to live events, training and simulation, manufacturing to advertising and beyond.”

Game engines are touted as the bedrock on which the digital future will be built. Epic’s vision is for one holistic platform that covers the entire pipeline from creating and visualising initial designs, through reviewing, testing, and training to creating marketing renders and photoreal product configurators.

Epic wants to house all of this on its platform. It says: “The data model of the future is to stay contained within a game engine where the features and tools that you need are already available without you having to switch between different software packages.”

Take automotive. While CAD capabilities have increased, inefficiencies remain. Vehicles are manufactured from thousands of parts, each of which needs to be created digitally before it goes into production, then recreated for the photorealistic touchscreen car configurator online, again for a TV commercial, and yet again at mobile-friendly resolutions. Managing these multiple assets and their reams of metadata creates even more inefficiency and introduces inaccuracies.

“Realtime visualisation”
Epic contends that there is a way to produce an asset once for use in every stage from design and validation through to marketing. Johannessen calls this “realtime visualisation” and says industry after industry is adopting it.

A game engine is not the right tool for every job. If you only need to generate static 2D images or don’t need realtime rendering, you’re best served by existing tools and workflows designed specifically for those purposes. For 3D data visualisation, realtime physics simulation, or recording a simulation as it runs, however, GPU-accelerated game engines are the most powerful option.

Adding artificial intelligence into the mix takes this to another level entirely.

Stephan Baier, associate partner at Porsche-owned consultancy MHP, says: “In the future, all product-related content will be generated on demand in a personalised way.”

Digital twins as a communications platform
A digital twin is a mathematically perfect representation, in digital space, of a physical object or an entity such as a city, together with all its variants. It is a 3D model, but one whose functions and processes are continuously updated with data.

“When live data from the physical system is fed to the digital replica, it moves and functions just like the real thing, giving you instant visual feedback on your processes,” says Epic Games. The digital twin can be used to calculate metrics like speed, trajectory, and energy usage.
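As a minimal illustration of the concept (not Epic’s implementation, and with a hypothetical telemetry schema), the sketch below shows a digital twin in Python that ingests timestamped samples from a physical system and derives the kind of metrics mentioned above, such as speed and energy use.

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class TwinSample:
    """One timestamped sample streamed from the physical asset (hypothetical schema)."""
    t: float              # seconds
    position: tuple       # (x, y, z) in metres
    power_draw_w: float   # instantaneous power in watts

@dataclass
class DigitalTwin:
    """Minimal digital twin: mirrors live telemetry and derives metrics from it."""
    history: list = field(default_factory=list)

    def ingest(self, sample: TwinSample) -> None:
        # Live data from the physical system keeps the replica in sync.
        self.history.append(sample)

    def speed(self) -> float:
        """Instantaneous speed (m/s) from the last two samples."""
        if len(self.history) < 2:
            return 0.0
        a, b = self.history[-2], self.history[-1]
        return dist(a.position, b.position) / (b.t - a.t)

    def energy_used_j(self) -> float:
        """Approximate energy use (joules) by integrating power draw over time."""
        total = 0.0
        for a, b in zip(self.history, self.history[1:]):
            total += 0.5 * (a.power_draw_w + b.power_draw_w) * (b.t - a.t)
        return total

# Example: feed two telemetry samples and read back derived metrics.
twin = DigitalTwin()
twin.ingest(TwinSample(t=0.0, position=(0, 0, 0), power_draw_w=120.0))
twin.ingest(TwinSample(t=1.0, position=(3, 4, 0), power_draw_w=140.0))
print(twin.speed(), twin.energy_used_j())   # 5.0 m/s, 130.0 J
```

In a production twin, the same derived values would also drive the visual replica inside the engine, so the feedback is graphical as well as numerical.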

Microsoft and Nvidia are building platforms that let companies work with malleable data. Users can change pieces of a process or workflow in realtime and view, via collaborative online platforms and VR, how those changes ripple across real-world scenarios.

Science and education
An example: surveying remote subterranean systems has always been a technically demanding job, and very few people ever get to experience these environments first-hand.

That changes if you can map cave systems with all the detail that geoscience demands. In the US, arguably the leader of this effort is Blase LaSala, a former cave technician for the National Park Service who is now the go-to person for 3D digital cave tours.

Using a LiDAR scanner, photogrammetry and Unreal, he scans caves and produces videos and VR experiences, both for scientific research and for virtual tours for the public. The LiDAR scans produce billions of data points, which are processed on supercomputers; the resulting geometry is then rendered with photoreal textures and dynamic global lighting in Unreal.
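The heavy lifting in a pipeline like this is turning raw scan data into geometry a game engine can ingest. A rough sketch of that step, using the open-source Open3D library rather than LaSala’s actual toolchain, and with hypothetical file names, might look like this:

```python
import open3d as o3d

# Load a LiDAR/photogrammetry point cloud (hypothetical file name).
pcd = o3d.io.read_point_cloud("cave_scan.ply")

# Downsample to keep reconstruction tractable; real scans hold billions of points.
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Normals are required for surface reconstruction.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30)
)

# Poisson reconstruction turns the point cloud into a triangle mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10
)

# Export to a mesh format the engine can import; textures and lighting are applied in-engine.
o3d.io.write_triangle_mesh("cave_mesh.obj", mesh)
```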

“I don’t have a computer science or 3D modeling background,” LaSala states. “All I know how to do is convert terabytes of data into a format Unreal can understand. Then it does everything else for me.”

Automotive
Electric vehicle maker Rivian equipped its R1T truck with sensors and fed the data into realtime rendered graphics, augmented with data from the vehicle’s original CAD model, to develop a Human-Machine Interface (HMI).

“There’s a lot of math involved to convert the data that you receive from the cameras into the engine,” says Eddy Reyes, Rivian’s in-vehicle experience software engineer. “We had to go through multiple iterations until we got it right.”
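As a generic illustration of the kind of math Reyes is describing (not Rivian’s code; the camera parameters below are invented), mapping a point seen by a calibrated camera into the engine’s world space comes down to applying the camera’s intrinsic and extrinsic matrices:

```python
import numpy as np

# Hypothetical pinhole camera intrinsics: focal lengths and principal point, in pixels.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: rotation and translation of the camera in the engine's world frame.
R = np.eye(3)                      # camera axes aligned with world axes (illustrative)
t = np.array([0.0, 1.5, 0.0])      # camera mounted 1.5 m above the origin

def pixel_to_world(u, v, depth_m):
    """Back-project a pixel (u, v) with a known depth into world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera space
    point_cam = ray_cam * depth_m                        # scale by measured depth
    return R @ point_cam + t                             # transform into world space

# A pixel at the image centre, 10 m away, lands 10 m in front of the camera.
print(pixel_to_world(640, 360, 10.0))   # -> [ 0.   1.5  10. ]
```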

Rivian showcased the R1T’s physics with the truck’s digi-double exhibiting precise tyre deformation as it bounds over rocks and roots in a digital simulation. This included ‘true-to-life’ independent air suspension that softens as the truck splashes through mud and puddles, with realistic fluid simulation and water rendering.

Volvo Cars is also using Unreal for developing its HMI, beginning with the Driver Information Module, one of the displays inside the cabin that provide the driver with relevant information and infotainment features. The electric flagship model Volvo will unveil later this year will be the first to contain the new graphics.

“When you bring interactive, high-resolution graphics running realtime into the car, you open the door to a vast range of new ways to inform and entertain everyone inside,” said Heiko Wenczel, Epic Games’ former director of automotive, now at Nvidia.

As vehicles become increasingly autonomous, their ‘digital brains’ will become as important as the frames from which they’re constructed. Data will be captured in realtime from computer vision systems (cameras, radar, LiDAR). While real-life testing won’t be replaced entirely, the thousands of hours of tests required to prove concepts in the real world can be accelerated by photoreal simulations.

“This is a race,” says Emmanuel Chevrier, CEO at AVSimulation, a Paris-based driving simulator company that works for Renault and BMW. “Customers must race to find the right asset, and the right software if they want to be the first to put their autonomous vehicles on the market.”

Architectural inspiration
Decades before computer-aided design, a 23-year-old Masters student named Moshe Safdie designed a novel, mixed-use community in Lego and submitted it to the 1967 Montreal World’s Fair.

“Lego was modular,” says Safdie. “It could be stacked and shifted in increments. It was working with that system that I designed Habitat.”

Safdie’s original design would have cost $45 million (equivalent to $450 million today) to build, but with a budget of only $15 million, Habitat was scaled back to less than half the planned size. It was built as a 158-unit housing complex at Cité du Havre, on the Saint Lawrence River in Montreal.

Fast forward to 2022 and Safdie’s company, Safdie Architects, realised the project in full, albeit in photoreal 3D.

“Many of the foundational principles that continue to advance architecture today can be traced back to Habitat 67,” says Safdie Architects’ senior partner, Jaron Lubin. “To be able to use the latest technology to demonstrate the potential of these ideas allows them to live on beyond the walls of our studio.”

Working with Australian creative agency Neoscape, the team flew a drone equipped with a camera and LiDAR to map the existing building. A second drone captured 4,136 high-res images of the structural details. These datasets, along with information from the original schematic drawings, were combined and processed to create an accurate digital model in Rhino and 3ds Max. In Unreal, elements such as trees, plants, and general set dressing were added.

The virtual Habitat, with its thousands of residential units, is built from more than 4.5 billion triangles. The assets are available for anyone to explore or to incorporate into a cinematic project.

Says Safdie: “This is exactly what we need to rethink how our cities are made. I hope that the idea that you could live somewhere like Habitat 67 helps advance people’s desire to have this realised.”

Digital Us
Creating one high-quality digital human is difficult and time-consuming. Scaling that to create many diverse digital humans is a formidable task indeed. Enter MetaHuman Creator, a cloud-streamed app that draws from a library of variants of human appearance and motion, and enables users to create convincing new digital human characters in minutes rather than months.

“You can populate a background scene with a big crowd of MetaHumans, or make a MetaHuman your centre-stage star,” Johannessen said at ISE.

Last year, luxury group LVMH presented Livi, its first virtual ambassador, developed using MetaHuman technology.

MetaHuman Animator, a feature of the software released in June, speeds the process even further. With a professional stereo helmet-mounted camera or just a standard tripod-mounted iPhone, users can reproduce any facial performance with the fidelity of an AAA game. The key is ensuring the system understands the unique anatomy of the actor’s face, and how that should relate to the target character.

The software’s timecode support means that the facial performance animation can be aligned with body motion capture (provided you’ve access to a mocap system), and audio to deliver a full character performance. It can even use the audio to produce convincing tongue animation.
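As a rough sketch of what timecode alignment involves, assuming SMPTE-style HH:MM:SS:FF timecodes at 24fps (this is not Epic’s API), the facial animation can be offset so that its frames line up with the body capture:

```python
FPS = 24  # assumed project frame rate

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an 'HH:MM:SS:FF' SMPTE timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offset(face_start_tc: str, body_start_tc: str) -> int:
    """Frames to shift the facial animation so it starts in sync with the body mocap."""
    return timecode_to_frames(body_start_tc) - timecode_to_frames(face_start_tc)

# Example: the face capture started 2 seconds and 3 frames before the body capture.
offset = align_offset("01:00:00:00", "01:00:02:03")
print(offset)   # 51 frames: shift the facial curves later by this amount
```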

Epic claims the tech enables digital human identities to be created from a small amount of captured footage, and that the process takes just a few minutes and only needs to be done once for each actor.

Future of concert visuals
Live streams of music festivals are nothing new, but at Coachella 2022, acts including Australian DJ Flume had their performances augmented with realtime 3D graphics.

Pre-designed photoreal AR graphics, including giant cockatoos and golden flowers, were composited live into the broadcast feed and streamed on YouTube using a combination of media servers running Unreal Engine, camera tracking data (from Stype) and the band’s timecode to automatically trigger graphic changes.
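A hedged illustration of the triggering idea (not the actual Coachella setup; the cue names and times are invented): a cue list keyed to the performance timecode is polled each frame, and any cue whose time has just been crossed fires a graphic change.

```python
# Hypothetical cue list: performance time (in seconds) -> AR graphic to launch.
CUES = [
    (95.0,  "giant_cockatoos"),
    (210.0, "golden_flowers"),
]

def due_cues(previous_time_s: float, current_time_s: float):
    """Return cues whose trigger time was crossed since the last poll."""
    return [name for t, name in CUES if previous_time_s < t <= current_time_s]

# Example poll loop (in production this would follow the band's master timecode).
prev = 0.0
for now in (94.5, 95.5, 209.9, 210.4):
    for cue in due_cues(prev, now):
        print(f"t={now:>6.1f}s  trigger AR graphic: {cue}")
    prev = now
```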

There were also eye-catching interstitials - including deforming doughnuts bouncing off the stage - that entertained audiences while they waited for the next song.

“Engagement went crazy on the stream when the birds came out,” reported Sam Schoonover, innovation lead at Coachella. “People weren’t sure if it was real or CG.”

This represents a turning point for live show visuals, he suggests, one that could serve as a template for future hybrid events and festivals in the metaverse.

“As online audiences grow, it’s crucial that digital events bring something unique to the table. But it’s also important to think about how new tech could affect the experience on site. We hope this project will usher in the next era, where AR glasses and virtual worlds allow fans to experience a completely new dimension of music.”

Eric Wagliardo, Live AR producer at Coachella, goes further: “The transition from 2D to 3D will be as revolutionary as the shift from mono to stereo.”

 

