Tuesday, 15 June 2021

Who Will Be the First to Develop a True Hologram?

NAB

Just as we refer to old movies as ‘silent’ or ‘black-and-white’, will we soon be calling today’s images ‘flat’? Holographic video techniques are being developed by some of the biggest brands in the business to transform the future of media and communication.

https://amplify.nabshow.com/articles/holograms-light-field/

“Our goal is to achieve what science fiction has promised us, and science has yet to deliver,” says Jon Karafin, CEO at holographic display developer Light Field Lab. “We are on the verge of making this happen and not just for entertainment – it will be ubiquitous.”

Holographic reality has been part of popular culture at least since Princess Leia stored her holo-selfie aboard R2-D2 in 1977 and the Holodeck began making regular appearances in Star Trek: The Next Generation from 1987. A century earlier, imaging pioneer Eadweard Muybridge was among the first to record subjects moving sequentially in still images, a stepping stone to the light field.

After all, why should digital creations remain trapped on 2D screens when everything we see, do and touch in the real world is three-dimensional?

While early iterations of the holodeck – a 3D representation of reality – will likely be viewed through the plane of an internet screen or a head-mounted display, true holograms will be glasses-free and free-viewpoint – able to be seen and interacted with from all sides.

The race is on to develop the hologram. Google, in particular, sees the creation of light fields, which it describes as a set of advanced capture, stitching, and rendering algorithms, as the solution.

“The key concept of light field rendering is that once you record all the rays of light coming into [a scene] you can use the pixel values and the RGB values of each image to create images from different perspectives and views where you never actually had a camera,” explained Paul Debevec, Google VR’s senior researcher.

“By sampling or interpolating information from the hundreds of recorded images, you can synthetically create camera moves – moving up and down, forward and back – every view you might want to view with six degrees of freedom.”
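
To make Debevec’s description concrete, here is a minimal sketch of the simplest form of light field view synthesis: blending the four recorded views nearest a requested viewpoint on a regular camera grid. The function and array names are illustrative, not any shipping API, and real systems add depth correction and far more sophisticated interpolation.

```python
import numpy as np

def render_view(camera_grid: np.ndarray, u: float, v: float) -> np.ndarray:
    """Synthesize an image at fractional grid position (u, v).

    camera_grid has shape (rows, cols, H, W, 3): one RGB image per camera.
    The four nearest recorded views are blended bilinearly -- the crudest
    form of light field interpolation, with no depth correction.
    """
    r0, c0 = int(np.floor(v)), int(np.floor(u))
    r1 = min(r0 + 1, camera_grid.shape[0] - 1)
    c1 = min(c0 + 1, camera_grid.shape[1] - 1)
    fv, fu = v - r0, u - c0  # fractional distance toward the next camera

    top = (1 - fu) * camera_grid[r0, c0] + fu * camera_grid[r0, c1]
    bottom = (1 - fu) * camera_grid[r1, c0] + fu * camera_grid[r1, c1]
    return (1 - fv) * top + fv * bottom

# Example: a 4x4 rig of small views; render a viewpoint between cameras.
rig = np.random.rand(4, 4, 270, 480, 3).astype(np.float32)
novel_view = render_view(rig, u=1.5, v=2.25)
```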

Filming in 360 degrees captures only one perspective on how different materials react to light. By capturing the intensity and direction of light emanating from a scene, light fields can produce motion parallax, realistic textures and subtle light changes, shadows and reflections.
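
For reference, the standard formalism behind that sentence is the two-plane light field of Levoy and Hanrahan: every ray is indexed by where it crosses two parallel planes, and a conventional photograph is what you get when the directional axes are integrated away.

```latex
% A static light field assigns a radiance to every ray, indexed by its
% crossings (u,v) and (s,t) of two parallel planes:
%   L(u, v, s, t)
% A conventional photograph integrates over the aperture plane (u,v) --
% exactly the directional information a flat image discards:
\[
  I(s,t) \;=\; \iint_{\mathrm{aperture}} L(u,v,s,t)\, A(u,v)\, \mathrm{d}u\, \mathrm{d}v
\]
```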

How to record light fields

There are a few ways to record light fields. One is to use a single camera with a lens array that filters all directions of light from a scene onto the sensor. Lytro came closest to commercializing this approach, debuting a giant 755-megapixel cinema camera in 2016. But the company folded two years later (its assets and some of its people transferred to Google).
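
As a rough illustration of what that lens array buys you (a toy example of the general principle, not Lytro’s pipeline): each block of sensor pixels under one microlens sees the same scene point from slightly different directions, so the raw sensor image can be resliced into a grid of sub-aperture views.

```python
import numpy as np

def split_plenoptic(sensor: np.ndarray, k: int) -> np.ndarray:
    """Reslice a raw plenoptic sensor image (H*k, W*k) into a 4D light
    field indexed as [dir_y, dir_x, y, x]: each k x k block of pixels
    under one microlens samples k*k slightly different ray directions.
    """
    hk, wk = sensor.shape
    h, w = hk // k, wk // k
    lf = sensor.reshape(h, k, w, k).transpose(1, 3, 0, 2)
    return lf  # lf[j, i] is the sub-aperture image for direction (i, j)

raw = np.zeros((256 * 4, 256 * 4), dtype=np.float32)  # k = 4
light_field = split_plenoptic(raw, k=4)               # shape (4, 4, 256, 256)
```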

You could also create a 3D model of a scene, which might be represented as a polygon mesh or a point cloud. Lidar is one route to this.

The current preferred technique is to use a plenoptic array of cameras. Microsoft has outfitted a number of Mixed Reality Capture Studios with camera arrays to record holographic video, although last November Intel quietly shuttered its giant 10,000-sq-ft volumetric capture stage in LA, which contained 100 8K cameras and a green-screen dome for creating 3D holograms for AR and VR videos.

Virtual production stages could readily be equipped with plenoptic camera arrays should light field capture advance, and there are suggestions the technique could be used for VFX (an application targeted by the Lytro camera). Not for nothing is The Mandalorian produced on ILM’s StageCraft Volume.

When is a hologram not a hologram?

There is debate about what constitutes a hologram. It is not, for example, the posthumous stage performances of dead musicians like Tupac and Michael Jackson, which are based on a theatrical sleight of hand known as Pepper’s Ghost.

In theory, holographic video, or holographic light field rendering, produces realistic three-dimensional images that can be viewed from any vantage point.

“A holographic display projects and converges rays of light such that a viewer’s eyes can freely focus on generated virtual images as if they were real objects,” says Karafin. “You will have complete freedom of movement and be able to see and focus on an object no matter the angle at which you view it. Everything you see is free from motion latency and discomfort. In the holographic future, there will no longer be a distinction between the real and the synthetic.”

Provided sufficient information about an object or scene is captured as data, it can be encoded and decoded as a hologram, he says. That means holographic content can be derived today from existing production techniques and technologies.

“Studios are already essentially capturing massive light field data sets on many productions. Some of this information gets used in postproduction, but when it is finally rendered as a 2D or a stereoscopic image, the fundamental light field is lost. It’s like creating a colour camera but only having radio to show it.”

The Immersive Digital Experiences Alliance (IDEA) was set up in 2019 to promote and develop immersive volumetric and light field media. Rivals and partners including Light Field Lab, OTOY, Looking Glass, Visby, Pluto, Cox and CableLabs are collaborating to devise standard technical specs for the interoperability of interfaces and exchange formats. It’s an important step toward developing infrastructure and agreeing on terms.

Earlier this month the group released a report that explores the various data formats and capture methodologies in use or proposed for live action capture. It also assesses the potential benefits and shortcomings of each.

“As display, capture, and network technologies improve, the future is ripe for a holographic media ecosystem,” declares Eric Klassen, executive producer at CableLabs, in IDEA’s newsletter.

Zoning in on telepresence

Holography has also been given a boost by the pandemic. Employers (and employees) want to cut down on ‘Zoom fatigue’ with a new approach to communications: holograms for the workplace.

Google’s Project Starline is an effort to create a video-chat system with screens that give participants three-dimensional depth, and Wired has been to test it out. Reporter Lauren Goode describes a 65-inch light field display in a booth equipped with more than a dozen different depth sensors and cameras.

“Google is cagey when I ask for specifics on the equipment. These sensors capture photo-realistic, three-dimensional imagery; the system then compresses and transmits the data to each light field display, on both ends of the video conversation, with seemingly little latency. Google applies some of its own special effects, adjusting lighting and shadows. The result is hyper-real representations of your colleagues on video calls.”

All of the data is being transmitted over WebRTC, the same open-source infrastructure that powers Google Meet, the company’s main video conferencing app.
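
For a sense of what that plumbing looks like, here is a minimal sketch using aiortc, an open-source Python implementation of WebRTC – a stand-in for whatever Google actually runs, with the signaling exchange (offer/answer delivery) deliberately omitted.

```python
import asyncio
from aiortc import RTCPeerConnection

async def main() -> None:
    pc = RTCPeerConnection()
    # One reliable, ordered data channel for compressed depth+color frames.
    channel = pc.createDataChannel("volumetric-frames")
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # In a real system: ship pc.localDescription to the remote peer via a
    # signaling server, apply their answer, then stream frames with
    # channel.send(frame_bytes) once the channel opens.
    await pc.close()

asyncio.run(main())
```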

Meanwhile, WeWork has announced a partnership with ARHT Media, a hologram technology company, to bring holograms to 100 WeWork buildings in locations including New York, LA and Miami.

According to the WSJ, customers will be able to record or live-stream three-dimensional videos for a virtual audience via videoconferencing, a physical audience at a WeWork, or a combination of both. The holograms are viewable on an ARHT Media HoloPod (an 8-foot-tall screen structure with a camera, microphone and projector), on a ‘HoloPresence’ (a screen meant to be used on a stage), or on a computer or tablet.

Earlier, in March, Microsoft introduced Microsoft Mesh, a “mixed-reality service”, which integrates 3D images of people and content into displays such as smart glasses.

Currently, video telepresence is limited to a single camera view that is slightly offset from the screen, so complex image-processing software is required to correct the offset eye gaze. But, according to Klassen of IDEA, light field capture would provide the data needed to engineer one-to-one, one-to-many and many-to-many social gatherings with more natural eye contact.
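
A hedged sketch of the idea (illustrative names, not IDEA’s or Google’s code): with a light field captured at the far end, gaze correction stops being an image-warping problem and becomes a matter of selecting the recorded view that lies along the line between the two participants’ eyes.

```python
import numpy as np

def gaze_aligned_view(camera_grid: np.ndarray,
                      eye_x: float, eye_y: float,
                      rig_w: float, rig_h: float) -> np.ndarray:
    """Return the recorded view nearest the tracked line of sight.

    camera_grid: (rows, cols, H, W, 3) array, one RGB image per camera.
    eye_x, eye_y: viewer's eye position on the screen, in meters.
    rig_w, rig_h: physical extent of the remote camera rig, in meters.
    """
    rows, cols = camera_grid.shape[:2]
    c = int(round((eye_x / rig_w) * (cols - 1)))
    r = int(round((eye_y / rig_h) * (rows - 1)))
    # Interpolating neighboring views (as in the earlier rendering sketch)
    # would give smoother motion parallax than snapping to one camera.
    return camera_grid[r, c]
```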

Light fields and the future of cinematography

While LED walls for virtual set production are all the rage right now, light fields could be their successor. A limiting factor with LED walls is that they display two-dimensional images: the system must track the camera and re-render the backdrop in perspective so that it doesn’t look flat, even though it is.

The DP can’t focus on the LEDs themselves without breaking the illusion. To avoid moiré patterns, the digital environments are often rendered with a soft focus. And there can be chromatic aberrations when shooting from the side.

“Light fields are the inevitable ultimate result of our effort to virtualize reality,” Scot Barbour, former VP of production technology at Sony Pictures Entertainment tells Light Field Lab’s blog.

“Where we are with LED walls is kind of like where video games were with Pong. That difference between Pong and triple-A games is where light fields can take us.”

He imagines some of the first uses for light-field display walls would be for blocking – to put digital characters into the set with actors, even if the characters would be recreated later.

Barbour continues, “If you’re wearing a headset and interacting in VR, that’s maybe halfway to what you could do in a light-field volume, and you could do it without any apparatus. You would have full six degrees of freedom, a full holographic image. It would feel real because to your eyes it is real. The game engines with real-time ray tracing will allow interaction with light fields, and light field displays will be the visualization mechanism. If you can synthesize light, there’s no greater realism, period.”

A DP could shoot without concern about moiré or chromatic aberrations, and without focusing on an LED screen. Says Epic Games’ business development exec Miles Perkins, “When you’re no longer focusing on the display but on the light, there would be a fundamental shift.”

In the same article, Magnopus co-founder Ben Grossman (Oscar nominee for the virtual production of The Lion King), goes so far as to suggest light field displays for home entertainment.

“The real value of light field displays is that if you’re going to make a film, once you have built up the light-field content, why not put LFD screens in people’s homes so they can experience the content in the same way as the way it was produced? To me, the idea of giving someone a 3D world at home is more interesting than giving someone a 2D facsimile of an amazing 3D world. Just giving the consumer a flat thing doesn’t realize the full potential of light-field displays.”

The laws of physics: Compression

There’s one fundamental problem when working with light fields: the massive data payload.

Streaming a light field would require broadband speeds of 500 Gbps up to 1 Tbps – something not likely in the next 50 years. Being able to work with so much data, let alone transmit it, requires serious compression.
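
A back-of-envelope check makes that figure plausible (assumed numbers for illustration, not the article’s): even a modest uncompressed camera rig lands in that range.

```python
# Raw bitrate of a modest light field rig: 100 cameras, 4K, 60 fps,
# 8-bit RGB, no compression.
views = 100
pixels = 3840 * 2160        # one 4K frame
bits_per_pixel = 24         # 8 bits per RGB channel
fps = 60

bits_per_second = views * pixels * bits_per_pixel * fps
print(f"{bits_per_second / 1e12:.2f} Tbps")  # ~1.19 Tbps, uncompressed
```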

A group at the standards body MPEG is drafting a means of enabling the “interchange of content for authoring and rendering rich immersive experiences”. It goes under the snappy title of ‘Hybrid Natural/Synthetic Scene’ (HNSS).

According to MPEG, HNSS should provide a means to support “scenes that obey the natural flows of light, energy propagation and physical kinematic operations”.

Light Field Lab has its own vector-based video format that it says will make it possible to stream holographic content over 5G.

MPEG is also developing Video-based Point Cloud Compression (V-PCC) with the goal of enabling avatars or holograms to exist as part of an immersive extended reality.

V-PCC is all about six degrees of freedom (6DoF) – fully immersive movement in three-dimensional space – the capability Hollywood studios believe will finally make virtual and blended reality take off.
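
The core trick in V-PCC is to turn the 3D problem back into a 2D one: project the point cloud onto image planes and hand the resulting depth and color maps to an ordinary video codec. Below is a toy, single-plane version of that idea – not the MPEG reference software, which segments the cloud into patches with per-patch projection directions and packs them into an atlas.

```python
import numpy as np

def project_to_maps(points: np.ndarray, colors: np.ndarray,
                    res: int = 256) -> tuple[np.ndarray, np.ndarray]:
    """Orthographically project points (N,3, coords in [0,1)) onto the
    XY plane, viewed from above. Returns a depth map and a color map,
    both res x res, ready to feed to a 2D video encoder.
    """
    depth = np.zeros((res, res), dtype=np.float32)
    color = np.zeros((res, res, 3), dtype=np.uint8)
    xy = (points[:, :2] * res).astype(int).clip(0, res - 1)
    for (x, y), z, rgb in zip(xy, points[:, 2], colors):
        if z >= depth[y, x]:      # keep the point closest to the viewer
            depth[y, x] = z
            color[y, x] = rgb
    return depth, color

pts = np.random.rand(10_000, 3).astype(np.float32)
rgb = np.random.randint(0, 256, (10_000, 3), dtype=np.uint8)
depth_map, color_map = project_to_maps(pts, rgb)
```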

Apple, which has an extensive augmented reality ecosystem in ARKit, is reportedly the chief driver behind V-PCC. Google has its own secret-sauce codec behind Project Starline that allows it to stream 3D video synchronously in both directions.

“Light field workflows are immature to say the least,” says Chris Chinnock at Insight Media. “There is a lot of work ongoing to try to develop some standards for an interchange format so each application and display is not a unique solution. We need new codecs and better distribution options with higher bandwidth.”

Building a new video

Creating content, let alone building holographic displays with sufficient fidelity and flexibility to be ‘walked around’, is a very long-term project.

Light Field Lab, the most bullish of the tech companies in this area, has yet to release a commercial product despite saying it would do so by 2020.

Digital camera maker RED aborted an attempt to bring a holographic capture and display ecosystem to market. Its first product, a smartphone with a pseudo-holographic display called Hydrogen One, was seemingly a pet project of founder Jim Jannard and has been abandoned since he stepped down in 2019.

Yet Hollywood has not given up. It may even be imperative to develop away from 2D video capture and presentation. As Paramount Pictures’ Ted Schilowitz says, the industry needs new ways to “build” video for the Metaverse. “Volumetric video broadly blends the idea of gaming-style spatial pixels with true video pixels,” he says. “Lots of companies are working on this.”

Former RealD and Sony Electronics imaging innovator Pete Ludé is now CTO at Mission Rock Digital, where he builds light field systems, and chair of the IDEA Group. He predicts that out-of-home holographic experiences will be among the first applications.

“Theme parks can definitely use these enhanced imaging technologies,” he says. “Some theme parks already use 3D glasses to see projected images, but ambient light degrades it and 3D glasses don’t work well. An emissive light field display solves all those problems.”

 
