Tuesday 5 October 2021

The Next Steps for Interactive Media and Mobile Edge

NAB

The next quantum leap in media production will come when 5G networks are not only ubiquitous but devices and applications are also connected to the mobile edge.

https://amplify.nabshow.com/articles/the-next-steps-for-interactive-media-and-mobile-edge/

Only then will breakthroughs in latency deliver on the promise of interactive and immersive experiences, such as game streaming, virtual reality, and in-venue experiences for live events.

“5G edge networks integrate cellular architecture with IT and cloud infrastructure to reduce end-to-end latency for a multitude of services and use cases,” explains Don Alusha, Senior Analyst, 5G Core & Edge Networks at ABI Research.

Its recent report suggests mobile 5G edge networks will unlock cloud video revenues totaling $67.5 billion by 2024, up from $5 billion in 2019.

Amazon has already launched infrastructure optimized for mobile edge computing applications. “Wavelength Zones” are AWS infrastructure deployments that embed AWS compute and storage services within data centers at the edge of the 5G network, so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network.

This avoids the latency that would result from traffic having to traverse multiple hops across the Internet to reach its destination. It enables customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks.
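To make the architecture concrete, here is a minimal sketch, using boto3 (the AWS SDK for Python), of how an application subnet might be placed in a Wavelength Zone and attached to a carrier gateway so that device traffic stays on the carrier network. The zone group, zone name, VPC ID, and CIDR block are illustrative placeholders, not values from the article, and route table setup is omitted.

# Minimal sketch (not production code): placing an application subnet in an
# AWS Wavelength Zone so 5G traffic stays on the carrier network.
# The VPC ID, CIDR block, and zone/group names below are illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt the account in to a Wavelength Zone group (example group name).
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",      # illustrative Wavelength Zone group
    OptInStatus="opted-in",
)

# Create a subnet inside the Wavelength Zone for the edge application servers.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",               # illustrative VPC
    CidrBlock="10.0.8.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",  # illustrative zone name
)

# A carrier gateway routes traffic between the Wavelength subnet and the
# carrier's 5G network, so device traffic never leaves the telco network.
cgw = ec2.create_carrier_gateway(VpcId="vpc-0123456789abcdef0")
print(subnet["Subnet"]["SubnetId"], cgw["CarrierGateway"]["CarrierGatewayId"])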

AWS Wavelength Zones are available in ten cities across the US on Verizon’s 5G network, in Tokyo and Osaka, Japan on the KDDI 5G network, in Daejeon, South Korea on SK Telecom’s 5G network, and in London on the Vodafone 5G network.

Wavelength also enables data processing tasks to be offloaded from 5G devices to the network edge, conserving device resources like power, memory and bandwidth. That offload makes applications like autonomous vehicles and smart factories possible.

AWS itself talks about Wavelength helping AR/VR applications reduce "motion-to-photon" latency below the 20 ms benchmark needed to offer a realistic customer experience.
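As a rough illustration of that benchmark, the sketch below totals an assumed motion-to-photon budget for an edge-rendered frame. Every component figure is an assumption for illustration, not a measured value from AWS.

# Back-of-the-envelope motion-to-photon budget for an edge-rendered AR/VR frame.
# All component figures are illustrative assumptions, not measured values.
radio_uplink_ms    = 4.0   # device -> 5G radio/edge (assumed)
edge_render_ms     = 8.0   # remote rendering in the Wavelength Zone (assumed)
radio_downlink_ms  = 4.0   # edge -> device (assumed)
display_scanout_ms = 3.0   # decode + display on the headset (assumed)

motion_to_photon_ms = (radio_uplink_ms + edge_render_ms
                       + radio_downlink_ms + display_scanout_ms)
print(f"{motion_to_photon_ms:.1f} ms (target < 20 ms)")  # 19.0 ms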

It could provide the ultra-low latency needed to live stream high-resolution video and high-fidelity audio, as well as to embed interactive experiences into live video streams.

Plus, the most demanding games can be made available on end devices with limited processing power by streaming them from game servers in Wavelength Zones.

When it comes to filmmaking, “future 5G versions will provide enough bandwidth for almost any kind of wireless data transfer — including raw video,” suggests Dave Shapton at RedShark News. “It should be possible to send video to an edge server from a camera, have it processed and receive it back in the camera within the space of a single frame.”

With almost no limit to the amount of processing that could be carried out in real time and fed back to the camera's viewfinder, this means VFX, computer lens correction, "super-sophisticated object detection for metadata creation and autofocus and even real-time interactions for shooting hybrid live and virtual action scenes" become possible, says Shapton. "Any process you can imagine – all taking place as if your camera itself had that processing power."
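As a sanity check on the "within a single frame" idea, the sketch below estimates how much time would remain for edge processing once a raw 4K frame has made the round trip over a hypothetical multi-gigabit link. The frame size and link rate are assumptions for illustration, not figures from Shapton.

# Rough check of the "within a single frame" idea: how much time is left
# for edge processing once a frame has been sent and returned over 5G?
# Bandwidth and frame-size figures are assumptions for illustration only.
frame_rate_fps  = 24
frame_budget_ms = 1000 / frame_rate_fps             # ~41.7 ms per frame

raw_frame_mbits = 4096 * 2160 * 12 / 1e6            # ~106 Mbit for a 12-bit 4K raw frame (assumed)
link_rate_mbps  = 10_000                             # assumed multi-gigabit 5G/edge link
transfer_ms     = raw_frame_mbits / link_rate_mbps * 1000

round_trip_ms   = 2 * transfer_ms                    # camera -> edge -> camera
processing_ms   = frame_budget_ms - round_trip_ms
print(f"frame budget {frame_budget_ms:.1f} ms, transfer {round_trip_ms:.1f} ms, "
      f"leaves {processing_ms:.1f} ms for edge processing")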

In filming scenarios, the camera could become the edge device. It’s an advance predicted by Michael Cioni, Global SVP of Innovation at Frame.io (now part of Adobe) as far back as 2013.

“Digital cinema production will take advantage of these supercomputer cameras and connect out to our increasingly cloud-connected world,” Cioni declared.

New devices will apply color LUTs on the fly to RAW footage and upload each take to cloud storage for everyone on the production to access, he said.

“Digital cinema is getting closer and closer to becoming a real-time process, one that’s handled close to the set,” he said.

Speaking at the HPA Retreat earlier this year (see below), Cioni said, "By 2031, a media card will be as unfamiliar as arriving today on set with a DV cartridge or DAT tape. You won't have removable storage from the camera. Camera tech will transition to become transfer systems to the cloud. It will take a decade [for RAW camera files] but the transition starts here."

Camera-to-cloud workflows are viable today over 4G networks offering 10 Mbps upload, provided the media is compressed to H.264/H.265. Anyone wanting to push camera RAW (Original Camera Files, or OCF) faces an uphill task: OCF requires more like 1,000 Mbps before it's reliable enough to move. OCFs are not only the largest data payload but also the least time-sensitive. Today, OCFs do not come directly from the cameras; rather, they are pushed to a local staging environment (on-set or near-set storage as part of the video village).
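A quick back-of-the-envelope calculation shows why that gap matters. The sketch below compares uploading a five-minute take as a compressed proxy versus as OCF over the 10 Mbps 4G uplink mentioned above; the proxy bitrate is an assumed figure.

# Illustrative comparison of uploading one 5-minute take as a compressed proxy
# versus as original camera files (OCF). Bitrates are assumptions/article figures.
take_seconds       = 5 * 60

proxy_bitrate_mbps = 8       # H.264/H.265 proxy (assumed)
ocf_bitrate_mbps   = 1000    # camera RAW data rate cited in the article

uplink_4g_mbps     = 10      # typical 4G upload cited in the article

proxy_mbits = take_seconds * proxy_bitrate_mbps
ocf_mbits   = take_seconds * ocf_bitrate_mbps

print(f"proxy: {proxy_mbits / uplink_4g_mbps / 60:.1f} min to upload over 4G")   # ~4 min
print(f"OCF:   {ocf_mbits / uplink_4g_mbps / 3600:.1f} hours over the same link") # ~8.3 hours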

Companies like Sohonet are working with studios to install ultra-high-speed network connections that enable digital imaging technicians (DITs) to upload OCF right from set. Currently, those transmissions can't be done wirelessly because wireless networks still lack the appropriate bandwidth.

The consensus is that we're five to seven years away from average on-set bandwidth being suitable for RAW transfers to the cloud, with shooting OCF to the cloud becoming the norm by 2031.
