Tuesday, 6 April 2021

Camera to Cloud: What It’s Going to Take to Get Us There

NAB Amplify

A live camera-to-cloud production workflow was the climax of HPA Tech Retreat 2021, demonstrating just how promising the technology is and how far it still has to go.

https://amplify.nabshow.com/articles/camera-to-cloud-workflows-what-its-going-to-take-to-get-us-there/

“Where last year we showed camera to cloud as proof of concept, this year we’ve been working to make the system more robust for field cloud acquisition,” explained Michael Cioni, global SVP of innovation at Frame.io, at the event.

To make the point, Cioni was literally in a field, organising the live shoot of footage intended to be inserted into an updated version of last year’s HPA Tech Retreat experimental film The Lost Lederhosen. The entire workflow was run live over the course of three hours, from location into post, through edit, VFX, audio and color treatments, to final deliverable, all in the cloud.

The demo was shot using a RED Helium 8K camera, which automatically triggered the capture of proxy files encoded in H.264 by a Teradek Cube 655. These files, which included timecode and ‘hero’ metadata, were transmitted instantly to Frame.io’s cloud platform. Simultaneously, audio captured on a Sound Devices 888 portable mixer-recorder was uploaded to the cloud via a hotspot provided by a NETGEAR LTE modem. The audio files were uncompressed original .wav files, timecode-jammed with the camera proxies for syncing in post.
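For readers unfamiliar with timecode jamming, the sketch below shows, in rough terms, how proxy clips and audio files that share a jammed timecode can be paired in post. The file names, frame rate and metadata layout are invented for illustration; they are not details of the Frame.io or Sound Devices implementations.

```python
# Illustrative only: pairing proxy clips and audio files by jammed timecode.
# Clip names, the frame rate and the metadata layout are assumptions, not
# details from the HPA demo.

FPS = 24  # assumed project frame rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' timecode into an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Hypothetical proxy and audio metadata, both jammed to the same timecode.
proxies = [{"clip": "A001_C003.mp4", "start": "14:32:10:05", "frames": 480}]
audio   = [{"file": "SC12_TK03.wav", "start": "14:32:09:12", "frames": 600}]

def overlaps(a: dict, b: dict) -> bool:
    """True if two clips share any part of their timecode range."""
    a0 = tc_to_frames(a["start"])
    b0 = tc_to_frames(b["start"])
    a1, b1 = a0 + a["frames"], b0 + b["frames"]
    return a0 < b1 and b0 < a1

for p in proxies:
    for a in audio:
        if overlaps(p, a):
            print(f"{p['clip']} syncs with {a['file']}")
```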

Production company Origin Point was next in the chain, and the demo showed, without smoke and mirrors, the proxy files immediately available in an Adobe Premiere panel so the creative process could begin.

The demonstration, organised by the studio-owned think tank MovieLabs, is a marker on the road to its vision of moving production to the cloud by 2030.

“Many people walked away from last year’s Tech Retreat thinking 2020 is the new 2030,” said Leon Silverman, a senior advisor to groups including the HPA and MovieLabs. “While on the surface there is a lot of cloud capability, and a lot of work has been done in cloud-connected collaborative production, there is a lot of vital foundational work that needs to be created before this vision becomes a practical reality.”

The HPA Supersession highlighted two huge gaps. The first is that while proxy video can be transmitted near-instantly to the cloud, the higher bit-rate 4K Original Camera Files (OCF) take considerably longer to move.

“OCF does not transmit from the camera today but this will change,” Cioni said. “By 2031 a media card will be as unfamiliar as arriving on set today with a DV cartridge or DAT tape. You won’t have removable storage from the camera. Camera tech will transition to become transfer systems to the cloud. It will take a decade but the transition starts here.”

Another piece of the puzzle due for rapid change is mobile broadband. “As it stands today, in most areas 5G is weaker than 4G,” Cioni said. “That’s a concern for now, but 5G’s performance will skyrocket as it is rolled out. In the meantime, a cellular-bonded device like the one used here, which bonds networks from AT&T, Sprint and T-Mobile, delivers great performance, but we have to download. For some perspective, uploading 4K OCF to the cloud seemed unobtainable a few years ago and now we’re using cell phone technology to transmit it.”

The HPA demo had to offload OCFs to a laptop connected wirelessly to the cellular-bonded hotspot and push them to the cloud from there.
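To put rough numbers on that gap, here is a back-of-envelope calculation. The file sizes and uplink speed are assumptions for illustration, not figures from the demo.

```python
# Back-of-envelope arithmetic, with assumed figures rather than measurements
# from the demo: why an H.264 proxy reaches the cloud almost immediately while
# the 8K original camera file lags far behind.

def upload_minutes(file_gb: float, uplink_mbps: float) -> float:
    """Minutes needed to push file_gb gigabytes over an uplink_mbps link."""
    return (file_gb * 8 * 1000) / uplink_mbps / 60

proxy_gb = 0.4   # ~10 Mb/s H.264 proxy of a five-minute take (assumed)
ocf_gb   = 20.0  # 8K OCF for the same take, a rough assumption
uplink   = 30.0  # Mb/s, a plausible bonded-cellular uplink (assumed)

print(f"Proxy: {upload_minutes(proxy_gb, uplink):.1f} min")  # roughly 2 minutes
print(f"OCF:   {upload_minutes(ocf_gb, uplink):.1f} min")    # roughly 90 minutes
```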

The second leap that needs to be made is the ability for a production to access media assets seamlessly regardless of which cloud they are stored on. This would fulfil MovieLabs’ principle of moving creative applications to the data, and not the other way around.

Right now, though, different facility and technology vendors each have a preferred cloud partner. In the HPA model, Frame.io was used to hop between them.

To elaborate, in the HPA demo media was uploaded first to AWS on the U.S. East Coast and then accessed from three further cloud regions. Avid instances of Media Composer and Pro Tools sharing the same Nexis storage, running on Microsoft Azure on the U.S. West Coast, were required by Skywalker Sound. Adobe Premiere was used by Origin Point on a virtual workstation provided by Bebop running on a Google Cloud instance located in Amsterdam. Another Google Cloud instance in the LA region was used for VFX (Mr. Wolf) and color conform (at Light Iron).

“The workflow essentially means logging into one cloud, downloading the media and uploading it back to Frame.io to pass on to the next stage in the pipe,” said Mark Turner, Program Director, Production Technology at MovieLabs.

“It’s hardly efficient and generated a lot of egress, but we have made it work without using hard drives.”

Data movement throws up many of the same problems as physically moving media between facilities, he explained. There is more chance of introducing errors, media loss or loss of metadata, plus it adds delay. There is also a chance of confusion, both human and machine, whenever anything moves, along with a risk of version duplication and greater gaps in security. And since most cloud providers charge for egress on most storage tiers, any data movement incurs a fee.
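As a rough illustration of what that hop pattern costs, the sketch below estimates egress fees for moving the same material out of each cloud in the chain. The per-gigabyte rate and the asset sizes are assumptions; actual pricing varies by provider, region and tier.

```python
# Illustrative only: the "download, then re-upload" hop reduced to an egress
# cost estimate. The per-GB rate and asset sizes are assumptions; real cloud
# pricing varies by provider, region and storage tier.

EGRESS_PER_GB = 0.09  # USD, a typical published internet-egress rate (assumed)

# Each stage pulls media from Frame.io's AWS bucket, works on it elsewhere,
# and pushes results back, so the same gigabytes cross cloud boundaries again.
hops = [
    ("Frame.io (AWS US East) -> Avid on Azure US West (sound)",       500),
    ("Frame.io (AWS US East) -> Premiere on Google Amsterdam (edit)", 500),
    ("Frame.io (AWS US East) -> Google LA (VFX and color)",           500),
]

total = 0.0
for label, gigabytes in hops:
    fee = gigabytes * EGRESS_PER_GB
    total += fee
    print(f"{label}: {gigabytes} GB moved, ~${fee:.0f} egress")

print(f"One pass through the pipeline: ~${total:.0f} in egress alone")
```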

“The 2030 vision avoids these problems because there is a single source of truth,” Turner said. “We expect workflows to continue to span multiple cloud infrastructures, and we encourage that choice, but the crucial point is that sharing work across various clouds should be seamless so it acts like one big cloud.”

MovieLabs may have called its vision 2030, but it fully expects the industry to rally round and exceed those expectations.

“There are parts of it we can do today and parts that, with cooperation, we think we can do much sooner. We should be thinking not 2030 but how can we get it to 2025 or 2022?”

Common visual language

As the industry coalesces around the cloud, MovieLabs also thinks it would be a good idea to have a common language for communicating ideas.

“What we’re trying to do is come up with a language to express workflow and communicate that to other people,” explained Jim Helman, MovieLabs co-founder and CTO. “This would be used by our member studios and by the industry to do workflow diagrams, dashboards and app development. We will make sure the language is open, flexible and has all the resources needed for widespread adoption.”

MovieLabs’ visual language is a set of shapes, lines and icons that describe the key concepts of the 2030 vision for film and TV production. The four basic concepts are participants (an actor, a cameraperson or a director), tasks (filming, setting up lighting, writing a script), assets (original camera files, a script or stills for reference) and contexts (what it is that is happening, a given shot or take, for example). The visual language also expresses the relationships between these groups by way of arrows.
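To make the four concepts concrete, here is a minimal sketch of how they might be modelled in data. The class and field names are illustrative guesses, not MovieLabs’ published schema.

```python
# A minimal sketch of how the four visual-language concepts might be modelled
# in data. Class and field names are illustrative guesses, not MovieLabs'
# published ontology schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Participant:   # an actor, a cameraperson, a director...
    name: str
    role: str

@dataclass
class Asset:         # original camera files, a script, reference stills...
    name: str
    kind: str

@dataclass
class Context:       # what is happening: a given scene, shot or take
    scene: str
    take: int

@dataclass
class Task:          # filming, setting up lighting, writing a script...
    name: str
    performed_by: Participant
    uses: List[Asset] = field(default_factory=list)      # assets consumed
    produces: List[Asset] = field(default_factory=list)  # assets created
    context: Optional[Context] = None                    # the "arrows" live here

# Example: a camera operator filming scene 12, take 3, producing an OCF.
operator = Participant("J. Doe", "camera operator")
ocf = Asset("A001_C003.R3D", "original camera file")
shot = Task("filming", operator, produces=[ocf], context=Context("12", 3))
print(shot)
```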

It’s not a million miles away from any workflow diagram in any PowerPoint deck, but MovieLabs aims to simplify and unify the iconography so that different parties can understand workflows consistently and more easily.

Of course, some systems at a higher level must be general, while others will be specific to projects or divisions within projects. That’s why MovieLabs is talking about multiple connected ontologies rather than a single all-encompassing one.

Ongoing work includes creating a sound asset terminology to support a sound file naming specification, and camera metadata mapping so that there can be some consistency when searching camera files in databases.

Common security framework

Traditionally, production workflows happen on a facility’s own infrastructure, or on a hybrid cloud controlled by the facility, protected by a secure perimeter. When production moves to the cloud, this becomes a cloud resource shared outside the facility infrastructure and by everyone working on productions.

“This means workflows happen outside of a secure perimeter,” warned Spencer Stephens, SVP of Production Technology & Security at MovieLabs. “You could attempt to throw a secure perimeter around the cloud and every vendor and person working in it, but that would be extraordinarily complex, and complexity is the enemy of security.”

MovieLabs’ conclusion is that production in the cloud requires a new approach. Among its principles is that security should be intrinsic to the workflow, not an add-on. It must be designed to secure cloud-based workflows, not the infrastructure they run on (i.e. assets are secured, not the storage they live on).

“Our model describes an authenticated participant carrying out an authorised task on a trusted device, using an approved application, on a protected asset,” Stephens explained. “It’s a zero-trust architecture. It does not require the infrastructure to be secure and it protects the integrity of the workflow. Nothing can take part in any workflow unless authenticated, and no one can take part in a particular workflow unless authorised to do so. All users, devices, software, everything must be authenticated before they can join the production.

“It is also scalable. A Hollywood movie might turn the dial up to 10 and want individual assets to be encrypted, while a reality TV show might be content with security at 3 and use access controls. It’s really a matter of risk tolerance.”
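The sketch below reduces that zero-trust check to code: every element of a request, participant, device, application, task and asset, must appear in the production’s policy before access is granted. The names and policy structure are invented for illustration, not MovieLabs’ actual security specification.

```python
# A minimal sketch of the zero-trust check described above: an authenticated
# participant, on a trusted device, using an approved application, performing
# an authorised task on a protected asset. The names and policy structure are
# invented for illustration, not MovieLabs' security specification.

from dataclasses import dataclass

@dataclass
class Request:
    participant: str
    device: str
    application: str
    task: str
    asset: str

# A per-production policy: nothing is trusted unless it appears here.
POLICY = {
    "authenticated_participants": {"editor@originpoint"},
    "trusted_devices": {"bebop-ws-04"},
    "approved_apps": {"premiere"},
    "authorisations": {("editor@originpoint", "edit", "scene12_proxy.mp4")},
}

def allowed(req: Request, policy: dict) -> bool:
    """Every check must pass; failing any single one denies access."""
    return (
        req.participant in policy["authenticated_participants"]
        and req.device in policy["trusted_devices"]
        and req.application in policy["approved_apps"]
        and (req.participant, req.task, req.asset) in policy["authorisations"]
    )

req = Request("editor@originpoint", "bebop-ws-04", "premiere",
              "edit", "scene12_proxy.mp4")
print(allowed(req, POLICY))  # True only because every element was pre-approved
```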

Virtual production meets cloud

Cloud-enabled virtual production was another major plank of the HPA agenda.

“Everything in the future is going to be previs, techvis and virtual production,” said Erik Weaver, a senior consultant at the Entertainment Technology Center at USC. “Understanding things like a virtual camera will be a critical component for a cinematographer in the future. They will want to be able to see the magic angle they’ll potentially get on set, which is different from not being able to visualize anything with a green screen.”

Solstice Studios CTO Edward Churchward said, “Over the last year, we’ve seen a real reduction in the cost level. Many small productions on lower budgets are now able to avail themselves of virtual production efficiencies.”

Final pixel virtual set finishing

Camera tracking specialist Mo-Sys discussed a workflow that delivers higher graphics quality combined with a real-time virtual production workflow. Technology such as game engines combined with the cloud means final-pixel rendering is possible on set, removing the guesswork of fixing it in post and eliminating much of traditional post altogether.

“Realtime rendering is so good it is often good enough for finishing,” said Mike Grieve, Commercial Director at Mo-Sys. “Removing post-production compositing from the pipeline in order to save time and money is surely the key driver for using virtual production.”

But there’s a dilemma. “As good as the graphics systems are, they are still governed by time, quality and cost,” he said. “How do you increase virtual graphics quality without substantially increasing the cost on set, or delaying delivery by using post-production?”

The Mo-Sys solution is a dual workflow in which graphics are rendered once on set in real time and again in the cloud using full-quality keys and composites, before the results are returned to set within a few minutes for review.

“We are not a post killer,” insisted Grieve. “It will leave post to focus on the creative task, not repetitive tasks that should be automated.”

What a production ends up with is a final-pixel virtual production workflow combined with a post workflow that improves on-set quality without incurring extra cost or time.

Mo-Sys’s James Uren added that the pipeline will offer the flexibility of deciding whether the render is required in five minutes or five hours. “We have a working prototype on the bench and are in the process of deploying it to the cloud,” he said.
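Conceptually, the dual workflow looks something like the sketch below: a real-time render is shown to the crew immediately, while the same shot and its tracking data are queued for a full-quality cloud re-render with a chosen deadline. Function and field names are invented for the example; this is not Mo-Sys’ API.

```python
# Illustrative only: the dual-render idea reduced to a small Python sketch.
# Function and field names are invented; this is not Mo-Sys' API.

import queue
import time

cloud_jobs: "queue.Queue[dict]" = queue.Queue()

def render_on_set(shot: str) -> str:
    """Stand-in for the real-time render shown to the crew immediately."""
    return f"{shot}_onset_preview.mov"

def submit_cloud_rerender(shot: str, camera_track: str, deadline_minutes: int) -> None:
    """Queue the same shot for a full-quality key and composite in the cloud."""
    cloud_jobs.put({
        "shot": shot,
        "camera_track": camera_track,          # tracking data reused for the re-render
        "deadline_minutes": deadline_minutes,  # five minutes or five hours
        "submitted": time.time(),
    })

preview = render_on_set("sc12_tk03")
submit_cloud_rerender("sc12_tk03", "sc12_tk03_track.fbx", deadline_minutes=5)
print(preview, "queued cloud jobs:", cloud_jobs.qsize())
```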

This process works with blue and green screen, where extracting the key is straightforward, but applying the dual render to an LED volume where you are shooting final pixels is another matter: “it’s currently not possible to create a simultaneous key while shooting a full composite shot,” said Grieve.

“But we’re working on it for 2022.”

 
