Tuesday, 30 June 2020

Pioneering foresight keeps the story going at Untold Studios

copywritten for Sohonet
When Untold Studios launched the first completely cloud-based creative studio in the world, it was ahead of its time in ways that even its founders could not have imagined.
When the global pandemic broke in early 2020, sending production into virtual shutdown and post producers scampering to set up remote workarounds, for Untold it was business as usual.
“We were already working remotely in our Studio as all our artists could connect to virtual machines from any desktop,” explains Sam Reid, Head of Technology at Untold Studios. “The way we work has changed very little since the crisis, albeit we maintain social distancing measures and connect from home!”
Untold is an independent creative studio based in London, making content across music, TV and advertising. It works with an internationally based creative community of writers, designers, directors, animators and creatives.
The studio was designed from inception so that highly resource-intensive cloud computing workflows (virtual workstations, VFX rendering, transcoding, processing) are streamed securely from AWS over Sohonet FastLane, a private, uncontended network that suffers no packet contention or loss.
“There’s been no change to workflows and our productivity has not dropped at all,” says Reid. “The only difference now is that artists are using their own internet connectivity to the Studio. We created VPN connections from artists’ homes to the Studio, and from there the media travels as before over FastLane to AWS.”
When Covid-19 struck, Untold was working with AWS and Autodesk to migrate its final finishing workflow to the cloud. Without a hitch, the Studio switched this crucial part of the process to a remote workflow using ClearView Flex.
“There are two review workflows we drive from the Flame—regular reviews between clients and creatives where we discuss VFX shots in progress—and the final review for the client to sign off on the project,” Reid explains. “For final review the picture has to be at the highest resolution and the highest possible quality.”
Both are now being facilitated remotely by ClearView Flex. 
The security of assets was arguably the main brake on widespread adoption of cloud production and post operations prior to lockdown. Untold Studios, though, was set up from day one with studio-grade security. 
“It is another aspect of our workflow that we haven’t had to firefight overnight,” says Reid. “We had MPAA guidelines already in place. We rely on Sohonet to provide triple ‘A’ security on connections to our office. When our artists connect over VPN from their machines at home to get onto our network there are airtight protocols including multi-factor authentication.”
He continues, “Our studio is not constrained by the physical location of our data, so we don’t have to have artists in the studio working on content – they can be anywhere in the world. That was as true before the pandemic as it is now.”
There are also positive lessons to draw from the crisis as production gradually returns to normal.
“We know that remote working can be as productive if not more so than working in a studio with all the distractions that can entail,” Reid says. “We will look at what has worked best for us under lockdown and maintain those workflows where it makes sense. “
“Part of the vision for Untold Studios is to provide people the flexibility to work wherever they want. There is no reason why any creative community should be less productive because they’re not physically in a studio. Those days are long gone.”

LEDs are bringing the digital backlot alive

IBC
LED screens, more commonly found as a backdrop to live music acts or as digital signage at sports venues, are now the hottest property in visual effects production.

The use of video walls in film and TV goes back at least a decade - they were used as a light source projecting onto Sandra Bullock and George Clooney in Gravity. More advanced versions playing pre-rendered sequences were deployed by ILM on Rogue One: A Star Wars Story, and its follow-up Solo and Kenneth Branagh’s 2017 version of Murder on the Orient Express.  
Today, the most sophisticated set-ups combine LED walls and ceilings with camera tracking systems and games engines to render content for playback not only in realtime but in dynamic synchronicity with the camera’s viewpoint. The result allows directors, actors and cinematographers to stage scenes with far greater realism than a green or blue screen and with more chance to make decisions on set. 
“On Gravity we were just using LED as a light source for principal photography but all the pixels were fully replaced in post,” says Richard Graham, CaptureLab supervisor at vfx facility Framestore. “Now we can shoot the screen as though it is a real environment or set extension and almost deliver that as the final image.”  
The use of LEDs as digital backlot forms a vital part of virtual production, the transformative suite of technologies allowing directors, cinematographers and every other department to see and often manipulate in real-time the physical set and actors composited with digital images and creatures. 
“The big change has come with more powerful GPUs combined with games engines providing the software for real-time rendering and ray tracing,” says vfx supervisor Sam Nicholson, ASC, who founded and heads postproduction house Stargate Studios. “When you put that together with LED walls or giant monitors we think that at least 50 per cent of what we do on set can be finished pixels.” 
To make Rogue One in 2014/15 ILM created CG animated backgrounds to populate LED screens that surrounded the set. But the displays at that time didn’t have the fidelity for greater use other than as a lighting source. 
Now the tech has advanced such that pixel pitches (the distance in millimetres from the centre of a pixel to the centre of the adjacent pixel) are narrow enough for the images to be photographed. What’s more, the panels are capable of greater brightness, higher contrast ratios and showing 10-bit video.
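To put pixel pitch in context, the arithmetic is straightforward. The Python sketch below works out the pixel count of a hypothetical wall; the dimensions are illustrative assumptions, not figures from any production mentioned here.

```python
# Rough pixel-count estimate for an LED wall at a given pixel pitch.
# The wall dimensions below are illustrative assumptions.

def wall_resolution(width_m, height_m, pitch_mm):
    """Return the approximate (horizontal, vertical) pixel count of an LED wall."""
    pixels_wide = int(width_m * 1000 / pitch_mm)
    pixels_high = int(height_m * 1000 / pitch_mm)
    return pixels_wide, pixels_high

if __name__ == "__main__":
    # A hypothetical 20 m x 6 m wall at a 2.8 mm pitch
    w, h = wall_resolution(20.0, 6.0, 2.8)
    print(f"{w} x {h} pixels (~{w * h / 1e6:.1f} megapixels)")
    # Halving the pitch to 1.4 mm quadruples the pixel count (and the cost)
    w2, h2 = wall_resolution(20.0, 6.0, 1.4)
    print(f"{w2} x {h2} pixels (~{w2 * h2 / 1e6:.1f} megapixels)")
```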
Games engines – from Epic, Unity and Notch – have also matured, gone mainstream and become easier to use, while GPU processing from Nvidia and AMD has become exponentially faster, enabling real-time compositing.
“In the last year production has become the fastest growing space,” reports Tom Rockhill, chief sales officer at disguise, which makes and markets LED displays and servers for live events and fixed installations. “There’s an inevitability about demand from film and TV.” 
Rugby’s LED backs
For ITV Sport’s presentation of the Rugby World Cup from Japan last summer, disguise worked with technical partner Anna Valley to create a three-screen LED video wall backdrop of a Japanese cityscape for the hosts sitting in Maidstone Studios. The screens responded to on-set camera movements in the same way a real Japanese cityscape would appear to a moving camera, delivering a convincing panoramic view.
Rockhill explains that to achieve the effect positional data was fed from stYpe RedSpy trackers fixed to the live studio cameras into a disguise gx 2 server running Notch software that rendered the cityscape content from the perspective of the cameras and fed it back to the LED display window in real time. 
“It gave the illusion of perspective so when the camera moves to the left the image moves to the right, so it looks like you’re looking out of a window,” he says. “The disguise software translates the physical data from the camera into the virtual realtime environment running on the games engine and pushes the correct pixels to the correct video surface (LED module) out of the disguise hardware.”
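The ‘window’ effect Rockhill describes boils down to recomputing an off-axis (asymmetric) viewing frustum through the fixed screen every time the tracked camera moves. The sketch below shows the standard generalised perspective projection behind that idea; the screen corners and camera positions are invented values, and systems such as disguise or Notch handle this internally rather than exposing it like this.

```python
# Minimal sketch of the off-axis ("window") projection behind the parallax
# effect: as the tracked camera moves, the frustum through the fixed LED
# surface is recomputed each frame. Screen corners and camera positions are
# made-up values, not from the ITV/disguise setup.
import numpy as np

def off_axis_frustum(pa, pb, pc, eye, near):
    """Return (left, right, bottom, top) frustum extents at the near plane
    for a planar screen with corners pa (lower-left), pb (lower-right),
    pc (upper-left), seen from 'eye'."""
    pa, pb, pc, eye = map(np.asarray, (pa, pb, pc, eye))
    vr = (pb - pa) / np.linalg.norm(pb - pa)        # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)        # screen up axis
    vn = np.cross(vr, vu)                           # screen normal (towards eye)
    vn /= np.linalg.norm(vn)
    va, vb, vc = pa - eye, pb - eye, pc - eye       # eye-to-corner vectors
    d = -np.dot(va, vn)                             # eye-to-screen distance
    return (np.dot(vr, va) * near / d,              # left
            np.dot(vr, vb) * near / d,              # right
            np.dot(vu, va) * near / d,              # bottom
            np.dot(vu, vc) * near / d)              # top

# A hypothetical 6 m x 3 m LED wall, 4 m in front of the studio-floor origin
pa, pb, pc = (-3, 0, -4), (3, 0, -4), (-3, 3, -4)
for cam_x in (-1.0, 0.0, 1.0):                      # camera tracking left to right
    l, r, b, t = off_axis_frustum(pa, pb, pc, eye=(cam_x, 1.6, 0.0), near=0.1)
    print(f"camera x={cam_x:+.1f}  frustum L={l:+.3f} R={r:+.3f} B={b:+.3f} T={t:+.3f}")
```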
Disguise has made two sales of similar set-ups to UK broadcasters and says the most popular application is sports. 
While film or TV cameras require little modification, camera tracking sensors applied to the camera determine where it is physically and where it would exist in a virtual space. Other vendors here include Mo-Sys and Ncam.
While live broadcast use of the technology will typically rely on pre-rendered visuals, for high-end dramatic production like The Mandalorian, high-resolution video can be output at up to 60 frames a second, with different lighting set-ups and digital backplates able to be swapped, tweaked and reconfigured at will.
“This is aided by the vogue for shooting with large format cameras,” explains Graham. “The pixel pitch is still not narrow enough that the gaps between the pixels aren’t noticeable on certain shots. The solution is to use the shallow depth of field of large format so you blur the background out of focus.” 
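Graham’s point can be quantified with a thin-lens approximation: the defocus blur that a camera focused on the actor spreads across the wall behind them. If that blur covers many pixel pitches, the grid disappears. All the numbers below are arbitrary assumptions for illustration.

```python
# Quick check of the shallow depth-of-field argument: the defocus "footprint"
# on the wall, for a camera focused on the actor, is roughly the aperture
# diameter scaled by how far the wall sits behind the focus plane.

def blur_on_wall_mm(focal_mm, f_stop, focus_m, wall_m):
    """Approximate diameter (mm) of the defocus blur at the LED wall plane
    when the lens is focused at focus_m and the wall is at wall_m."""
    aperture_mm = focal_mm / f_stop                 # entrance pupil diameter
    return aperture_mm * (wall_m - focus_m) / focus_m

if __name__ == "__main__":
    pitch_mm = 2.8                                  # e.g. a 2.8 mm pitch panel
    # Large-format look: 75 mm lens wide open at T2, actor 3 m from camera,
    # wall 5 m behind the actor (all illustrative numbers).
    blur = blur_on_wall_mm(focal_mm=75, f_stop=2.0, focus_m=3.0, wall_m=8.0)
    print(f"Blur at wall: {blur:.0f} mm, about {blur / pitch_mm:.0f} pixel pitches")
    # Stopped down to T8 the blur shrinks and the grid risks becoming visible.
    blur = blur_on_wall_mm(focal_mm=75, f_stop=8.0, focus_m=3.0, wall_m=8.0)
    print(f"Blur at wall: {blur:.0f} mm, about {blur / pitch_mm:.0f} pixel pitches")
```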
Bringing vfx on set helps the crew and cast feel more connected to the end result. Actors can see all the CG elements complete with reflections and lighting integrated in the camera, eliminating the need for green-screen setups. 
“It enables fully immersive production environments, so the actor or on-screen talent doesn’t have to look at a reference monitor or perform in front of green screen,” Rockhill says. 
No more green screen
The significant value to a DP is that they’re not shooting against green screen and trying to emulate the light that will be comped in later – a loss of control that has been a cause of much angst among cinematographers. With this process, they can realise their creative intent on set, just as it was before the introduction of heavy visual effects. 
“Green screen does still have its uses,” says Graham. “One technical problem with screens is latency. There is a time delay between camera and the image being sent from the render engine. If you move the camera too quickly you will see a lag.” 
Though it is only a few frames, even The Mandalorian had to find a workaround. One was to deploy camera language from the original Star Wars, which was largely static or used soft pans. Another trick was to render extra pixels around the viewing frustum [the field of view of a perspective virtual camera] to give themselves a margin for error.
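The overscan margin is easy to estimate: render enough extra image to cover however far the camera can pan during the system’s latency. The figures below are illustrative assumptions, not specifications from The Mandalorian.

```python
# Back-of-the-envelope for the frustum "overscan" trick. Latency, pan speed
# and wall angular resolution below are illustrative assumptions.

def overscan_pixels(latency_frames, fps, pan_deg_per_sec, wall_deg_per_pixel):
    """Extra pixels of margin needed on each side of the viewing frustum."""
    latency_s = latency_frames / fps
    pan_during_latency_deg = pan_deg_per_sec * latency_s
    return pan_during_latency_deg / wall_deg_per_pixel

if __name__ == "__main__":
    # Suppose a wall pixel subtends ~0.02 degrees from the camera position,
    # the pipeline lags by 4 frames at 24 fps, and pans stay under 10 deg/s.
    margin = overscan_pixels(latency_frames=4, fps=24,
                             pan_deg_per_sec=10, wall_deg_per_pixel=0.02)
    print(f"Overscan margin: ~{margin:.0f} pixels per side")
    # A whip pan at 90 deg/s would need far more margin than is practical,
    # which is why slow, Star Wars-style camera moves help.
    margin = overscan_pixels(4, 24, 90, 0.02)
    print(f"Whip-pan margin: ~{margin:.0f} pixels per side")
```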
“If you shoot handheld camera work it would be hard to make the images line up in timely fashion,” notes Graham.  
While good at providing environmental light, LED displays are less effective at producing a hard, bright light source such as strong sunlight. “If the scene requires that, you have to bring in some form of external lighting to light the subject correctly,” Graham says.
Nonetheless, virtual production workflow is removing the boundaries between story, previs, on-set, post viz and postproduction.  
“Essentially it front-loads decision making,” says Graham. “For a long time, live action and vfx has problem solved quite late in the filmmaking process. When you create media for screens well in advance then the key decisions have to be made quite early on with the advantage for director and DP of being able to see close to the final result in-camera rather than waiting months for post to see if their vision has been realized.” 
Fix it in prep
Salvador Zalvidea, VFX supervisor with Cinesite, says: “Most of the exciting technologies we are seeing emerge will be done in real-time and on set, shifting the visual effects process to preproduction and production. This will allow creatives to make decisions on set. We will probably still require some visual effects to be done or refined after the shoot, but iterations will be much quicker, if not instantaneous.” 
This collapse in timescales, particularly on the back end of projects, is a boon for producers scrambling to turn around episodic drama. Nor does the show have to have a fantasy or science-fiction storyline. In principle any location could be virtualized, from the interior of Buckingham Palace to the exteriors of Chernobyl.
The technology could also be used as back projection for characters travelling in cars but, unlike the century-old cinematic technique, this time with the ability to reflect accurate lighting from windows and shiny metal. Second units don’t have to leave the studio.
The screen content itself can be synthetic or photographed on real locations, as photorealistic or exotic as you need. “As long as you plan and design it so it can be rendered successfully from any viewpoint it should be fine,” Graham says. 
While the system is being used on the next Bond film, No Time To Die, it is also being deployed by Netflix and on HBO’s production of comedy series Run, co-created by Fleabag duo Vicky Jones and Phoebe Waller-Bridge.
The latter uses a digital backlot system designed by Stargate Studios on set in Toronto.  “The challenge is 350 visual effects per episode, 4000 shots in ten weeks,” says Nicholson. “We synchronize it, track it, put it in the Unreal Engine, and it looks real and shouldn’t need any post enhancements. The entire power of a post-production facility like Stargate is moving on set. We now say fix it in prep rather than fix it in post.” 
The financial and creative benefits of virtual production are only just being explored. One of the next key steps is greater integration of cloud for the instant exchange, manipulation and repurposing of data. 
“As vfx companies start to create libraries of photo scanned environments, materials and objects we will get to a point where it’s going to be much easier to create the environments for screens,” says Graham. “This will start to cut down on the amount of prep needed before a shoot. And that means you can be more fluid in the process and allow for more improvisation and more creative iteration closer to the start date.” 
In broadcast, producers are already working with augmented reality to bring graphics to the foreground of static virtual set environments and using extended green screen backgrounds to display graphics rendered out of a games engine. 
“The next step is to add real-time environments running on a surface that the talent can actually see – either by projection or LED – and to combine all the elements together,” says Rockhill. 
LEDs are also emerging with flexible and bendable panels, permitting the design of curved and concave shapes outside of the conventional rectangular frame. Disguise’s new studio, currently being built at its London headquarters, will feature curved surfaces to make it easier to blend the edges of a virtual environment.
“Rather than just a virtual set that looks pretty, we are working to evolve the technology to allow for interactivity with real-time live data,” says White Light’s technical solutions director Andy Hook. “We are also likely to see increased haptic feedback, skeletal tracking, frictionless motion capture – things that allow us to track the people within the virtual space and create more innovative use of the tools and technologies to create more immersive and engaging content.” 
Grounding Joker in reality
For a pivotal scene in Joker, when Arthur Fleck murders three Wall Street bankers on the Gotham City subway, DP Lawrence Sher ASC wanted to shoot as practically as possible.
One option was to shoot for real on the NYC metro but even if they arranged to shut down the tracks – not easy or cheap – Sher felt the complex logistics for the sequence would be limiting. 
An alternative was to shoot green screen and add the backgrounds in later, but this risked losing the ability to light the scene as he wanted, while it wouldn’t feel as real to the actors.
The solution was to build a subway car set, put the actors inside and surround the windows with LED screens displaying the movement of the train. Sher could control the lighting display, switching between flickering fluorescent lights or white subway station, to achieve the heightened realism that he and director Todd Phillips wanted. 
“Suddenly, you’re not guessing where the background is,” he explains to rental house PRG, whose LED screens and servers were used on the show. “You aren’t coordinating that background later, but you are able to photograph it in real-time and make lighting decisions as you photograph, that you can’t do when you’re shooting blue or green screen.
“The fact that the airbags were moving a little bit and the world outside was going by, and when the lights flickered off, you can actually see a subway car or station passing by, as opposed to just blue screen, made it seem so real. I’ve talked to people who thought we went out on a subway and just drove a train up and down the tracks.” 


Monday, 29 June 2020

Virtual Production Can Be a Reality for Everyone. Here’s How


Creative Planet

Think virtual production is the preserve of James Cameron? The confluence of games engines with faster PCs, LED backlots and off-the-shelf tools for anything from performance capture to virtual camera is bringing affordable realtime mixed reality production to market.

Cameron saw this coming, which is why he has upped the ante to where no filmmaker has gone before and decided to shoot the first Avatar sequel as a virtual production under water. Not CG fluids either, but with his actors holding their breath in giant swimming pools.

“The technology has advanced leaps and bounds at every conceivable level since Avatar in 2009,” says Geoff Burdick, SVP of Production Services & Technology for Cameron’s production outfit Lightstorm Entertainment.
Massive amounts of data are being pushed around live on the set of Avatar 2, Burdick says. “We needed High Frame Rate (48fps) and high res (4K) and everything had to be in 3D. This may not be the science experiment it was when shooting the first Avatar but... our set up is arguably ground-breaking in terms of being able to do what we are doing at this high spec and in stereo.”
This is just the live action part. Performance capture of the actors finished two years ago and is being animated at Weta, then integrated with principal photography at Manhattan Beach Studios.
Avatar 2 may be the state-of-the-art but it’s far from alone. Most major films and TV series created today already use some form of virtual production. It might be previsualization, it might be techvis or postvis. Epic Games, the makers of Unreal Engine, believe the potential for VP to enhance filmmaking extends far beyond even these current uses.
Examined one way, VP is just another evolution of storytelling – on a continuum with the shift to color or from film to digital. Looked at another way it is more fundamental since virtual production techniques ultimately collapse the traditional sequential method of making motion pictures.
The production line from development to post can be costly, in part because of the timescales and in part because of the inability to truly iterate at the point of creativity. A virtual production model breaks down these silos and brings color correction, animation, and editorial closer to camera. When travel to far-flung locations may prove challenging, due to Covid-19 or carbon-neutral policies, virtual production can bring photorealistic locations to the set.
Directors can direct their actors on the mocap stage because they can see them in their virtual forms composited live into the CG shot. They can even judge the final scene with lighting and set objects in detail.
What the director is seeing, either through the tablet or inside a VR headset, can be closer to final render – which is light-years from where directors used to be before real-time technology became part of the shoot.
In essence, Virtual Production is where the physical and the digital meet. The term encompasses a broad spectrum of computer-aided production and visualization tools and techniques which are growing all the time, meaning that you don’t need the $250 million budget of Avatar 2 to compose, capture, manipulate and all but publish pixel-perfect scenes live, mixing physical and augmented reality.
Games engines
The software at the core of modern, graphics-rich video games is able to render imagery on the fly to account for the unpredictable movements of a video-game player. Adapted for film production, the tech consigns the days of epic waits for epic render farms to history.
The most well-known is Epic’s Unreal Engine, which just hit version 5 with enhancements intended to achieve photorealism “on par with movie CG and real life”. Nanite, its virtualized micropolygon geometry system, for example, frees artists to create as much geometric detail as the eye can see. It means that film-quality source art comprising hundreds of millions or billions of polygons can be imported directly into the engine—anything from ZBrush sculpts to photogrammetry scans to CAD data—and it just works. Nanite geometry is streamed and scaled in real time so there are no more polygon count budgets, polygon memory budgets, or draw count budgets; there is no need to bake details to normal maps or manually author LODs; and there is no loss in quality.
Epic also aims to put the technology within practical reach of development teams of all sizes by partnering with developers to offer productive tools and content libraries.
It’s not the only game in town. Notch has a new real-time chroma keyer which, when combined with its automated Clean Plate Generation, produces “fantastic” results with almost no setup or tweaking, while providing all the features you’d expect, such as hair and liquid handling and hold-out mattes, all within less than a millisecond.
ILM, which uses a variety of engines, also has its own proprietary real-time engine, Helios, based on technology developed at Pixar.
The Jungle Book, Ready Player One and Blade Runner 2049 all made use of Unity Technologies’ Unity engine at various stages of production thanks to custom tools developed by Digital Monarch Media.
For example, on Blade Runner 2049, director Denis Villeneuve was able to re-envision shots for some of the digital scenes well after much of the editing was complete, creating a desired mood and tempo for the film using DMM’s virtual tools.
Games engines rely on the grunt power of GPU processing from the likes of Intel, Nvidia and AMD, which has become exponentially faster, enabling real-time compositing.


Digital backlots

The use of video walls in film and TV goes back at least a decade, as a light source projecting onto Sandra Bullock and George Clooney in Gravity. More advanced versions playing pre-rendered sequences were deployed by ILM on Rogue One: A Star Wars Story and its follow-up Solo, and during a sequence set on a Gotham metro train in Joker. A system is also being used on the latest Bond film, No Time To Die.

The most sophisticated set-ups combine LED walls (and ceilings) with camera tracking systems and games engines to render content for playback not only in realtime but in dynamic synchronicity with the camera’s viewpoint. The result allows filmmakers to stage scenes with greater realism than with a green or blue screen and with far more chance of making decisions on set.

“The big change has come with more powerful GPUs combined with games engines providing the software for real-time rendering and ray tracing,” says Sam Nicholson who heads Stargate Studios. “When you put that together with LED walls or giant monitors we think that at least 50 per cent of what we do on set can be finished pixels.”

For HBO comedy-thriller Run, the production built two cars outfitted to resemble an Amtrak carriage on a soundstage in Toronto. These rested on airbags which could be shaken to simulate movement. Instead of LEDs, a series of 81-inch 4K TV monitors were mounted on a truss outside each train window, displaying footage preshot by Stargate from cameras fixed to a train travelling across the U.S.

“It’s a smaller scale and less expensive version of Lucasfilm’s production of The Mandalorian but the principle is the same,” explains cinematographer Matthew Clark. “It effectively brings the location to production rather than moving an entire production to often hard-to-access locations.”
Any light that played on the actors’ faces or on surfaces in the train had to be synchronized to the illumination outside the windows, otherwise the effect wouldn’t work.

“It was important to line up the picture so when you’re standing in the car your perspective of the lines of train track and power lines has to be realistic and continuous. If the angle of the TV screen is off by just a few degrees then suddenly the wires of a telegraph pole would be askew. When we needed to turn the car around to shoot from another angle the grips could flip all the monitors around to the exact angle.”

LED displays are measured in pixel pitch (the distance in millimeters from the center of a pixel to the center of the adjacent pixel). Pitches are now narrow enough for the images to be photographed, and the panels are capable of greater brightness, higher contrast ratios and displaying 10-bit video.

Rental companies in the U.S. offering LED screens or monitors include PRG and Stargate Studios; in the UK there are disguise and On Set Facilities, both of which also have operations in LA.
OSF advises that the bigger the pixel, the more light it outputs onto your subject, which means very fine pixel pitches may not be optimum for filming. The pixel pitch of the LED screens used on The Mandalorian was 2.8mm.
OSF is set up as a fully managed virtual production studio covering in-camera VFX (LED), mixed reality (green screen), and fully virtual (in-engine) production.
It has a partnership with ARRI and also has its own virtual private network connected to the Azure cloud for virtual production. StormCloud enables remote multi-user collaboration in Unreal Engine, powered by Nvidia Quadro technology. Entry points currently set up in London and San Francisco are being tested by “a number of Hollywood Studios and VFX facilities,” says the facility.

Camera tracking

Another essential component is the ability to have the virtual backlot tracked to the camera movement by a wireless sensor. This means that as the DP or director frames a shot, the display, which is often the main lighting source, adjusts to the camera’s perspective. That’s no mean feat and requires minimal to zero latency in order to work.
Professional camera tracking systems from Mo-Sys and Ncam are the go-to technologies here, but if you are purely filming inside a games engine there are budget ways of creating a virtual camera.
To create raw-looking handheld action in his short film Battlesuit, filmmaker Haz Dulull used DragonFly, a virtual camera plugin (available for Unity, UE and Autodesk Maya) built by Glassbox Technologies with input from Hollywood pre-viz giants The Third Floor. 
Another option is the HTC VIVE tracker, which costs less than $150 and has been tested at OSF. “If you want to shoot fully virtual, shooting in-engine cinematics is amazing with a VIVE as your camera input,” it sums up. “If you want to do any serious mixed reality virtual production work or real-time VFX previz, you are still going to need to open your pocket and find a professional budget to get the right equipment for the job.”
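Whatever the tracker, the job of a budget virtual camera is the same: turn the pose matrix the VR runtime reports into a position and orientation for the in-engine camera. The sketch below uses a hard-coded stand-in pose rather than any particular SDK, just to show the conversion.

```python
# Minimal sketch of what a budget virtual-camera rig does with tracker data:
# take a 3x4 pose matrix (rotation plus translation, the form most VR runtimes
# report) and turn it into a position and orientation for the in-engine camera.
# The pose below is a hard-coded stand-in; a real rig would poll the tracker
# every frame through its SDK.
import math
import numpy as np

def pose_to_camera(pose_3x4):
    """Split a 3x4 pose matrix into translation (metres) and yaw/pitch/roll (degrees)."""
    rot = pose_3x4[:, :3]
    pos = pose_3x4[:, 3]
    # One common rotation-matrix-to-Euler decomposition (yaw-pitch-roll, Y-up).
    yaw = math.degrees(math.atan2(rot[0, 2], rot[2, 2]))
    pitch = math.degrees(math.asin(-rot[1, 2]))
    roll = math.degrees(math.atan2(rot[1, 0], rot[1, 1]))
    return pos, (yaw, pitch, roll)

if __name__ == "__main__":
    # Stand-in pose: tracker 1.5 m up, 2 m back, rotated 30 degrees about Y.
    c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
    pose = np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 1.5],
                     [ -s, 0.0,   c, 2.0]])
    pos, (yaw, pitch, roll) = pose_to_camera(pose)
    print(f"camera position {pos}, yaw {yaw:.1f}, pitch {pitch:.1f}, roll {roll:.1f}")
```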

Plug-in assets
The Rokoko mo-cap suit can stream directly into UE via a live link demoed by OSF. The facility explains that the suit connects over the wireless network to the UE render engine via Rokoko Studio, where OSF assigns the suit a personal profile for the performer. It then streams the data into UE by selecting the Unreal Engine option in the Rokoko Studio Live tab (a feature only available to Rokoko Pro licence users). The system is being refined at OSF with tests for facial capture in the works.
There is video of the demo here: https://youtu.be/5N4OcNJUw9o
Reallusion makes software for 3D character creation and animation, including iClone and Character Creator 3D. The Unreal Live Link plug-in for iClone creates a system for characters, lights, cameras, and animation in UE. The simplicity of iClone combined with UE rendering delivers a digital human solution to create, animate and visualize superior real-time characters.
Character Creator includes a plugin called Headshot which generates 3D realtime digital humans from one photo. Apart from intelligent texture blending and head mesh creation, the generated digital doubles are fully rigged for voice lipsync, facial expression, and full body animation. Headshot contains two AI modes: Pro Mode & Auto Mode. Pro Mode includes Headshot 1000+ sculpting morphs, Image Mapping and Texture Reprojection tools. The Pro Mode is designed for production level hi-res texture processing and ultimate face shape refinement. Auto Mode makes lower-res virtual heads with additional 3D hair in a fully automatic process.


OSF put this through its paces, using Headshot to automatically create a facial model which was animated within iClone 7 using data from actors performing in Rokoko mocap suits, streamed live to iClone to allow real-time previews and the ability to record animations. OSF also used Apple’s LiveFace app (available for download on any iPhone with a depth sensor camera) and its own motion capture helmets https://onsetfacilities.com/product/face-capture-helmet/ to capture the facial animations. The next part of the pipeline is to transfer the assets over to UE with the Unreal Engine LiveLink plugin and Auto Character set up plugin, which creates skin textures in the same way as Epic Games’ digital humans.

Virtual production on a budget
British filmmaker Hasraf Dulull made animated sci-fi short Battlesuit using Unreal Engine, on a skeleton budget and with a team of just three, including himself.
Rather than creating everything from scratch, they licensed 3D kits and pre-existing models (from Kitbash3D, Turbosquid and Unreal). Dulull animated the assets and VFX in realtime within Unreal’s sequencer tool.
They retargeted off-the-peg mocap data (from Frame Ion Animation, Filmstorm, Mocap Online) onto the bodies of the film’s main characters. For facial capture they filmed their actor using the depth camera inside an iPad and fed the data live into UE.
“We had to do some tweaks on the facial capture data to bring some of the subtle nuance it was missing, but this is a much quicker way to create an animated face performance without spending a fortune on high end systems,” Dulull says.
Powering it all, including realtime ray tracing, Dulull used the Razer Blade 15 Studio Edition laptop PC with an Nvidia Quadro RTX 5000 card.
Every single shot in the film is straight out of Unreal Engine. There’s no compositing or external post apart from a few text overlays and color correction done on Resolve.
“If someone had said I could pull off a project like this a few years ago that is of cinematic quality but all done in realtime and powered on a laptop I’d think they were crazy and over ambitious,” he says. “But today I can make an animated film in a mobile production environment without the need for huge desktop machines and expensive rendering.”


Friday, 26 June 2020

AI breakthrough could lead to entire games being created without humans

RedShark

New machine learning techniques have been developed to insert photorealistic people and characters into photos and videogames, with the potential to slash the time and cost of creating digital extras in VFX. Another AI development points the way for entire games to be created without an army of human coders.
The separate breakthroughs have been made in the past few weeks by researchers at Facebook, Electronic Arts and Nvidia.
Image generation has progressed rapidly in recent years due to the advent of generative adversarial networks (GANs), as well as the introduction of sophisticated training methods. However, generation is either done while giving the algorithm “artistic freedom” to produce attractive images, or while specifying concrete constraints such as an approximate drawing or desired keypoints.
Other solutions involve using a set of semantic specifications or free text, yet these have so far failed to generate high-fidelity human images. What seems to be missing is the middle ground between the two: specifically, the ability to automatically generate a human figure that fits contextually into an existing scene.
That’s what Facebook and researchers at Tel Aviv University claim to have cracked.

The creation of artificial humans

As outlined in a new paper, after training an AI on more than 20,000 sample photographs from an open-source gallery, the method involves three networks applied in sequence: the first generates the pose of the novel person in the existing image, based on contextual cues about other people in the image. The second network renders the pixels of the new person, as well as a blending mask. A third network refines the generated face in order to match that of the target person.
Unlike other applications, such as face swapping, the appearance of the novel person here is controlled by the face, several clothing items, and hair.
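The paper’s actual architectures and training losses are not reproduced here, but a schematic of how such a three-stage pipeline composes might look like the placeholder sketch below, with tiny convolutional blocks standing in for the real networks.

```python
# Schematic of a three-stage person-insertion pipeline, with placeholder
# networks standing in for the real models (the paper's actual architectures,
# losses and training are not reproduced here).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

class PersonInsertionPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.pose_net = conv_block(3, 17)         # 1) propose a pose (e.g. keypoint heatmaps)
        self.render_net = conv_block(3 + 17, 4)   # 2) render RGB + blending mask
        self.face_net = conv_block(3 + 3, 3)      # 3) refine the face to match the target

    def forward(self, scene, target_face):
        pose = self.pose_net(scene)                      # contextual pose for the new person
        rgba = self.render_net(torch.cat([scene, pose], dim=1))
        rgb, mask = rgba[:, :3], torch.sigmoid(rgba[:, 3:4])
        composite = mask * rgb + (1 - mask) * scene      # blend the new person into the scene
        face = self.face_net(torch.cat([composite, target_face], dim=1))
        return composite, face, mask

if __name__ == "__main__":
    scene = torch.rand(1, 3, 128, 128)        # existing group photo
    target_face = torch.rand(1, 3, 128, 128)  # identity to insert
    composite, face, mask = PersonInsertionPipeline()(scene, target_face)
    print(composite.shape, face.shape, mask.shape)
```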
The researchers claim to have demonstrated that their technique can create poses that are indistinguishable from real ones, despite the need to take into account all the social interactions in the scene.
They acknowledge that when CG humans are inserted into an existing ‘real’ image the results can stand out like a sore thumb.
But in tests, during which volunteers were asked to see if they could find the artificially-added people in group shots, they only managed to spot the ‘fakes’ between 28 per cent and 47 per cent of the time.
“Our experiments present convincing high-resolution outputs,” the research paper claims. “As far as we can ascertain, [this AI is] the first to generate a human figure in the context of the other persons in the image. [It] provides a state-of-the-art solution for the pose transfer task, and the three networks combined are able to provide convincing ‘wish you were here’ results, in which a target person is added to an existing photograph.”

Saving costs

While inserting folks into frames might not seem like the most practical application of AI, it could be a boon for creative industries where photo and film reshoots tend to be costly. Venture Beat suggests that, using this AI system, a photographer could digitally insert an actor without having to spend hours achieving the right effect in image editing software.
Meanwhile, a team from EA and the University of British Columbia in Vancouver is using a technique called reinforcement learning, which is loosely inspired by the way animals learn in response to positive and negative feedback, to automatically animate humanoid characters.
“The results are very, very promising,” explained Fabio Zinno, a senior software engineer at EA to Wired.

AI generated computer games

Traditionally, characters in videogames and their actions are crafted manually. Sports games, such as FIFA, make use of motion capture, a technique that often uses markers on a real person’s face or body, to render more lifelike actions in CG humans. But the possibilities are limited by the actions that have been recorded, and code still needs to be written to animate the character.
By automating the animation process, as well as other elements of game design and development, AI could save game companies millions of dollars while making games more realistic and efficient, so that a complex game can run on a smartphone, for example.
In work to be presented at next month’s (virtual) computer graphics conference, Siggraph, the researchers show that reinforcement learning can create a controllable football player that moves realistically without using conventional coding or animation.
To make the character, the team first trained an AI model to identify and reproduce statistical patterns in motion-capture data. Then they used reinforcement learning to train another model to reproduce realistic motion with a specific objective, such as running toward a ball in the game. Crucially, this produces animations not found in the original motion-capture data. In other words, the program learns how a soccer player moves, and can then animate the character jogging, sprinting, and shimmying by itself.
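A toy version of that two-stage recipe, with a 2-D point standing in for the character, random stand-in ‘mocap’ data and a bare-bones REINFORCE update, might look like the sketch below; EA’s actual models and training setup are far richer than this.

```python
# Toy illustration of the two-stage recipe: (1) fit a motion model to
# mocap-style data, then (2) fine-tune with reinforcement learning against an
# objective such as "move toward the ball". All data here is random stand-in.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps (player position, ball position) to a distribution over a 2-D step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
        self.log_std = nn.Parameter(torch.zeros(2))

    def dist(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Stage 1: "behaviour cloning" on stand-in mocap steps -------------------
mocap_obs = torch.randn(512, 4)                       # placeholder for real mocap clips
mocap_steps = mocap_obs[:, 2:] - mocap_obs[:, :2]     # pretend experts step toward the ball
for _ in range(200):
    loss = -policy.dist(mocap_obs).log_prob(mocap_steps).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: REINFORCE with a task reward (progress toward the ball) -------
for _ in range(200):
    player, ball = torch.randn(64, 2), torch.randn(64, 2)
    d = policy.dist(torch.cat([player, ball], dim=1))
    step = d.sample()
    reward = (player - ball).norm(dim=1) - (player + step - ball).norm(dim=1)
    loss = -(d.log_prob(step).sum(dim=1) * (reward - reward.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("mean reward after RL fine-tune:", reward.mean().item())
```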
Even more astonishing, GANs now have the ability to generate whole videogames from scratch.

Recreating Pac-Man without coders

Trained on 50,000 episodes of Pac-Man, a new AI model created by NVIDIA with the University of Toronto and MIT can generate a fully functional version of the classic arcade game — without an underlying game engine. That means that even without understanding a game’s fundamental rules, AI can recreate the game with convincing results.
“We were blown away when we saw the results, in disbelief that AI could recreate the Pac-Man experience without a game engine,” said Koichiro Tsutsumi from Bandai Namco, which provided the data to train the GAN. “This research presents exciting possibilities to help game developers accelerate the creative process of developing new level layouts, characters and even games.”
Since the model can disentangle the background from the moving characters, it’s possible to recast the game to take place in an outdoor hedge maze, or swap out Pac-Man for your favourite emoji. Developers could use this capability to experiment with new character ideas or game themes.
“We could eventually have an AI that can learn to mimic the rules of driving, the laws of physics, just by watching videos and seeing agents take actions in an environment,” said Sanja Fidler, director of Nvidia’s research lab. “GameGAN is the first step toward that.”

Wednesday, 24 June 2020

The New Precision Time Protocol Explained

Broadcast Bridge

The IEEE has just published the latest version of its Precision Time Protocol (PTP) standard that provides precise synchronization of clocks in packet-based networked systems. This article explains the significance of IEEE 1588-2019, otherwise known as PTPv2.1, and how it differs from previous versions including updates to the isolation of profiles, monitoring and security.
The Precision Time Protocol (PTP) is a timing standard used to synchronize clocks throughout a computer network. On a local area network, it achieves clock accuracy in the sub-microsecond range, making it suitable for measurement and control systems. Versions of PTP are in use throughout many industries. In media, it is essential to keep video and audio IP equipment, such as cameras, vision switchers and graphics kit in synchronicity.
A typical infrastructure will have at least one (usually more) PTP clock generator called the Grandmaster clock. Various slave clocks synchronise to it and can acquire ‘Grandmaster’ status if the Grandmaster fails, thus providing a main-backup solution.
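At the heart of that synchronisation is a simple exchange of timestamped messages, the delay request-response mechanism, from which the slave derives its offset from the Grandmaster and the network path delay. The timestamps below are invented for illustration.

```python
# How a PTP slave computes its offset from the Grandmaster: the classic
# delay request-response exchange. Nanosecond timestamps are invented.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent (master clock), t2: Sync received (slave clock),
       t3: Delay_Req sent (slave clock), t4: Delay_Req received (master clock)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2     # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

if __name__ == "__main__":
    # Slave clock runs 1500 ns ahead; one-way network delay is 800 ns.
    t1 = 1_000_000
    t2 = t1 + 800 + 1500          # Sync arrives after the path delay, read on the fast slave clock
    t3 = t2 + 50_000              # slave replies a little later
    t4 = t3 - 1500 + 800          # Delay_Req arrives, read on the master clock
    offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
    print(f"offset = {offset:.0f} ns, mean path delay = {delay:.0f} ns")
    # The slave then steers its clock by -offset to line up with the Grandmaster.
```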
The first version of PTP, IEEE 1588-2002, was published in 2002. IEEE 1588-2008, also known as PTPv2, is not backward compatible with the original 2002 version. IEEE 1588-2019 (PTPv2.1) was approved in November 2019 and was published on June 16th 2020. This version includes backward compatibility with the 2008 publication.
Indeed, the designation PTPv2.1 rather than v3 is chosen to emphasise that maximum backwards compatibility is assured.
“PTPv2 has worked very well until now but the new features can enhance the robustness and accuracy of PTP,” explains Daniel Boldt, Head of Software Development, Meinberg, which makes timing equipment such as clocks for IP infrastructure. “Importantly, the new features in v2.1 are optional and entirely interoperable with v2.0.
“This was not the case with version 2.0 which was not backward compatible with the existing 1.0 standard. It meant that a v1.0 device couldn’t talk to a V2.0 device and vice versa. This is addressed in this version and means that a 2.1 master can synchronize with every 2.0 slave and vice versa.”
Among the new features introduced in the new version is the ability to use isolated profiles. If two or more PTP Profiles, which use multicast messages, are in the same network, then the PTP nodes will see PTP messages from profiles which are not of interest. That could wreak havoc with the Best Master Clock Algorithm, for example. The current way to solve this is to configure each profile to operate in a different PTP domain.
“That works if the network operator has time to pay attention to the PTP domains of each profile running on their network,” explains Douglas Arnold, Meinberg’s Principal Technologist and also chair of the PNCS (Precise Networked Clock Synchronization) Working Group, which led the specification design. “If you are a network operator, then you are so busy that you barely have five minutes, so you know that it is dangerous for equipment vendors to assume that you will track PTP profile domain numbers.”
If you want to be sure that profiles of PTPv2.1 do not interact, you can use the new profile isolation feature.
The idea is this: a Standards Development Organization, or SDO, such as SMPTE, the IEC or the ITU, can apply for a globally unique number, the SdoID, which appears in the common header of all PTP messages. PTP nodes can then ignore all messages which do not have the SdoID they want. These numbers come from the IEEE Registration Authority, the body in charge of parcelling out certain globally unique numbers such as Ethernet MAC addresses. Each SDO can get an SdoID, then protect its own various profiles from each other using rules on domains. Before you rush to the Registration Authority for your number, be advised that it will only issue one if it judges that you represent a standards development organization. So organizations (like the IETF, ITU, SMPTE) would be able to get one, but not equipment manufacturers or network operators.
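At the receiving end, profile isolation amounts to reading the 12-bit sdoId from the common header and discarding anything that does not match. The sketch below reflects one reading of the 1588-2019 header layout (majorSdoId in the top nibble of the first octet, minorSdoId in the formerly reserved sixth octet); the offsets and the example SdoID value are assumptions to be checked against the standard.

```python
# Rough sketch of profile isolation at the receiving node: read the 12-bit
# sdoId from the PTP common header and drop anything that doesn't match the
# profile we care about. Field offsets are my reading of the 1588-2019 header
# layout and should be checked against the standard before relying on them.

WANTED_SDO_ID = 0x100          # hypothetical value assigned to "our" SDO

def sdo_id(ptp_message: bytes) -> int:
    major = ptp_message[0] >> 4          # formerly the transportSpecific nibble
    minor = ptp_message[5]               # formerly a reserved octet in v2.0
    return (major << 8) | minor

def accept(ptp_message: bytes) -> bool:
    return sdo_id(ptp_message) == WANTED_SDO_ID

if __name__ == "__main__":
    # 34-byte dummy header with majorSdoId=0x1, minorSdoId=0x00
    header = bytearray(34)
    header[0] = (0x1 << 4) | 0x0         # majorSdoId | messageType (Sync = 0)
    header[5] = 0x00
    print(accept(bytes(header)))         # True: matches our profile
    header[5] = 0x42
    print(accept(bytes(header)))         # False: some other SDO's profile
```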
Also new in PTPv2.1 is the ability to use multiple masters at the same time in order to send messages to slaves simultaneously.
“This feature has been built into profiles of PTP for use in other industries, such as the enterprise profile for the financial industry, but is new to the media world,” Boldt says. “One of the drawbacks of the old PTP version was that there was only one master present and everybody had to follow this one single master. If that sent out a wrong time every slave would follow that master.
“With a multiple PTP approach, the slave can now choose a group of masters with the accurate time so they can kick out a false master automatically.”
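The standard defines its own mechanics for this, which are not reproduced here; purely as an illustration of the idea, a slave that can see several masters might cross-check the offsets it measures against each of them and discard any obvious outlier before steering its clock.

```python
# Not the mechanism defined in the standard, just an illustration of the idea:
# compare the offsets measured against several masters and ignore any master
# that disagrees wildly with the group before steering the clock.
import statistics

def usable_masters(offsets_ns, tolerance_ns=1_000):
    """Keep masters whose measured offset sits within tolerance of the group median."""
    median = statistics.median(offsets_ns.values())
    return {m: o for m, o in offsets_ns.items() if abs(o - median) <= tolerance_ns}

if __name__ == "__main__":
    measured = {"gm-a": 180.0, "gm-b": 230.0, "gm-c": 9_000_000.0}   # gm-c is off by 9 ms
    good = usable_masters(measured)
    print("trusted masters:", sorted(good))           # gm-c is kicked out
    correction = statistics.mean(good.values())
    print(f"steer clock by {-correction:.0f} ns")
```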
Other features are focussed on the monitoring and collection of metrics from the slave. With network conditions constantly changing, operators need precise and relevant information. “These metrics provide statistical information about a certain PTP node, such as the average offset in the last 24 hours; the number of packets exchanged; the minimum or maximum delay measured in the last 24 hours.”
Again, none of these features has to be added; they are optional for the user.
Perhaps the most important new feature pertains to security. This has been introduced around a TLV, which stands for “type, length, value”. It is a general means to extend a PTP message with some extra information for optional features – in this case enhanced security.
“In PTPv2.1 this security appendage can be attached to any PTP message and allows for the authentication of a master,” Boldt says. “It ensures that only trusted masters can be active and the communication path between slave and master is secure.”
The idea of the AUTHENTICATION TLV (it is customary to write TLV names in capitals) is that a cryptographic integrity check value (ICV), sometimes called a hash code, can be appended to a PTP message, explains Arnold. “If anything in the message changes, then the receiving node can detect that by recomputing the ICV and comparing it to the one in the TLV. This involves performing mathematical operations on the bits in the message, except the ICV, and the secret key, resulting in the ICV. The trick is that it is practically impossible to get the right answer unless you know the key, so a man in the middle attack will be detected.”
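The principle is the same as any keyed hash. The sketch below uses an HMAC as the ICV purely to illustrate it; the actual AUTHENTICATION TLV layout, key management and permitted algorithms are defined by the standard and not reproduced here.

```python
# Sketch of the integrity check Arnold describes, using an HMAC as the ICV.
# The real AUTHENTICATION TLV structure and key management are not shown.
import hmac, hashlib

SECRET_KEY = b"shared-between-master-and-slaves"   # distributed out of band

def append_icv(ptp_message: bytes) -> bytes:
    icv = hmac.new(SECRET_KEY, ptp_message, hashlib.sha256).digest()
    return ptp_message + icv

def verify(message_with_icv: bytes) -> bool:
    body, icv = message_with_icv[:-32], message_with_icv[-32:]
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).digest()
    return hmac.compare_digest(icv, expected)       # constant-time comparison

if __name__ == "__main__":
    msg = append_icv(b"\x00" * 34 + b"announce payload")
    print(verify(msg))                               # True
    tampered = bytearray(msg); tampered[10] ^= 0x01  # a man in the middle flips one bit
    print(verify(bytes(tampered)))                   # False: ICV no longer matches
```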
It will take a little time for these features to be added into products from Meinberg and other reference clock manufacturers, and integration will be done on a step-by-step basis depending on customer demand.

Studios move to the cloud

IBC
The current workflow challenges in Hollywood are accelerating the ambition to move all production to the cloud sooner than 2030.
In the past few months the media and entertainment industry has not only decentralised its entire organisational model, but dramatically sped up its transition to cloud-based workflows, archives and computing resources.  
This is not just happening piecemeal or short term – coronavirus is finally putting the skates under Hollywood studios to speed the transition to wholly cloud-based production. 
As Avid puts it, while the past few years have seen a handful of innovators migrating certain workflows to the cloud, “it took a disaster on the scale of a global pandemic for the broader industry to seriously consider what cloud solutions could do for media production.” 
Even before Covid-19 struck, cloud had risen up the agenda. In a 2018 survey conducted by IBM with Ovum, nearly 70% of media enterprises said they preferred an exclusively on-premises compute and storage model; the same study predicts this will drop by more than half by 2023. Meanwhile, there will be a surge in uptake of cloud offerings from 10% to 34% over the same period, resulting in an even split of deployment models across on-premises, cloud, and hybrid.
Since that survey was conducted last year, investment priorities will have shifted further as a result of the external shock.  
A recent IABM report underlines this by highlighting increased investment in virtualization and remote production. Most media buyers told the IABM that, post-pandemic, they can’t imagine things going back to the way they were before. The imperative for business continuity alone will accelerate technology transitions that were already underway. 
“There will be positive consequences resulting from production lockdown,” contends Chuck Parker, CEO at Sohonet. “Chief among these will be an enlightened attitude in Hollywood and beyond to the practicality and benefits of a distributed content-production workforce.” 
This is echoed at MovieLabs, a think tank set up by the major studios, including Disney, Warner Bros, Paramount, Sony and Universal, to define and enable the future of film and TV production. 
“Everything changed overnight, and that obviously created a new level of urgency and focus on the cloud just to continue some level of work,” says CEO Rich Berger. “It would be fair to say that the collective mindset has shifted from ‘we will be migrating to the cloud’ to ‘we have an imperative to move to the cloud to protect our long-term business’.”
Creative driving force
MovieLabs set out a blueprint last autumn for all studio content to be created and ingested straight into the cloud without needing to be moved, and gave a ten-year timeframe for that to happen.
“Not every principle we highlighted will be achieved at the same time and at scale,” Berger says. “That said, our focus since we published the 2030 Vision paper is to accelerate this timeframe.”    
The current ‘working from home’ short-term solutions are different from its longer-term cloud vision and involve complications, like VPN access and duplication of files, that are not core to its ultimate ambition. But, Berger says, many parts are already happening and will be realised before 2030.
“Working in the cloud is not necessarily about improving efficiencies for the studios so much as enabling creatives to explore more options before they run out of budget or time,” stresses MovieLabs CTO Jim Helman. “For example, it allows the creation of more iterations before principal photography begins.” 
Indeed, it is the creatives rather than the executives who may provide the driving force to maintain ‘work from anywhere’ workflows after Covid-19 is contained.  
“Creatives have now had several months of working from home and can now see that cloud-based production workflows actually can work for them,” Berger says. “We have an opportunity as we move to the cloud to reinvent and optimise workflows. You wouldn’t have to send files all over the place, we can have them stay in one location and have the applications come to the medium. That in and of itself should create a lot of change and opportunity.” 
At its most basic level, the cloud provides a more cost-effective platform for fulfilling unpredictable spikes in demand. At a more strategic level, the cloud provides a platform for aligning the needs of both creative and business teams. 
“The costs associated with maintaining infrastructure on premises are higher in the long term,” argues Bob De Haven, GM, communications and media at Microsoft. Its Azure cloud is the platform of choice for Avid and Technicolor. “Go digital or go broke, that’s really the point.”
Changing the wheel
Cloud skeptics spotlight security, bandwidth and accessibility as critical concerns. Michael Cioni, Global SVP of Innovation at Frame.io, welcomes the questioning but says the industry needs to make a behavioural change.
“We’ve gone through film to tape, tape to files, HD to 4K. Each required a decision to make the change, problems to be solved, inventions and a behavioural response to how facilities are run and how we expect the work to actually flow. Going to the cloud is really no different.” 
The motion picture industry has worked in fundamentally similar ways since its beginning. Cameras and DoPs, audio recorders and sound engineers, editors working with NLEs, compositing tools and VFX artists, colour correction tools and colourists—all form a production support structure. They make up the most important part of the workflow foundation, and require the most significant investment from an equipment and labour perspective. 
Few of these steps are directly connected to one another through a single platform. 
“The unfortunate truth about the shift to digital technology is that it didn’t speed up the process as significantly as originally promised,” suggests Cioni.  “Given the fair amount of manual processing it takes to move through the workflow, there’s still room for errors and failures—even (and perhaps especially) at the highest levels of the industry, where creatives are constantly pushing technology to its limits.” 
Cioni is positioning Frame.io to be the media management ‘glue’ that links production tools with artists in the cloud. A milestone in this regard was the proof of concept demonstrated at a meeting of the Hollywood Professional Association (HPA) in February. 
At the event, a 6-minute film was made largely in real time in the cloud with the participation of multiple vendors including Avid, Blackmagic, RED, ARRI, FilmLight, SGO and Sony as well as members of the influential American Society of Cinematographers (ASC). 
Footage was sent directly from camera to the Amazon Web Services cloud over wireless 4G LTE, then streamed in both HDR and SDR to iPads carried by the DP and director for instant review. When takes are automatically available almost as soon as they are shot, the time to make alterations on set shrinks dramatically from days (sometimes weeks) to seconds.
“The workflow proved that it is possible to host the original camera files and power the offline edit from the cloud, and then relink back to them locally for the conform,” Cioni says. “The timecode and metadata captured in the live streaming assets on the set were sufficient to link back to the original camera files for the final digital intermediate.” 
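The relink Cioni describes is, at its core, a metadata match: find the original camera file whose reel and source timecode range cover the proxy clip used in the edit. A toy sketch, with invented clip names and timecodes:

```python
# Toy sketch of the relink step: match an edited proxy clip back to the
# original camera file (OCF) by reel name and source timecode. All clip
# names, reels and timecodes below are invented for illustration.

def tc_to_frames(tc, fps=24):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

OCFS = [  # what the conform has on local storage
    {"file": "A001_C003.mxf", "reel": "A001", "start": "11:22:10:00", "end": "11:24:30:00"},
    {"file": "A002_C007.mxf", "reel": "A002", "start": "14:05:00:00", "end": "14:06:10:12"},
]

def relink(proxy_reel, proxy_in, proxy_out):
    """Return the OCF whose reel matches and whose timecode range covers the proxy clip."""
    p_in, p_out = tc_to_frames(proxy_in), tc_to_frames(proxy_out)
    for ocf in OCFS:
        if (ocf["reel"] == proxy_reel
                and tc_to_frames(ocf["start"]) <= p_in
                and p_out <= tc_to_frames(ocf["end"])):
            return ocf["file"]
    return None

if __name__ == "__main__":
    print(relink("A001", "11:23:00:00", "11:23:04:15"))   # -> A001_C003.mxf
    print(relink("B001", "10:00:00:00", "10:00:01:00"))   # -> None: OCF not yet ingested
```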
Bottlenecks to overcome
There are some constraints holding back the vision right now. One is around the streaming of the highest quality video – for example, high bit depth, full colour gamut streaming desktops for critical colour review.
MovieLabs points to limitations in the collaborative nature of work when working remotely. “We’ve yet to find technologies that allow whole teams of creatives to work in a ‘virtual’ suite as well as they can in a physical suite – where every participant can see, comment upon and control the output of the work and the applications being used to create it,” Berger says.  
“We have more work to do as an industry to enable these applications to talk to each other and hand off assets and metadata to other components in a chain, even if piecemeal cloud implementations can be more inefficient than our legacy approach.” 
Cloud interoperability is another issue. How can people working on the same show but in a different cloud access the same media? “We need to have a way to join clouds together or a seamless way where the operator doesn’t realise there are multiple clouds happening at the same time,” says Greg Ciaccio, workflow chair of the technology council at the ASC. 
Limitations purely related to bandwidth are restrictive largely at ingest for Original Camera Files (OCFs). “We are able to ingest proxy files for many of our cloud-based workflows, but there are considerable delays to getting the matching OCFs ingested to the cloud because of the size of those files,” Berger says.   
That is largely an issue of upload bandwidth speed which could be solved by 5G connectivity.   
Concerns about the cost of cloud haven’t gone away either. “Without a doubt, [moving to remote production] was the right thing to do and without a doubt this was also the most expensive thing to do,” says Yves Bergquist, Director of the AI & Neuroscience Project at USC’s Entertainment Technology Center. “Storage, compute and data egress costs are likely skyrocketing throughout the industry, so this won’t be sustainable for long.” 
He argues that AI/ML could offer a smarter way of managing data. “Like it or not, this hastily cobbled together ‘Internet of Production’ is here to stay. Because of the costs associated with its operation, it very much needs artificial intelligence.” 
Concerns about security, which form a large part of the MovieLabs Vision 2030 specifications, seem to be receding.  MovieLabs believes that in many ways, cloud-based workflows can and will be more secure with the right security framework and policies.  
It is currently working on a ‘cloud readiness’ assessment across a set of workflow tasks to find a common way of evaluating just how possible various tasks are in the cloud today.  
“We have seen pieces of the workflow—like batch VFX rendering, dailies and review/approve cycles and certain editing use cases—move to the cloud already,” Berger says. “We’re expecting more of these individual tasks to be next, for example, VFX workstations using shared global storage and also cloud-based archiving replacing LTO tapes.” 
Internet of production
Cloud computing has dominated the IT conversation for many years, but the media and entertainment industry has lagged behind. That’s particularly acute in Hollywood, in contrast to Netflix, which has run all of its computing and storage needs, from customer information to recommendation algorithms, in the cloud since 2016. It was a migration that took the company seven years to complete.
Netflix content production and postproduction remains a more hybrid operation, with one foot in the cloud to connect globally located creatives and another in physical spaces from New York and LA to Singapore and Brazil, where that creative work is done.  
“It would be great to have the ability to reinvent large parts of our industry to essentially create a cloud-based creative supply chain connected and integrated fully with the creative and physical places all over the globe where content is made,” Leon Silverman, director of Post Operations and Creative Services at Netflix, told IBC.
Not even Netflix can do it alone. “This will take real work and a lot of dialogue to move our industry forward,” Silverman says. 
All the projections about cloud adoption are being pulled forward because the technology is maturing faster than originally envisaged.
Cioni predicts: “Some people assume cloud is a decade away but I would say with very high certainty that in 2025 entire facilities that today do not use cloud for editorial, colour correction or conform will be using active and archive workflows completely in the cloud.”