Wednesday, 10 July 2019

HBO Max Joins Multi-Billion-Dollar Content Battle

Streaming Media
WarnerMedia has announced that its premium direct-to-consumer (DTC) service will be called HBO Max when it debuts next spring, but will it have the content to attract and retain subscribers?
That's the multi-billion-dollar question facing not just WarnerMedia parent company AT&T but media conglomerates Comcast and Disney as they seek to face down Netflix and pending streaming competition from Apple, Quibi and more.
Unlike newcomers Apple and Quibi, HBO Max will have huge franchise brand recognition and a 10,000-hour content catalogue from day one.
It has repatriated, at vast expense (reckoned to be north of $400m for a five-year deal), all 230+ episodes of the Warner Bros. TV-produced sitcom Friends, will presumably offer every season of Game of Thrones and GoT's fantasy-epic sequels, and has also lined up seven original series including Dune from director Denis Villeneuve and romantic comedies from Reese Witherspoon, while DC Universe spin-offs like Batwoman will also be available.
WarnerMedia has not yet announced pricing for HBO Max, which is expected to cost more than HBO Now ($15 per month) so as not to undermine its existing standalone OTT service. The HBO Max library will be larger, though.
Apple, which is projected to spend $5bn on content between now and 2022, has stressed that its strategy is one of quality not quantity. That's more of a default position, given its standing start as a content producer.
Meanwhile, Netflix and Comcast-owned Sky have announced fresh initiatives to double down on content produced out of Europe.
Netflix has taken a reported 10-year lease on space at the renowned Pinewood Shepperton Studios which, together with an existing production hub in Madrid, will see it both capitalise on UK and European tax breaks and meet European Union criteria for streaming a percentage of content produced locally.
The streamer has 153 originals from European producers on its slate this year, double the 2018 figure, and 221 European productions in total, drawn from a European budget of $1 billion within a total content spend this year of $12 billion.
It's worth noting that Warner already has major permanent production space north of London, at Leavesden Studios.
Sky plans to double its investment on original content to $1.3 billion over the next few years, and will produce it under the banner of Sky Studios.
"This is a transformational development for us," said Sky Group chief executive Jeremy Darroch. "Sky Studios will drive our vision to be the leading force in European content development and production."
Sky's strategy is strengthened by the backing of Comcast/NBCUniversal, with potential international re-sale revenues on the table from non-Sky markets.
"Netflix and Sky have both grown their reputation in original content in recent years, and recent announcements regarding further investments prove how central this content is to their overall strategy in the coming years," says David Sidebottom, principal analyst, Futuresource Consulting. "In addition, both continue to focus on improving user experiences, providing an attractive proposition for consumers and helping manage subscriber churn."
"Sky is positioning itself as a European production powerhouse," according to Richard Cooper, research director at Ampere Analysis. "The Disney-Fox merger and Comcast's [$39 billion] acquisition of Sky have been brought about by competition from streaming giants Netflix and Amazon, permanently changing once-established audience viewing dynamics." 
A significant proportion of investment to date in the new "golden age of TV" has been in scripted content, particularly from Netflix. But, as Sidebottom points out, non-scripted, typically scheduled content like Love Island, Britain's Got Talent and talk shows "remain hugely relevant and critical for local broadcasters."
He argues, "This [type of content] is less represented on global streaming services, including Sky. In addition, the sports rights battleground will be re-defined, another area where major rights are typically localised and have been key to the success of pay TV operators such as Sky to date."
All of these hugely expensive strategies could backfire—and surely will for one or more—if viewer fatigue takes hold. While U.S. households are on average willing to pay for about three SVOD packages, the proliferation of DTC services will stretch both consumer choice and what households can afford.
New services may entice consumers with free trials and aggressive introductory prices, but the long-term success of these services will depend more on customer retention than acquisition, and that requires a strategic mix of technology, marketing, and the right content to satisfy the consumer experience.

DVB-I Promises Sub-Second Latency for Broadcast and IP

Streaming Media

TV standards body DVB is working to enable an open standards-based approach to OTT and broadband television.
DVB-I will deliver services over the internet to devices with broadband access and, with operator support, also over managed networks. All connected devices are in its scope, not just TVs and set-top boxes (STBs).
"The aim of DVB-I is to do for IP services what DVB have done for broadcast," explains DVB chair Peter MacAvock. "Services will be signalled and distributed in a standardised manner, so a specific app is not required. For users it will mean a more consistent experience, though they don’t have to know or care whether a service arrives via broadcast or IP. Broadcasters can deploy a service once to a wide range of devices, and manufacturers can make a single consistent user experience for DVB-I and broadcast services."
DVB-I deployments can be standalone, or broadcast and IP delivery can be combined to create a single hybrid offering. The latter would incorporate services delivered via both methods, making optimal use of the different characteristics of each channel. 
"The first premise is to replicate the broadcast experience on broadband-delivered content, but once we go down that road we open up a whole other set of possibilities in terms of being able to mix different types content, offer enhanced services, and offer different versions of a service, targeting different groups of users in a way that is not feasible with broadcast."
Some examples include the provision of accessibility options such as video with signing, or versions of content with special technical characteristics such as UHD. 
For much of the functionality required by DVB-I, decent technical standards are already available. Content delivery will use the DVB-DASH specification which is already deployed by many broadcasters, often in conjunction with HbbTV.
This will be augmented by a low-latency mode, for which the technical specification has been approved, to ensure that the overall delay for live OTT channels is equivalent to broadcast.
"LL-DASH takes the MPEG DASH specification and profiles it for delivering low latency services," MacAvock says. "It implies that the amount of time for a service to make it through the network is not a minute or 30 seconds but [on] the order of seconds—and potentially fractions of a second.
"Of course, this requires this system to be implemented correctly. We know live distribution of content over broadband networks are only going to get more important, and we don’t want situations in live sports, for example, where people learn about an event on Twitter a good while before it is streamed."
The forthcoming DVB specification on Multicast Adaptive Bit Rate (mABR) will offer further opportunities for broadcasters and network operators to work together to optimise delivery to large numbers of receivers simultaneously.
"The main aim of the DVB-I ecosystem is designed to virtualise the concept of a media service," MacAvock says. "DVB as service definition is network-agnostic. A DVB broadcast stream received in the home is delivered over the network over cable, satellite, or terrestrial. DVB-I is designed to do exactly the same, so packaging of a stream would be consistent regardless of whether you have a broadcast network, a broadband network or, in future, a 5G network."
The DVB is in discussions internally and with stakeholders about 5G network integration.
Another goal is to remove the need for vertical integration, whereby broadcasters or their partners have to customise delivery for each device. The resulting economies of scale would be useful to free-to-air broadcasters, operators and pay TV providers, MacAvock said.
DVB-I will also offer the ability to deploy to receivers a single integrated service list including services available over both broadcast and IP. 
"In terms of making DVB-I functionally equivalent to DVB (for digital TV), the major missing piece from a standards perspective is a service layer," says Peter Lanigan chair of the DVB Commercial Module working group. "This is used to signal the services and content that are available, meaning the information used by a TV set (or smartphone, tablet, etc.) to populate the channel list and the EPG. 
"This will probably involve the most significant technical choices for the DVB in writing the DVB-I specifications. Several existing technologies are candidates to be adopted and, if necessary, extended to fulfil the requirements. One challenge that the DVB will have to solve is how a receiver starts the process of service discovery and locates the service list."
DVB members include the BBC, Dolby Labs, Sky, DirecTV, Arqiva, Canal+, Samsung, and Ericsson. The industry will get its first close-up look at DVB-I at IBC, with the release of the first specifications scheduled for later this year.

Tuesday, 9 July 2019

Craft Leaders: Lee Smith, Editor

IBC
From sound design on The Piano to co-creating the intoxicating long opening shot of Spectre and manipulating the space/time puzzles of Christopher Nolan, Lee Smith ACE has a track record few editors can match.
Nominated for three Oscars (winning one for Dunkirk) and with credits including The Truman Show, Elysium, X-Men: First Class and Inception, the 59-year-old is in constant demand.
“Editing is very instinctual,” Smith told Editfest, an event organised by the American Cinema Editors guild for an audience of peers, aspiring editors and assistant editors at the BFI. “The physical mechanics of cutting two shots together can be trained and learned but it doesn’t necessarily make you any good. The choices are far more important – the rhythm, the application of music and the sound effects. Everything needs to come together.”
The Australian started out as a sound editor then sound supervisor for Antipodean directors on films like Dead Calm for Peter Weir, Lorenzo’s Oil for Mad Max director George Miller and The Portrait of a Lady for Jane Campion.
“My father was an optical effects supervisor, my uncle ran a small optical lab, my aunt was a neg cutter and my brother an animator, so I guess I didn’t have a choice,” the editor explains.
His first industry job was in a “very small post house” in Sydney doing animation, feature films and docs, where he gravitated toward the audio department.
“I realised that to be a good picture editor your knowledge of sound is very important. You need the technical ability to lay sound into the Avid because that will help your final cut. But film is the sum of its parts. One part does not work without the others.”
The first of three notable relationships with directors was with Weir. “I started working for him when I was a kid really and worked my way up through the ranks of the editing team,” he says. For Weir he cut The Year of Living Dangerously (as associate editor), Fearless and The Truman Show.
“Some directors rely on storyboards and arrive on set with a very accurate plan of what they will shoot. Peter just responds to what he sees on the set and will change tack on the day when he realises something is not working. His process is organic.”
Smith says he turned down Peter Jackson’s invitation to edit The Lord of the Rings to work with Weir on Master and Commander: The Far Side of the World (2003), the Napoleonic-era seafaring saga starring Russell Crowe, which landed Smith his first Oscar nomination.
For one dialogue heavy dinner party scene set in the captain’s cabin, Weir had exposed 25,000 ft of 35mm negative.
“It was a lengthy sequence with a lot of characters and coverage [angles]. You had to sift through a fortune of material and try to stay true to the sequence, which is really about Aubrey connecting with his crew. In any other director’s hands this might not have been much of a scene, but Peter succeeds in showing the bond between the characters. That’s important because you want the audience to want to be on that boat and then feel the tragedy when half of them are killed.”
The Nolan years
The Oscar nod opened more doors, including to Christopher Nolan, who needed someone to edit a reboot of Batman. Initially Smith resisted the offer to edit Batman Begins because, he says, he didn’t want to work on a superhero movie.
“When I met Chris, he persuaded me that he was going to treat the subject with a great level of seriousness,” says Smith. “He had answers for everything.”
The film effectively birthed a whole subgenre of superhero origins movies and by extension, the later Marvel and DC franchises.
Its follow-up, The Dark Knight, was darker still, since it features the intense and, tragically, last film performance of Heath Ledger as the Joker.
“I’d edited one of Heath’s first films [Two Hands] in Australia when he was just a kid and I kind of couldn’t believe they were even considering casting him as the Joker. When I watched the first day of dailies it was a shot on a street corner with Heath standing there with a clown mask before getting into an SUV but he has his back to camera the whole time. It was extraordinary. He owned the screen just by the way he stood.”
Other collaborations with Nolan include The Prestige, The Dark Knight Rises, Inception and Interstellar, although Smith admits, only half joking, that he doesn’t entirely understand Inception or “whatever was going on in the bookcase sequence” in Interstellar.
“You have to be interested in puzzles and deconstruction and reconstruction to work on Chris’ films,” Smith says. “The scripts that he writes with his brother [Jonathan Nolan] are like a watch. You can mess with it, but it still has to tell time on the other end.”
Nolan’s films typically contain multiple timelines and in Interstellar’s case at least one other dimension, but none were as complicated to assemble as Dunkirk.
“We have the hour, day, week triple timeline of parallel stories told in the air, the land and the sea that has to converge and then separate again,” Smith says. “We had some leeway over the point of convergence. This is where the armada of boats from England arrive on the scene and there’s a huge rush of emotion because until that point the film has been unrelentingly tense. It was vitally important to gauge when that scene would drop.
“The driving idea behind Dunkirk was how could we make a film tense from the split second it opens, to drop the audience into what would be the third act in any normal film, but we also had to know when to ease off, when to deliver that catharsis.”
To present the multiple timelines in a chronology the audience could follow, Smith explains that he would take all the sequences relating to air, land and sea, stitch them together, and play them back linearly.
“The trickiest timeline was the aerial one which had the least amount of screen time but in early versions we found we were still getting lost. By pulling all the footage out and bolting it together you could get an overview to help weave it into the other two thirds of the story.”
In contrast to Weir, Nolan “has precision knowledge of how he is going to shoot and cut,” Smith says. “The edit is built into how he shoots. He knows what he wants and gets what he wants.”
Don’t flog a dead horse
The third of Smith’s serial collaborations is with Sam Mendes. They are currently in production together on period drama 1917 which, by its pedigree alone (cinematography is by Roger Deakins), is tipped for next year’s Academy Awards.
“He will abandon a scene on set if he intuitively feels it’s not working,” Smith says of Mendes, citing an example from Spectre. “He’s right not to waste effort flogging a dead horse but doing that requires conviction and the budget to back it up. The problem is usually in the script.”
The celebrated opening of Spectre - set during a Day of the Dead parade in Zócalo Square, Mexico City - is a complicated sequence stitched together with multiple subliminal cuts so that it appears as a single seamless shot. It cleverly masks exterior and interior set-ups filmed in the city, other Mexican locations and Pinewood.
“We had 17 cameras running during the sequence shooting from every conceivable angle and all shooting film. It took me eight hours just to watch the dailies. Chris Nolan on the other hand is very frugal with his coverage - perhaps because he shoots a lot of IMAX film which is very expensive stuff.”
Smith is not doing Nolan’s next film, Tenet, which is currently being shot with Jennifer Lame (Hereditary) in the cutting room. This was simply a matter of scheduling: it clashed with 1917.
“When you are presented with a film that is happening, greenlit and ready to go right now then I think you should say yes rather than wait for a director you also want to work with but whose film is waiting to be financed,” he explains. “There’s always the risk that films can be pushed back. It might eventually put you out of sync with those you want to work with, and the decision is never taken lightly but you have to work - editors also need to get paid.”
Generously, his advice for editors looking for their big break was that everything they do, regardless of the outcome, counts.
“You don’t work any less hard on an also-ran movie than on a cinematic masterpiece,” Smith says. “You gain experience with every film. And if the DNA of a film is simply not in place then there’s nothing much you can do.”

Friday, 5 July 2019

Star Wars fan film delivers blockbuster action

content marketing for VMI
From the opening tracking shot extending back from rugged desert dunes to reveal a guard jumping into a World War II army truck, you know you are watching a movie with the production values of a Hollywood blockbuster.
At least, that’s the impression that writer-director Phil Hawkins hopes to pull off with his 15-minute fan film, which audaciously marries Raiders of the Lost Ark with Star Wars.
Like all fan films, Star Wars Origins is shot on a fraction of a studio budget but smart use of a RED Weapon 8K Helium S35 camera and P+S KOWA Evolution anamorphic lenses hired from VMI achieved the classic look of an eighties action film.
Shot on location in the Sahara, on the border between Morocco and Algeria, Origins is a calling card for Hawkins’ ambition to emulate the likes of Gareth Edwards and Colin Trevorrow in being plucked from relative obscurity to helm a major studio picture.
“I’m not interested in making kitchen sink drama. I want escapism,” Hawkins says. “I want to tell stories that transport an audience the same way that Jurassic Park and Star Wars did when I first saw them.”
Hawkins is an established and successful director with ten years’ experience and hundreds of commercials and five features, including Being Sold and The Four Warriors, under his belt.
Yet making a fan film about Star Wars has been an obsession since he made short films with friends at school.
Lucasfilm encourages filmmakers to dabble in the Star Wars universe and to create new stories using franchise assets, from costumes and starships to John Williams’ signature score - with the one proviso that they don’t profit from it.
“Arguably, Origins is the most expensive Star Wars fan film ever made,” says Hawkins, who self-funded the project in association with Velvet Film, a commercials and content production company based in Manchester.
“I could be throwing away many, many thousands of pounds of my own money. I’ve thought long and hard about it but I see it as an investment in me as a director and in my career. Origins is, for me, a way of showing what I can do as a filmmaker.”
This was born out of frustration. He says, “No-one is calling me up to make a Star Wars film but very honestly that’s where my ambition lies. I want to make big budget commercial Hollywood studio films.”
Eschewing the light sabre fights and storm troopers of many previous fan films, Hawkins’ high concept is that the world of Star Wars exists in a parallel galaxy to that of Indiana Jones.
It took two years to flesh the idea out into a script and to find a location in the Sahara for the 8-day shoot.
Director of photography David Meadows, who worked with Hawkins on The Four Warriors and who had valuable experience shooting in Saudi Arabia for documentary film Joud, was invited to join the project.
“I own a RED Weapon so that was the logical camera choice but it’s also a very versatile and lightweight camera when stripped down which is what we needed when working with a limited crew and short timeframe,” Meadows says. “We needed to be able to mount the camera on a DJI Ronin. If it had been any heavier we’d have to upgrade to a Movi or Steadicam which we didn’t have the budget for.”
Similar weight and balance issues informed the choice of KOWA lenses which Meadows had paired with the RED in the desert heat when shooting Joud.
“I used the old vintage KOWAs on Joud and the thing about these that I love is that you don’t know exactly what they are going to give you. The way they flare light can give an ethereal look to the film. The KOWAs had proved themselves in my eyes so when VMI suggested the KOWA Evolutions for Origins I was sold.”
The KOWA Evolution anamorphic primes are manufactured by P+S Technik to match the original KOWA Prominar lenses and are available in focal lengths of 40mm, 50mm, 75mm and 100mm, with a 135mm/T3.5 at the top of the range.
“Even at the top of the range the lens doesn’t feel any heavier. They are all consistently lightweight and balanced without needing additional support and that’s very important for this shoot when I’m moving quickly and already carrying matte boxes,” Meadows reports. “The light weight meant I could hold the camera all day even in 50-degree heat.
“In such conditions and with a wind machine deliberately kicking up dust you really want to minimise lens changes wherever possible. Four focal lengths may not seem a lot but what I did was shoot 6K to cover the 4:3 anamorphic aspect ratio and jump to 5K when I wanted a tighter shot. I made one of the shots at 4K because 100mm wasn’t quite long enough but by virtue of adjusting the sensor view in the RED I could get a tighter shot.”
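The windowing trick Meadows describes is simple lens geometry. The sketch below uses the standard horizontal angle-of-view formula with an approximate sensor width for the RED Helium; treat the figures as illustrative (the anamorphic squeeze is ignored), since the point is only how a smaller recorded window tightens the shot without a lens change:

```python
# Quick field-of-view arithmetic: keeping the same lens but recording a
# smaller area of the sensor (6K -> 5K -> 4K) gives a progressively tighter
# shot. The 29.9 mm active width at 8K is approximate; values are illustrative.
import math

FULL_WIDTH_MM = 29.9     # approx. active sensor width at 8K
FULL_WIDTH_PX = 8192

def horizontal_fov(focal_mm: float, recorded_px: int) -> float:
    """Horizontal angle of view for a spherical lens at a given sensor window."""
    width_mm = FULL_WIDTH_MM * recorded_px / FULL_WIDTH_PX
    return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

for px in (6144, 5120, 4096):   # 6K, 5K and 4K windows
    print(f"100 mm lens at {px} px wide: {horizontal_fov(100, px):.1f} deg horizontal FOV")
```

Stepping from a 6K window down to 4K narrows the angle of view by roughly a third, which is the "tighter shot" effect without carrying a longer lens onto the sand.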
Meadows feels that the RED and KOWA package achieved the epic look that the director was after.
“You can really feel the heat coming through the lens. It’s washed out where it needs to be in certain areas and the light flares are marvellous. We’ve captured the look of Lawrence of Arabia.”
For a sequence set in a cave the flaring was even more distinct since Meadows was able to send light from different angles into the lens.
“With exteriors you are putting on quite a lot of NDs which can make it harder to flare, however, we were able to achieve this by using a mirror to fire sunlight back through the lens. The streaks produced across the lens are terrific.”
Meadows recorded in REDlog RAW to Rec 709 with the camera set to Legacy. The grade was supervised by freelance senior colorist Dan Moran.
Hawkins’ script breaks into three sections: it starts in the desert camp, a car chase forms the film’s backbone, and a VFX spectacular provides the finale.
“Some of the sequences I wrote without restriction which was clearly challenging for our budget and basically boils down to how the heck am I going to get this thing in camera,” Hawkins says.
“I wanted to work with miniatures and special effects as much as possible backed up with plate photography because I believe that that lends greater authenticity to the visuals.
“I hope Origins will be seen as a celebration of classic eighties action adventure movies that will be embraced by the fan communities of both Star Wars and Indiana Jones.”
Star Wars: Origins is produced by Phil Hawkins and executive produced by Gary Cowan of Velvet Films. Release is being timed for December 2019 to coincide with Star Wars: The Rise of Skywalker.

Thursday, 4 July 2019

How AI is reinventing Visual Effects


IBC
AI/ML and deep learning are having a huge impact on computer graphics research, with the potential to transform VFX production.
In Avengers: Endgame, Josh Brolin’s performance was flawlessly rendered into the 9ft super-mutant Thanos by teams of animators at Weta Digital and Digital Domain. In a sequence from 2018’s Solo: A Star Wars Story, the 76-year-old Harrison Ford appears pretty realistically as his 35-year-old self playing Han Solo in 1977.
Both examples were produced using artificial intelligence and machine learning tools to automate parts of the process but while one was made with the full force of Hollywood, the other was produced apparently by one person and uploaded to the Derpfakes YouTube channel.
Both demonstrate that AI/ML can not only revolutionise VFX creation for blockbusters but also put sophisticated VFX techniques into the hands of anyone.
“A combination of physics simulation with AI/ML generated results and the leading eye and hand of expert artists and content creators will lead to a big shift in how VFX work is done,” says Michael Smit, CCO of software makers Ziva Dynamics. “Over the long-term, these technologies will radically change how content is created.”
Simon Robinson, co-founder at VFX tools developer Foundry says: “The change in pace, the greater predictability of resources and timing, plus improved analytics will be transformational to how we run a show.”
Over the past decade 3D animations, simulations and renderings have reached a fidelity, in terms of photorealism or art direction, that looks near-perfect to the audience. There are very few effects that are impossible to create, given sufficient resources (artists, money), including challenges such as crossing the uncanny valley for photorealistic faces.
More recently the VFX industry has focussed most of its efforts on creating more cost-effective, efficient, and flexible pipelines in order to meet the demands for increased VFX film production.
For a while, many of the most labour-intensive and repetitive tasks, such as matchmove, tracking, rotoscoping, compositing and animation, were outsourced to cheaper foreign studios, but with the recent progress in deep learning many of these tasks can not only be fully automated but also performed at minimal cost and extremely fast.
As Smit explains: “Data is the foundational element, and whether that’s in your character simulation and animation workflow, your render pipeline, or your project planning, innovations are granting the capability to implement learning systems that are able to add to the quality of work and, perhaps, the predictability of output.”
Manual to automatic
Matchmoving, for example, allows CGI to be inserted into live-action footage while keeping scale and motion correct. It can be a frustrating process because tracking camera placement within a scene is typically a manual process and can sap more than 5% of the total time spent on the entire VFX pipeline.
Foundry has a new approach that uses algorithms to more accurately track camera movement from metadata captured by the camera at the point of acquisition (lens type, how fast the camera is moving, etc.). Lead software engineer Alastair Barber says the results have improved the matchmoving process by 20%; the concept was proved by training the algorithm on data from DNEG, one of the world’s largest facilities.
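Foundry has not published its method, but the general idea of letting acquisition metadata seed a camera solve can be sketched with OpenCV: if the focal length and sensor width are known from the lens data, the intrinsic matrix does not have to be estimated from the footage, and a pose can be recovered from a handful of tracked points. The values below are assumed examples, not Foundry's algorithm:

```python
# Minimal sketch of a metadata-assisted camera solve (not Foundry's method):
# lens metadata fixes the intrinsics, so cv2.solvePnP can recover the camera
# pose from a few surveyed points tracked in the frame.
import numpy as np
import cv2

# Metadata recorded at acquisition (assumed example values).
focal_mm, sensor_width_mm = 35.0, 24.89
image_w, image_h = 1920, 1080

fx = focal_mm / sensor_width_mm * image_w          # focal length in pixels
K = np.array([[fx, 0, image_w / 2],
              [0, fx, image_h / 2],
              [0,  0, 1]], dtype=np.float64)

# Surveyed 3D points on the set floor (metres) and where the 2D tracker
# found them in this frame (pixels) -- illustrative numbers only.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
image_pts = np.array([[860, 640], [1110, 648], [1122, 462], [872, 455]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print("solved:", ok)
print("camera rotation (Rodrigues):", rvec.ravel())
print("camera translation:", tvec.ravel())
```

The benefit claimed for the metadata approach follows directly: with intrinsics known in advance, fewer tracked features and fewer solver iterations are needed before CG can be locked to the plate.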
For wider adoption studios will have to convince clients to let them delve into their data. Barber reckons this shouldn’t be too difficult. “A lot of this comes down to the relationship between client and studio,” he says. “If a studio has good access to what is happening on set, it’s easier to explain what they need and why without causing alarm.”
Rotoscoping, another labour-intensive task, is being tackled by Australian company Kognat’s Rotobot. Using its AI, the company says a frame can be processed in as little as 5-20 seconds. The accuracy is limited to the quality of the deep learning model behind Rotobot but will improve as it feeds on more data.
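Rotobot's own model is proprietary, but the underlying idea can be sketched with an off-the-shelf segmentation network: each frame is run through a pretrained model and the "person" class is turned into a matte that an artist refines rather than draws from scratch. A minimal Python illustration, using torchvision's DeepLabV3 purely as a stand-in:

```python
# Sketch of AI-assisted rotoscoping: a pretrained segmentation network
# produces a per-frame matte for the "person" class. A stand-in for tools
# like Rotobot, whose actual model is not public.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

PERSON = 15  # "person" class index in the Pascal VOC label set used by this model

def roto_matte(frame_path: str) -> Image.Image:
    """Return a black-and-white matte isolating people in one frame."""
    frame = Image.open(frame_path).convert("RGB")
    with torch.no_grad():
        out = model(preprocess(frame).unsqueeze(0))["out"][0]
    mask = (out.argmax(0) == PERSON).to(torch.uint8) * 255
    return Image.fromarray(mask.numpy(), mode="L")

# Example use on a hypothetical plate frame:
# roto_matte("plate_frame_0001.png").save("matte_0001.png")
```

The per-frame timing Kognat quotes sits in the same ballpark as a single forward pass like this; the artist's job then shifts from drawing splines to correcting the model's edges.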
Other companies are exploring similar image processing techniques. Arraiy has written an AI that can add photorealistic CGI objects to scenes, even when both the camera and the object itself are moving. An example of its work has been showcased by The Mill.
Software first developed at Peter Jackson’s digital studio Weta for The Planet of the Apes films has been adapted in California by Ziva to create CG characters in a fraction of the time and cost of traditional VFX. Ziva’s algorithms are trained on physics, anatomy and kinesiology data sets to simulate natural body movements including soft tissue movements like skin elasticity and layers of fat.
“Because of our reliance on physics simulation algorithms to drive the dynamics of Ziva creatures, even in 10,000 years, when a new species of aliens rules the earth and humans are long gone, if they can ‘open’ our files they’d be able to use and understand the assets,” says Smit. “That’s a bit dark for humans but also really exciting that work done today could have unlimited production efficiency and creative legacy.”
Smit estimates that a studio would probably need to create fewer than five basic ‘archetypes’ to cover all of the creatures required for the majority of VFX jobs.
“Conventional techniques require experts, some with decades of experience, to be far too ‘hands-on’ with specific shot creation and corrective efforts,” he argues. “This often demands that they apply their artistic eye to replicate something as nuanced as the physical movement or motion of a secondary element in the story. Whereas we know that simulation and data-driven generative content can in fact do that job, freeing up the artist to focus more on bigger more important things.”
Democratising mocap
Similar change is transforming motion capture, another traditionally expensive exercise requiring specialised hardware, suits, trackers, controlled studio environments and an army of experts to make it all work.
RADiCAL has set out to create an AI-driven motion capture solution with no physical markers or hardware at all. It aims to make the process as easy as recording video of an actor, even on a smartphone, and uploading it to the cloud, where the firm’s AI sends back motion-captured animation of the movements. The latest version promises 20x faster processing and a dramatic increase in the range of motion it can handle, from athletic moves to combat.
San Francisco’s DeepMotion also uses AI to re-target and post-process motion-capture data. Its cloud application, Neuron, allows developers to upload and train their own 3D characters — choosing from hundreds of interactive motions available via an online library. The service is also claimed to free up time for artists to focus on the more expressive details of an animation.
Pinscreen is also making waves. It is working on algorithms capable of building a photo-realistic, animatable 3D avatar from just a single still image. This is radically different to VFX work where scanning, modelling, texturing and lighting are painstakingly achieved, such as ILM’s posthumous recreation of Carrie Fisher as Princess Leia or MPC’s re-generation of the character Rachael in Blade Runner 2049.
“Our latest technologies allow anyone to generate high-fidelity 3D avatars out of a single picture and create animations in real-time,” says Pinscreen’s Hao Li. “Until a year ago, this was unthinkable.”
Pinscreen’s facial simulation AI tool is based on Generative Adversarial Networks, a technique for creating new, believable 2D and 3D imagery from a dataset of millions of real 2D photo inputs. One striking example of synthesising photoreal human faces can be seen at thispersondoesnotexist.com.
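The adversarial principle itself is compact enough to sketch. The toy training loop below (a vanilla GAN on flattened images, with random tensors standing in for a photo dataset) is nothing like Pinscreen's production system, but it shows the generator/discriminator contest that GANs are built on:

```python
# Minimal GAN training loop: a generator maps noise to images while a
# discriminator learns to tell real photos from synthesised ones. A toy
# illustration only; production face-synthesis models are far larger.
import torch
import torch.nn as nn

IMG, NOISE = 64 * 64, 128      # flattened 64x64 greyscale images, latent size

G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    b = real_batch.size(0)
    fake = G(torch.randn(b, NOISE))

    # Discriminator: real images should score 1, generated images 0.
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Stand-in for a real face dataset: random tensors in place of photo batches.
for _ in range(100):
    train_step(torch.rand(32, IMG) * 2 - 1)
```

As the two networks push against each other, the generator is forced towards images the discriminator cannot distinguish from the training photos, which is exactly the effect on show at thispersondoesnotexist.com.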
Such solutions are building towards what Ziva’s Smit calls “a heightened creative class”.
On the one hand, this will enable professional VFX artists and animators to hand technical work over to automation, in theory permitting more freedom for human creativity; on the other, it will democratise the entire VFX industry by putting AI tools in the hands of anyone.
The videos posted at Derpfakes, of which Solo: A Star Wars Story is one, demonstrate the capabilities of image processing using deep learning. An AI has analysed a large collection of photos of a person (Ford in this case) and compiled a database of them in a variety of positions and poses. Then it can perform an automatic face replacement on a selected clip.
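The Derpfakes pipeline itself is not published, but most hobbyist face replacements of this kind rest on a shared-encoder, dual-decoder autoencoder. The sketch below shows that architecture in outline; the layer sizes are arbitrary and the data random, purely to illustrate how encoding one actor's face and decoding it with the other actor's decoder performs the swap:

```python
# Architectural sketch of the shared-encoder / dual-decoder approach behind
# most "deepfake" face replacements. One encoder learns pose and expression
# from aligned face crops of both people; each person gets their own decoder.
import torch
import torch.nn as nn

FACE = 64 * 64 * 3   # flattened aligned face crops

def make_decoder() -> nn.Module:
    return nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, FACE), nn.Sigmoid())

encoder = nn.Sequential(nn.Linear(FACE, 1024), nn.ReLU(), nn.Linear(1024, 256))
decoder_a = make_decoder()   # trained to reconstruct person A (e.g. the on-screen actor)
decoder_b = make_decoder()   # trained to reconstruct person B (e.g. archive faces of Ford)

def reconstruction_loss(faces: torch.Tensor, decoder: nn.Module) -> torch.Tensor:
    """Training objective: each decoder must rebuild its own person's faces."""
    return nn.functional.mse_loss(decoder(encoder(faces)), faces)

def swap(face_of_a: torch.Tensor) -> torch.Tensor:
    """Inference: encode A's expression and pose, render it with B's appearance."""
    return decoder_b(encoder(face_of_a))

print(swap(torch.rand(1, FACE)).shape)   # torch.Size([1, 12288])
```

Because the encoder only ever sees faces and the decoders only ever rebuild their own person, the swap at inference time transfers performance (pose, expression) while substituting identity, which is why a large, varied photo collection of the target is the key ingredient.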

Touch of a button
Recent work at USC focusses on generating anime illustrations from models trained on massive collections of artwork from thousands of artists. “Our algorithm is even capable of distinguishing the drawing technique and style of these artists and generating content that was never seen before using a similar style,” Li reveals. “I see how this direction of synthesising content will progress to complex animations, and arbitrary content, in the near future.”
Progress in this field is rapid, especially given the openness of the ML and computer vision community as well as the success of open publication platforms such as arXiv. Further research is needed to develop efficient learned 3D representations, as well as interpretations of higher-level semantics.
“Right now, AI/ML for VFX production is in its infancy, and while it can already automate many pipeline-related challenges, it has the potential to really change how high-quality content will be created in the future, and how it is going to be accessible to end-users,” says Li.
Human touch
While AI/ML algorithms can synthesise very complex, photorealistic, and even stylised image and video content, simply sticking a ‘machine-learning’ label on a tool isn’t enough.
“There’s a lot of potential to remove drudge work from the creative process but none of this is going to remove the need for human craft skill,” Robinson insists. “The algorithmic landscape of modern VFX is already astonishing by the standards of twenty years ago; and so much has been achieved to accelerate getting a great picture, but we still need the artist in the loop.
“Any algorithmic-generated content needs to be iterated on and tuned by human skill. We’re not in the business of telling a director that content can only be what it is because the algorithm has the last word. But we are going to see a greater range of creative options on a reduced timescale.” 
The future of filmmaking is AI and Realtime
A proof of concept led by facility The Mill showcased the potential for real-time processes in broadcast, film and commercials productions.
‘The Human Race’ combined Epic’s Unreal game engine, The Mill’s virtual production toolkit Cyclops and Blackbird, an adjustable car rig that captures environmental and motion data.
On the shoot, Cyclops stitched 360-degree camera footage and transmitted it live to the Unreal engine, producing an augmented reality image of the virtual object, tracked and composited into the scene using computer vision technology from Arraiy. The director could see the virtual car on location and was able to react live to lighting and environment changes, customising the scene with photo-real graphics on the fly.
The technology is being promoted to automotive brands as a sales tool in car showrooms, but its uses go far beyond advertising. Filmmakers can use the tech to visualise a virtual object or character in any live action environment.
A short film using the technology is claimed to be the first to blend live-action filmmaking with real-time game engine processing.



Tuesday, 2 July 2019

5G Speeds Just Aren't Good Enough: 6G and 16K Are Inevitable

Streaming Media
5G is barely out of the blocks but the starting gun has already been fired on what comes next. Research has begun into wireless technology that may be branded 6G, and the U.S. wants a head start.
"I want 5G, and even 6G, technology in the United States as soon as possible. There is no reason that we should be lagging behind on... something that is so obviously the future."
President Trump’s tweets in February were much derided, but the sentiment that the country should lead in communications tech was a message the Federal Communications Commission clearly received. A month later, the FCC voted to legalize tests in the terahertz wave spectrum and to issue 10-year licenses for experiments for “6G, 7G, or whatever is next.”
“These way-up there airwaves represent the new frontier,” stated Jessica Rosenworcel, an FCC commissioner. “There is something undeniably cool about putting these stratospheric frequencies to use and converting their propagation challenges into opportunity.”
She added, “I fear our unwillingness to do so will balkanize spectrum and cut short the possibilities.”
In contrast to 5G, which uses frequencies between 30 and 300 gigahertz whose wavelengths are measured in millimetres (mmWave), the frequencies the FCC has put up for grabs are in the 95 gigahertz (GHz) to 3 terahertz (THz) range. The top end of that—from 300GHz upwards—is submillimetre waves.
The higher the frequency, the shorter the wavelength, and the more bandwidth, and therefore data, can be carried. Like 5G, any signal in the THz range will suffer from attenuation and interference, which will likely require both extreme directional beamforming and a density of antennae an order of magnitude greater than with 5G.
If these challenges are overcome, one outcome is download speeds 1000 times faster than the mere gigabit speeds of 5G.
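The back-of-the-envelope numbers are easy to check. Wavelength is the speed of light divided by frequency, and Shannon's capacity formula shows throughput scaling linearly with the bandwidth a band can offer; the SNR figure below is an assumed example:

```python
# Back-of-the-envelope numbers behind "higher frequency, more capacity".
# Wavelength is c/f; Shannon capacity scales linearly with channel bandwidth.
import math

C = 3e8  # speed of light, m/s

for label, f in [("mmWave, 30 GHz", 30e9), ("mmWave, 300 GHz", 300e9),
                 ("sub-THz, 3 THz", 3e12)]:
    print(f"{label}: wavelength = {C / f * 1000:.2f} mm")

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float = 10.0) -> float:
    """Channel capacity C = B * log2(1 + SNR), returned in Gbps."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e9

print(f"1 GHz channel  : ~{shannon_capacity_gbps(1e9):.1f} Gbps")
print(f"10 GHz channel : ~{shannon_capacity_gbps(10e9):.1f} Gbps")  # 10x bandwidth, 10x capacity
```

A 3 THz carrier has a wavelength of a tenth of a millimetre, and ten times the channel bandwidth buys ten times the capacity at the same signal-to-noise ratio, which is where the "1,000 times faster" ambitions come from.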

Some observers have suggested that if 5G goes according to plan there will be no need for a 6G. Others predict that something like 6G might be needed to shore up the parts of the 5G implementation that have yet to take root. Another school believes that the evolution of network technology is inevitable and that major leaps happen roughly every decade—so we’ll be due another by 2030.

“A new mobile generation appears every 10 years, and so 6G will emerge around 2030 to satisfy all the expectations not met with 5G, as well as new ones to be defined at a later stage,” explained Matti Latva-aho, the Academy Professor at the University of Oulu in Finland.
Latva-aho is leading the $284 million-funded 6Genesis project to develop the components needed for 6G systems by researching “distributed intelligent wireless computing, device and circuit technologies, and vertical applications and services.”
The ITU, part of the United Nations, has assigned a task group to the matter called Network 2030. The working assumption is that in a decade we’ll be dealing in terahertz radio frequencies.
The Chinese have gone public with one study. The Southeast University in Jiangsu Province joined in predictions that 6G technologies will go into commercial operation by 2030 noting that “6G competition has already begun among many enterprises.” 
We’ve barely scratched the surface of 5G’s promised superhuman capabilities to drive autonomous cars and gift us media and entertainment applications like 8K live virtual reality, multi-player real-time AR gaming and even holographic lightfields.
According to NYU Professor Ted Rappaport, however, 5G won’t be good enough.
“The use of mmWave in 5G wireless communication will solve the spectrum shortage in current 4G cellular communication systems that operate at frequencies below 6 GHz,” he writes in a paper published by the IEEE. “The increasing number of new applications such as VR/AR, autonomous driving, internet of things (IoT), and wireless backhaul (as a replacement for labor-intensive installation of optical fiber), as well as newer applications that have not been conceived yet, will need even greater data rates and less latency than what 5G networks will offer.”
That finding may come as a shock to telcos wanting to monetize next-gen consumer apps tomorrow in order to pay back investment in today’s 5G infrastructure.
Rappaport’s paper is important for being the most widely quoted on the subject of 6G. His lengthy academic text has been reduced in press reports to mean that "6G will stream human brain-caliber AI to wireless devices."
That’s bad shorthand for his own explanation, which is that terahertz frequencies will likely be the first wireless spectrum that can provide the real-time computations needed for wireless remoting of human cognition. In short, that is true AI.
Media reports could equally well have been headlined "6G will steam human brains via wireless devices," since the “out there” frequencies being explored are borderline radioactive.
As Rappaport acknowledges, “Ionizing radiation, which includes ultraviolet, x-rays, galactic radiation, and gamma-rays, is dangerous since it is known to … lead to cancer.”
Some of these fears surround mobile phone use today, but the professor is optimistic a workaround can be found.

Perhaps just as speculative are potential applications for 6G. The short wavelengths at mmWave and THz will allow massive spatial multiplexing in hub and backhaul communications, as well as incredibly accurate sensing and imaging.

“The ultra-high data rates will enable super-fast download speeds for computer communication, autonomous vehicles, robotic controls, high definition holographic gaming, entertainment, video conferencing, and high-speed wireless data distribution in data centers,” says Rappaport.
In addition to the extremely high data rates, there are promising applications for future systems “that are likely to evolve in 6G networks and beyond." These fall into categories like wireless cognition, sensing, imaging, wireless communication, and localization or positioning.
One example is the opening up of a new dimension of wireless that enables future devices to perform “wireless reality sensing”: gathering a map or view of any location, leading to detailed 3D maps of the world created on the fly, uploaded, and shared in the cloud by future devices.
It would make mobile game developer Niantic’s attempt at global real-time augmented reality location mapping look so last century. It would supercharge attempts by Magic Leap to superimpose a dimensional internet on the world around us.
As exciting as that all is, it might as well be science fiction.
Telcos have to deal with the now, which includes the early-days rollout of 5G networks in select urban areas, where the main consumer use case is nothing more exotic than enhanced broadband.
“I think there will be a 6G but we have no way of knowing what it is,” says Kester Mann, director of consumer and connectivity at analyst CCS Insight. “The media and communications industry will always have visionaries wanting to look into future, and there has to be a focus on where you put future investment and R&D. But you could argue that the evolution of 4G has an awful long way to go. There’s plenty of room for 4G growth and it could perform many tasks and services that perhaps haven’t been invented yet, so all talk of 6G is more than a little premature.”
The noise about 6G overshadows the prosaic development of 5G and is analogous to a situation in the broadcast industry with UHD.
With 4K UHD still limited in distribution even in mature media economies like the U.K., and with many broadcasters worldwide (including some in the U.K.) yet to transition to HD, the increasing chatter around 8K UHD seems a distraction.
8K production kit is coming to market. For example, Blackmagic Design’s entire NAB 2019 messaging was around 8K. But this heavy metal—dedicated black-box hardware working in SDI—is at the opposite end of the spectrum from the leaner flexi-workflow possibilities of IP, which has barely got to grips with 4K UHD live production.
Certainly, 8K has its place as a recording format where the greater data overhead can help render higher quality visual effects or deliver more information to the final image for cinematographers wanting to mix resolution, aspect ratios, and sensor size.
The format will also find a home in live production for techniques such as region of interest—extracting 4K or HD images from a panoramic one.
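Region of interest is, at its core, a crop: a fixed 8K panoramic camera covers the whole scene and a UHD or HD "virtual camera" window is cut out of it, which can be moved frame by frame to follow the action. A minimal numpy sketch with an assumed 8K plate:

```python
# Region-of-interest extraction: cut a 4K "virtual camera" window out of a
# fixed 8K panoramic frame. The frame here is an empty assumed example.
import numpy as np

frame_8k = np.zeros((4320, 7680, 3), dtype=np.uint8)   # 8K UHD plate

def extract_roi(frame: np.ndarray, centre_x: int, centre_y: int,
                out_w: int = 3840, out_h: int = 2160) -> np.ndarray:
    """Cut a 4K window centred on (centre_x, centre_y), clamped to the frame."""
    x = min(max(centre_x - out_w // 2, 0), frame.shape[1] - out_w)
    y = min(max(centre_y - out_h // 2, 0), frame.shape[0] - out_h)
    return frame[y:y + out_h, x:x + out_w]

virtual_camera = extract_roi(frame_8k, centre_x=5000, centre_y=1800)
print(virtual_camera.shape)   # (2160, 3840, 3): a 4K cut-out from the 8K plate
```

Because the window coordinates can change every frame, one locked-off 8K camera can feed several reframed 4K or HD outputs at once, which is the live-production appeal.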
At NAB, the 8K Association was formed to promote the format’s introduction. Its members include Hisense, Panasonic, Samsung Electronics, and TCL Electronics, and, in truth, it is display manufacturers that have the most to gain from pressing the 8K button so early.
Spanish-based, Japanese-owned streaming movie service Rakuten TV plans to add 8K content to its catalogue by the end of the year and has announced partnerships with TV-makers Samsung, LG, Philips, and Hisense. There’s no sense of how much bandwidth that will require of home users, and no word yet on what that 8K content will be, though some of the latest TV displays do come with a function to auto-upscale HD or 4K content to 8K.
Naysayers to 8K, as well as mobile operators suggesting that HD HDR is sufficient for video over 5G, argue that the human eye is blind to pixel resolution beyond 4K (especially on small screens).
That’s beside the point. The tech will be proved, use cases will be found, and the price per bit will come down. What’s more, other academic research suggests that our brains can, in fact, resolve more information beyond the retina.
“We need to envisage 16K and 32K—there will be certain applications for that,” Professor Kyoung-Min Lee from Seoul’s National University told lifestyle magazine EFTM. “But the limitation is infinite.”
You can bet Japanese broadcaster NHK already has 16K in its labs.

Monday, 1 July 2019

Proper diagnosis saves The Good Karma Hospital from location drama

copywriting for VMI
Hit ITV medical drama The Good Karma Hospital required a top-to-toe check-up of its own to keep camera equipment operational in the heat and dust of a location shoot in Sri Lanka.
The show stars Amanda Redman, Amrita Acharia and Neil Morrissey as medics working at an under-resourced and overworked cottage hospital in south India.
The first season of Tiger Aspect's production was dogged by blocked camera air filters and other problems associated with heat and dust, which caused an enormous amount of damage to the kit and, consequently, extremely expensive repair charges.
When VMI were invited to supply gear for the second and third series, they put in place additional checks and measures to ensure the production didn't face a repeat of the issues.
“Sri Lanka is a beautiful country to work in but filming in any tropical country brings well-documented issues and challenges,” says Graham Frake BSC, director of photography for season 3 of GKH. “Good facilities, testing and expert technical back-up are essential in camera prep, and even more so when the location is 5,500 miles away. VMI were in a good place for the challenges to come and knew what to expect.”
Frake took a similar camera package from VMI to that hired for series 2: three Alexa Minis, sets of Cooke S4/i primes and Angenieux Optimo zooms.
Throughout the three-month shoot temperatures only occasionally dipped below 30°C (at night) and were often much higher during the day. The humidity was extremely high and heavy rain was a regular occurrence. However, away from the coastal strip, where most of the locations were, it could be dry and dusty.
“One of our most idyllic locations was ‘Greg’s Bar’.  This was a set constructed only a few metres from the Indian Ocean on a beautiful beach.  Paradise to look at but torturous and incredibly demanding to work on,” says Frake.
“We had two tropical suns to deal with - one in the sky and the other one glaringly reflected off the ocean. The noise of surf made conversation and communications difficult and the breeze blew fine sand mixed with salt spray.”
In order to protect the sensors from airborne salt and sand it was not possible to change lenses on the exterior set, so much of the photography was done on zooms. When a lens change had to be made, the cameras were taken from the set to an adjacent room where it could be done safely. A film of salt spray and dust constantly built up on filter surfaces and needed to be removed regularly.
“We put a strict regime in place that was adhered to by our disciplined camera crew from shoot day one,” Frake explains. “Routinely, when cameras were on set, they were protected by umbrellas and covered in a lightweight white fabric to help keep them cool. Small portable fans also helped. The show’s Teradeks became uncomfortably hot so cold gel packs covered in fabric were used to reduce the possibility of condensation.”
At the end of every day the cameras and equipment were cleaned before being loaded onto the camera truck and transported back to base where they were placed in a room with case lids open and dehumidifiers working to help protect against rust and moisture in the electronics.
The shoot was scheduled around two blocks over the three months. It was pre-planned for Kevin Oaten, VMI's operational director, to fly out to Sri Lanka at the halfway stage and service the cameras mid-production, on location, in as clean a room as he could find.
“This was essential,” says Frake. “Not just to fly out between blocks to strip down and service the hard-working cameras but also for the opportunity for feedback from him about how well our camera care was working and what - if anything - we could do better. 
“We also had the opportunity to swap out one of our Teradeks which Kevin brought out with him from a 600 to 1000 for better reception. We were in good hands with VMI and had 24/7 support.”
Oaten stayed for six days, covering the first day of block 2 shooting and ensuring everything had a clean bill of health.
The harsh working environment didn't just affect the technical equipment. It was tough for the hard-working crew too - moving gear across soft sand is not easy.
“My job was nowhere near as demanding as the challenges my camera team faced to keep the cameras safe and working. I had two great 1st ACs (Shirley Schumacher and Jo Smith) who, together with the Sri Lankan crew, were amazing.”
VMI’s support extended to the transport of the camera batteries.
Traditionally, drama productions have used 100WH batteries, but as cameras are loaded up with more and more peripherals, the power demand on batteries increases. Tiger Aspect's team were keen to try out Anton Bauer's new 150WH XT batteries. 
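The arithmetic behind the move to higher-capacity packs is straightforward: runtime is capacity in watt-hours divided by the rig's draw in watts. The draw figures in the sketch below are assumed examples for a camera dressed with peripherals, not measured values from this production:

```python
# Simple battery runtime arithmetic: minutes of runtime = Wh / W * 60.
# Draw figures are assumed examples, not measurements from the GKH shoot.
def runtime_minutes(capacity_wh: float, draw_w: float) -> float:
    return capacity_wh / draw_w * 60

for capacity in (100, 150):
    for draw in (40, 60):   # camera body alone vs. body plus monitor, transmitter, etc.
        print(f"{capacity} Wh battery at {draw} W draw: ~{runtime_minutes(capacity, draw):.0f} min")
```

At a heavier draw the extra 50Wh translates into roughly half an hour more per battery, which is the practical case for the larger packs on a peripheral-laden rig.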
Since lithium batteries are classed as dangerous goods and can pose a fire hazard, strict regulations preventing their carriage in the hold of aircraft had come into effect before production commenced. They could, though, be carried as hand luggage.
“When preproduction began the production team and crew members travelling to Sri Lanka took two 150WH batteries out with them (twenty in total) to ensure they were ready for principal photography.
“Nothing was too much trouble at VMI in prep,” Frake adds. “Bespoke cables, splitters and rigging plates were all made or purchased on request. The support and experience of VMI, combined with the disciplined working practices put into place by the entire team, made this a happy, enjoyable and almost trouble-free shoot.”
Tiger Aspect plans to begin production on Good Karma Hospital series four soon.