Wednesday, 18 November 2020

AWS Takes SDI in the Cloud to Next Level

Streaming Media

In September AWS unveiled AWS Cloud Digital Interface (CDI), a new network technology enabling customers to migrate uncompressed live video workflows to the cloud. 

https://www.streamingmedia.com/Articles/News/Online-Video-News/AWS-Takes-SDI-in-the-Cloud-to-Next-Level--143971.aspx

AWS’ ambition has taken a further step with a proof of concept live production workflow built with Grabyo’s cloud-based video production kit and the production expertise of LTN Create. 

 

Beyond the immediate demonstration, AWS is planning to create a standard for interoperable video transport for live workflows in the cloud. SDI is the analogy, although SMPTE ST 2110 already exists. Like ST 2110, AWS wants to retain the useful attributes of SDI (namely interoperability and reliability between machines) but do so in a way that makes sense for moving data around in the cloud. There’s more on this below and it’s arguably the bigger story. 

 

“We want CDI to have the same interoperability quality as SDI in the cloud,” said David Griggs, senior product manager, AWS Media Services. “If we don’t have that, all we end up doing is building processing islands that don’t talk to each other, and we miss [the opportunity]. So, we’re really serious about CDI becoming a product that brings [that] level of interop.” 

 

Uncompressed live demo explained 

 

But first, the demo itself, which was not shown live. It comprised a hybrid architecture in which live HD MPEG video feeds are brought in over the LTN transport network from a venue (in this case an American football stadium) and decoded (in this case in a fixed-location facility run by LTN, though it could be done in the cloud). The LTN Create team switches and integrates this into an uncompressed clean feed, which is handed back to the network and delivered into the AWS cloud via MediaConnect (AWS’ live transport stream over IP service). 

 

In this case, MediaConnect feeds a live production solution provided by Grabyo in the cloud, and the demo represented the first time the partners had shown a full end-to-end live sports workflow. 

 

Gareth Capon, CEO at Grabyo, explains: “Once the feeds are received by the public cloud into Grabyo we replicate that feed three times to create three different graphics outlets. These could be for regionalised production or different distribution platforms. These are sent back uncompressed using CDI for transport. So, we’re essentially taking one high fidelity stream, replicating it three times, moving that into three different production instances at low latency and repackaging that with different graphics outputs and sending it back over the AWS network into LTN for presentation.” 

Since the LTN network is multicast-enabled it could potentially take each of those three unique feeds into a hundred or thousands of locations, whether local broadcaster, MVPD, vMVPD, digital owned-and-operated platform or a social network. 

 

“A very common next step in the process for sports would be to handoff to a copyright protection system before handing off downstream,” says Rick Young, SVP, Head of Global Products at LTN Global. “The video is replicated in uncompressed fashion so there’s no generational loss from continued encoding and decoding.” 

 

The demo diagram represents three typical hand-offs 

 

Benefitting the future of live from sports to education 

 

The vendors talked up what this technology could do for future live production. From a presentation standpoint, for example, the number of variations is limitless and could be used to create tens or hundreds of different outputs from a single event. 

 

“From a control point of view you can do this with a much smaller number of people who could be based anywhere,” said Capon. “Using CDI in the cloud offers significantly more flexibility at a much lower price point than traditional host broadcast production out of a truck.” 

 

Indeed, just three people worked on this production, each in different locations.   

 

Other things that could be achieved far more efficiently at the scale CDI promises include changing the audio for multi-regional distribution. There are format implications for social media too, which will make it much easier to reframe for Instagram Live or Facebook Live.  

 

“You can start to use lightweight workflows to change the output presentation which does have really positive upsides for commercialisation,” Capon said. “With these types of workflows you can put together more live content in more places for more people which democratises the content opportunity whether sport, music, news, corporate or education.” 

 

David Griggs, senior product manager, AWS Media Services, added, “Previously, attempts to do this would have been limited because at some point you hit a resource constraint like CPU, memory or I/O bandwidth. CDI allows you to spread that processing across multiple instances. In this case each replicated feed is being processed on its own instance so ultimately the scalability of the deployment is far less constrained. 

 

“That means you can grow and shrink your cloud-based broadcast infrastructure based on the sophistication of the event instead of having to pre-provision and buy infrastructure you think you’ll need at some point in future. It turns the whole operational paradigm on its head.” 

 

In this demo the sources were 1080i 59.94fps but there should be no real infrastructure restriction with regards to data. On the transmission side, LTN is an MPEG TS-native network at low latency handling 50-100 Mbps feeds and so is agnostic to the underlying format.   
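As a rough back-of-the-envelope comparison (my own arithmetic, not figures from the demo; the 10-bit 4:2:2 assumption is illustrative), the gap between those compressed contribution feeds and a fully uncompressed signal looks like this:

```python
# Rough bandwidth estimate for an uncompressed 1080i 59.94 signal (assumed 10-bit 4:2:2),
# compared with the 50-100 Mbps MPEG-TS contribution feeds mentioned above.
# Illustrative figures only.
width, height = 1920, 1080        # active picture
bits_per_pixel = 20               # 10-bit 4:2:2 -> 10 bits luma + 10 bits chroma per pixel on average
frames_per_second = 29.97         # 1080i59.94 = 29.97 full frames per second

uncompressed_bps = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed active video: {uncompressed_bps / 1e6:.0f} Mbps")   # roughly 1,243 Mbps

for feed_mbps in (50, 100):
    ratio = uncompressed_bps / (feed_mbps * 1e6)
    print(f"vs a {feed_mbps} Mbps contribution feed: roughly {ratio:.0f}x the data")
```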

 

“A bigger challenge is the ability to perform frame accurate synchronization in the cloud across a range of different input devices to platforms and locations,” said Capon. “It has held us back but that is coming into production now.” 

 

“Another important challenge with some of our cloud productions in the last 12 months has been having to heavily compress the outputs of the vision mixer in the cloud when needing to scale. By using CDI and moving video around uncompressed, it protects the fidelity of the production as you scale it up and frees up compute resource (more feeds, more inputs).  

 

“We’re not outputting uncompressed video but in future there will be lots of opportunities for uncompressed video on the distribution side.” 

 

Griggs followed up, “These pre-transmission workflows are largely dependent on data rates supportive of uncompressed video and for some time AWS has not been able to offer its customers a path to translate these workflows to cloud environments. With MediaConnect and CDI we are paving the way and this demo is just the tip of the iceberg.  

 

“The move from traditional FPGA hardware to software-defined workflows makes that transition to higher resolution, higher frame rates so much less challenging. Once we’ve made that pivot as an industry, the incremental change to embrace high fidelity standards is a lot less painful.” 

 

The focus of this demo was not on end to end latency but on the scalable capability of live workflows running in the cloud.  

 

SDI in the cloud and data interop 

 

One challenge which vendors should perhaps think about is how to monitor for compliance and integrity as the number of outputs rises. As you start to automate all of these functions you need to know what is happening and how to make changes – things that live event producers have not had to think about when only outputting one version of a mixed feed. 

 

Capon said, “We did quite a lot of work with the eSports industry over the summer and what we saw there was you had 60-100 players in an event at any one time. It’s rare that you see video of them all in production – and they may all be in different places given everything’s behind closed doors – so to scale up to that workflow using traditional methods is hard. This type of cloud-based network makes it much easier.” 

 

The production possibilities in the cloud only make sense with multi-vendor involvement which is why AWS made an SDK for CDI available which Grabyo used for this demo. 

 

AWS clearly sees a substantial market in being the backbone for live events and an important part of that is having multiple vendors’ tools work interoperably. 

 

Griggs calls this SDI in the cloud. He explains that CDI’s technology stack is proprietary and allows it to move data around at rates that are equivalent to SDI (it can move a frame of video in 8-12 ms, which is well within SDI specs). “We had to use proprietary technology that knows how to survive and thrive in our multi-tenant network; ST 2110 just doesn’t fly in those kinds of environments.” 
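For context on that 8-12 ms figure (my own arithmetic, not AWS’s), a transfer only needs to complete within one frame interval to keep pace with real time:

```python
# Frame intervals for common rates versus CDI's quoted 8-12 ms per-frame transfer time.
# Illustrative arithmetic only.
for fps in (29.97, 50, 59.94):
    interval_ms = 1000 / fps
    print(f"{fps} fps -> one frame every {interval_ms:.1f} ms; "
          f"an 8-12 ms transfer fits inside that window")
```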

 

Separate from the technology stack of CDI is the audio video metadata (AVM) schema, which is not proprietary. This is designed to be an open standard and is in the SDK. AWS wants to promote that through industry bodies and get buy-in from multiple vendors so that it ends up with an interoperability standard. 

“A key part to CDI is an interoperability of the data plane that enables tools from vendors to communicate,” he explains. “To do that, we’re talking to as many partners as we can across the spectrum of live production, playout and master control to ensure we get momentum to deploy their products and services in the cloud. That is essential to the success of cloud production. We need a healthy and vibrant vendor community that is embracing the cloud. 

 

“Secondly, whilst there’s a lot of proprietary tech underneath CDI to provide ‘SDI in the cloud’ with the same quality and latency characteristics as on-prem technologies we don’t want to have proprietary byte-packing. This is the way that bytes are transferred from host to host. We think that necessitates the input of the broadcast community.  

 

To that end AWS is taking its ideas to the VSF as part of its Ground to Cloud, Cloud to Ground initiative (GCCG). “We are not dictating anything,” Griggs declared. “We’re saying ‘here’s a byte-packing order’ and ‘here’s an AVM layer’ that sits on top of the CDI technology that we think makes sense but [industry bodies] shape it, improve it, make it better and we will reimplement into the CDI tech stack.  

 

“We can have an interoperability standard that takes away the friction and the concern around how to extend your existing broadcast infrastructure into an Amazon EC2 environment using CDI. The two will work together just like SDI does on-prem.” 

Saturday, 14 November 2020

Behind the scenes: Ammonite with Stéphane Fontaine, AFC

Words/interview for Red

 https://www.red.com/ammonite

The love affair between two women transcends the boundaries of class and prejudice in director Francis Lee’s follow up to the acclaimed romantic drama God’s Own Country. The based-on-fact relationship between amateur fossil hunter Mary Anning (Kate Winslet) and rich, young bride Charlotte Murchison (Saoirse Ronan) in 1840s England is written by Lee as a passionate paean expressing loneliness.

Lee chose cinematographer Stéphane Fontaine, AFC (A Prophet, Jackie) to capture the visuals. Together, they portray the clash of social spheres and personalities in the wild and brutal Southern English coastline.

“In a way, the storytelling and style mirrors God’s Own Country in that we have two main characters who are not very talkative to say the least,” says Fontaine. “In Ammonite, Francis was particularly interested in the faces because the lack of dialogue means the emotions expressed on faces is even more important.

Another key element for Lee was the ability to shoot long takes. In that regard, it made a big difference for long handheld shots that the camera housing the large sensor was more compact than a bulkier 35mm camera.

He continues, “I first picked RED on Rust and Bone (2012). Since then I’ve shot with DRAGON and HELIUM, but the MONSTRO is such a big leap forward in terms of look, color science and ISO. It sees in the dark a lot better than any other RED before.”

The DP has shot several commercials on MONSTRO, as well as the 2019 feature My Zoe, directed by Julie Delpy. For Ammonite, he paired the camera with Canon K-35 primes, covering full frame and ranging T1.3-1.5 (18mm, 24mm, 35mm, 55mm, 85mm). Engineered in the mid-1970s, the lenses yield vintage-style color rendition and skin tones. “The lenses take a bit of the edge off the sharpness of the digital image,” says Fontaine.

“Another great thing with RED is the ability to change resolution depending on the shot,” he adds. “For instance, I may use a 55mm for a shot in 8K and just walk in a bit and punch in a little more in 7K. Sometimes we even shot 6K. It’s interesting because that means the same lens has very different personalities depending on the resolution you choose.

“He was also interested in showing the hands,” he continues. “As a self-taught paleontologist, Mary works with her hands. Francis was keen on seeing that and concentrating on all the tiny details of body language and physical expression.”

In the mid-19th century, gestures, costume, accent and mannerisms were a coded means of communication for both sexes. Understanding this is key to the filmmaker’s low-key, solemn approach to storytelling.

“For example, we understand Mary and Charlotte’s characters partly in relation to how Charlotte’s husband Roderick (James McArdle) behaves to women,” Fontaine explains. “There are layers to this. Socially, he is bourgeois and both Mary and Charlotte are working class. He is from London and there’s a big difference in this society of being from London and the more isolated and poorer seaside. Thirdly, he is a man, so he knows better; he introduces himself to Mary as an educated man. He is slightly condescending to women. This is the complex stage on which the story is going to develop.”

The cinematographer adds, “This means our camera is not ostentatious. It is quite observational. We’re not telling the audience what to think or what they have to feel.”

Fontaine has worked with RED on several projects, beginning with Rust and Bone on EPIC in 2012. His tool of choice for Ammonite was a RED DSMC2 brain with the MONSTRO 8K VV sensor.

“My approach in the look of the movie was more a photographic than a cinematic one,” he says. “If we wanted to focus on facial features and hands, it felt appropriate to have a big sensor, as if shooting medium format photography.”

“It’s a bit quieter when you’re in 8K and punchier when you shoot 7K,” Fontaine continues. “So, with one lens you obtain two different feels. I used the technique in a lot of scenes. Very often the wide is shot in 8K, unless I wanted to have slightly long lens look for which I’d shoot 7K.”

Ammonite was shot on location in Lyme Regis, a small seaside town on England’s south coast where Anning lived and worked on the nearby cliffs. Lee and Fontaine shot over the winter of 2018 through 2019 using largely natural light.

“One thing we didn’t want to have was sun,” reports Fontaine. “We were blessed because that winter was quite gloomy. We’d discussed whether we wanted a style that looked natural or one more cinematic that would feel like film lighting. We decided that if we wanted to stay true to the story and characters we shouldn’t use all the tools you might normally use for a period movie, like a little haze and foggy atmospheres. The same with VFX. Very often when you shoot a period film, at some point the director or producer expect a big spectacular shot that establishes the whole milieu with hundreds of extras. I knew that Francis didn’t want to have this kind of contrast because it suddenly shouts to an audience that they’re watching a different movie. It would distract from the story. Instead we wanted everything to be fairly austere.”

To add to the unpolished texture, Fontaine lit scenes to appear as if solely illuminated by candles and oil lamps.

“We did some tests on the RED with Kate and Saoirse before shooting, lighting with just one candle and it was really stunning but at the same time I wanted a bit more than this,” he recalls. “I looked for the softest LED light source that I could find and basically used it to augment in a way that hopefully doesn’t look like it was lit by anything else than the candle.”

Fontaine further appreciated the ability to tweak the look in camera. “I am a big fan of FoolControl which is a fantastic [iOS] app developed for the RED to change colors, curves, and contrast. Another option is to use IPP2 to adjust the contrast or the highlight roll-off. It’s very handy and super quick, especially when you don’t have time to go to the DIT and tell them what kind of look you want. It’s something you can do on-the-fly on set literally 5 seconds before shooting.”

Fontaine says he deliberately didn’t watch period drama like The Favourite in prep, saying he didn’t want to be influenced by anything. “If the film you watch in prep is good, it is tempting to steal some ideas but that can mean you lose track of where you want to go for your own story. I believe that Francis and I managed to achieve a distinct look for Ammonite.”

 

Friday, 13 November 2020

DP James Kniest on The Haunting of Bly Manor

British Cinematographer

After an au pair’s tragic death, the owner of an English country manor, Henry Wingrave, hires a young nanny to care for his orphaned niece and nephew. But all is not as it seems as centuries of dark secrets of love and loss are waiting to be unearthed.

https://britishcinematographer.co.uk/dp-james-kniest-on-the-haunting-of-bly-manor/

Horror series The Haunting of Bly Manor is Netflix’s follow-up to The Haunting of Hill House, once again showrun by Mike Flanagan.

Set in 1980s England and based on the gothic romance novellas of Henry James, the show is filmed in Vancouver. Maxime Alexandre photographed the first five episodes with DP James Kniest shooting episodes six, seven, eight and nine.

“Embracing someone else’s design is never ideal since everyone approaches telling a story a little differently,” says Kniest. “I have a good working relationship with Mike and he trusted me enough to do my own thing.”

Kniest inherited the ARRI LF and Signature Primes from Alexandre and the visual language shaped by DP Michael Fimognari on Hill House.

“The main decision I made was to give the series a moodier, darker look in keeping with the narrative arc as we approach its climax,” Kniest says. “I adapted the rear netting (diffusion technique) to be tighter, so the look was less Hallmark-y and minimised blooming in the highlights. I lit more with backlight to lend more fall off on faces while helping to carve people’s figures out and my camera style tends to be less static. I like to be fluid and to move the camera unless there’s a reason not to.”

The main Manor set was built in two separate studios; one housing the downstairs, the other the upstairs. “In almost every scene our characters go up or downstairs so it was tricky to track the shots on different days. The upstairs set had a green floor so we could shoot from top down below. We used a Technocrane to reach from foot to bannister.”

They also filmed on a farm set over winter, occasionally having to shift 3 ft of snow which wouldn’t quite suit a mild British climate. “We brought in a steam truck and melted 5 acres of snow, which turned into a muddy bog. We used the Technocrane so as not to tread around in the mud.”

The exterior set of the Manor grounds and lake also presented an environmental challenge. “SAG rules mean we have to heat the lake to about 80 °F since we have child actors in the water for one scene. With zero-degree air temperature that gave us a steam issue. We were able to get some fans, clear the steam and start rolling the camera before the steam would envelop us again.

“It was useful too, since the natural steam added to the ambient tone of ground fog and the spooky ethereal lake scene. There was another farm a few hundred yards away with bright sodium light, but we had so much steam we could block that out. So much of the fun in filming is leaning into these challenges which can end up being helpful elements.”

Unusually, episode eight was shot in black and white and set in the 1600s. “This was a lot of fun and one of the things that attracted me to the project,” he says. “We created a black and white LUT and monitored BW on set but protected the material by baking in a colour version.”

He employed Cinefade, a programmable accessory that allows the gradual transition between a deep and a shallow depth of field in one shot at constant exposure. “It helped the subject matter to pop off the background. It’s the first time I’ve used it, but it was easy to programme and without much light loss.”

Having finished principal photography by 28 February, editorial was hampered by Covid. That particularly impacted Kniest since he was overseeing the colour grade of all episodes, performed remotely by Corinne Bogdanowicz at Light Iron LA.

“I was viewing on iPad Pro which is not calibrated for HDR. We used Moxion to view episodes and share notes via timecode. It was challenging because some things got lost in the notes. I’m used to sitting with a colourist and both of us being able to see right there when we bring the contrast up or window a face. This was very daunting and I’m nervous about how it translates to the TV versus someone watching on an iPad. It’s so hard to grade without you both looking at a 1000 nit monitor.

“Sometimes colourists and producers err on the bright side of a grade to protect themselves. I like to push it to be as dark as possible but with compression and people viewing on different devices it is a tricky balance to know what to grade for. One of the things about Bly Manor is the nuances in the performance. The idea is to be just light enough not to miss those important story beats.”

Next up for Kniest is Midnight Mass, another Flanagan project for Netflix shooting in Vancouver based around a group of terminally ill young people.

“Each episode is standalone, so I’m looking forward to a lot of variety and to being able to design and create the look from scratch.”

 


Thursday, 12 November 2020

Analysis: Why Framestore snapped up Company 3/Method

IBC

In a torrid year for the VFX industry in which the mass halting of live action shoots forced a number of facilities to downsize, one company has managed to dramatically expand. 

https://www.ibc.org/trends/analysis-why-framestore-snapped-up-company-3/method/6994.article

With the acquisition of US-based creative services group Company 3 and Method (C3M), London-headquartered Framestore has become the world’s second biggest VFX company by headcount. 

C3M, which includes VFX shops Method Studios, Encore and EFILM, has 3,500 ‘artists, experts, engineers and innovators’ on its books, now added to Framestore’s 2,500 employees. 

The 34-year-old company, which made a profit of £19m on turnover of £91m in 2019, has operations in Canada, the US and India. The C3M acquisition boosts capacity in those territories while adding a division in Melbourne. 

The sale was financed by Aleph Capital and Crestview Partners, which will become the majority shareholders of the enlarged company. It is unclear whether these venture capital companies have taken over the 70% share in Framestore owned since 2016 by Chinese state-backed Cultural Investment Holdings. Framestore’s management team, including co-founder and chief executive William Sargent, retain minority stakes. 

Upcoming films on which it has worked include Fantastic Beasts III, a CG and live action remake of The Little Mermaid and The Midnight Sky. 

Pandemic impact 
Framestore’s fortunes contrast with rival VFX giants as Covid-19 ripped through the sector.  

DNEG, the world’s biggest since 2014 with close to 7,000 staff, shut down its episodic unit in LA, reportedly costing 20 jobs. This time last year it had hoped to raise £150 million ($191 million) from an initial public offering which would have valued the company (based in London and with major operations in India) at £600m. Yet as lockdown bit, it sought to introduce salary reductions of between 20% and 25% as a way of preserving jobs. 

“This is not just about our company; this is an industry-wide issue that is affecting all companies in our sector,” Namit Malhotra, the company’s CEO, told Cartoon Brew.

“This decision to introduce salary reductions has not been taken lightly, and our aim is to preserve the jobs of as many of our employees as possible while ensuring that we continue to deliver the best possible work for our clients. To help mitigate the effects of the reductions we are introducing an equity program to help fund the repayment of sacrificed salaries.” 

A few weeks later, under pressure from Bectu in the UK, the facility partly reversed its plan, saying it had found additional funding from business partners. 

That prompted Bectu Assistant National Secretary Paul Evans to comment: “We urge DNEG to salvage what is left of their reputation for being a good employer by withdrawing its ‘dismiss-and-rehire’ threats. They have created a fig-leaf claim about a business-partnership finding the funding, but our members can see this for the sleight of hand that it is.” 

Meanwhile, Technicolor’s earnings slid 48% in the third quarter 2020 with its production services division, encompassing VFX, down 53.7% year-on-year. Technicolor blamed this on, “the pre-Covid-19 delays in awards coming from one key client, and by the subsequent pandemic-related impacts on production around the world.” 

Its VFX and post divisions were hit when 50 sets of dailies stopped overnight in March. In May, the company merged its Mill Film and Mr. X divisions under the Mill Film brand and later filed for Chapter 15 in the US as part of a company-wide restructuring to pay down €1.44 billion (US$1.59 billion) in debt. A debt financing deal worth €420 million and the equitization (debt converted to equity) of €660 million has steadied the ship. 

Christian Roberton, who came up through the ranks at MPC, has been newly promoted to president of Production Services and tasked with streamlining the operation. The company is “now in negotiations on major VFX tentpole projects that were delayed during the first half from one key client.” 

The other century-old film entertainment brand, Deluxe, narrowly averted bankruptcy last October. Nonetheless, Covid-19 couldn’t have come at a worse time for a company trying to re-emerge from $1bn of debt. Its portfolio of VFX and creative post companies split from Deluxe in the summer, operating as C3M under original founder Stefan Sonnenfeld, before being bought by Framestore. 

Disney-owned ILM is the other major VFX company of similar scale to DNEG and Framestore but its staff count and earnings are unclear, although there are no reports of widespread redundancies. 

Likewise, at Weta Digital (with around 1500 permanent employees), Cinesite (around 1000) and Pixomondo (600) there are no reports of permanent damage although Pixomondo CEO Jonny Slow said that cuts such as reduced hours, unpaid leave and redundancies have come into effect for around one-third of his workforce. 

“VFX companies have lots of fixed costs in the form of employees, offices and tech,” he told Deadline. “That relies on a steady stream of new business to keep it going.” 

Scale of resource  
While Covid-19 has undoubtedly accelerated remote production workflows in visual effects as in other parts of the industry, scale of physical presence is still considered important in a business characterised by thin margins. 

“It would be very difficult to deliver a thousand shots on a big studio picture or the 1200 shots we delivered for His Dark Materials Season 2 with pop-ups everywhere,” says Fiona Walkinshaw, Global Managing Director, Film, Framestore. “The results would not look as good. You would not have the consistency of one company delivering that work. Nor would studio or episodic TV clients be comfortable working with multiple smaller groups delivering shots and assets on the very biggest projects. Ultimately, we are all beholden to delivery dates and there’s a lot of pressure put on suppliers to stick to milestones along the way. Our payments are scheduled against those milestones. With lots of little companies it would be hard for a studio to be sure they are going to get all the work on time.” 

The facility’s work also spans VR/AR experiences and graphics for theme parks where studios are able to extend their intellectual property. For example, Framestore played a leading role creating the character ‘Rocket’ for Guardians of the Galaxy and then worked with Disney to develop a Guardians of the Galaxy theme park ride. 

“Our vision for the future of our industry is storytelling across all the media of content delivery — from mobile to Imax; and headset to theme parks,” said Sargent in a release.  

Walkinshaw adds, “Working with the same assets for a TV spin-off and a ride is definitely more efficient under one roof. Efficiency is what a lot of clients are looking for. If you’ve done Spiderman in a movie and they want a ride they want to use the same team.”  

Whether the deliverable is 2K, 4K or higher, rendering pixels remains a huge cost tied to hardware and directly related to a facility’s efficiency. 

“We use Google rendering for peak provisioning alongside our render farm but there is a pressure for us to continually develop rendering technologies that are as quick and efficient as possible,” says Walkinshaw. “We have teams dedicated just to that.” 

Games engines don’t yet provide an answer. “They still cost to licence and to run and they don’t yet deliver feature film quality,” she says. 

While there is movement toward cloud workflows in VFX, the volume of shots for projects like Wonder Woman 1984, on which Framestore bids, means that on-prem workflows are still more efficient. 

“We put in a work from home platform at the start of lockdown with a virtual private network into desktops with encrypted secure access and workflows around dailies sessions. The end product that we’ve delivered remotely has been amazing but it takes a lot of time and effort.  

“More importantly, the long term impact of us all working remotely would be a loss of the creative mindset and mentoring of more junior talent and the cross collaboration of ideas. Those don’t happen as easily when we’re all in our own rooms at home. 

“It’s really important to get back to an environment where we can all work more collaboratively.” 

Brexit visa alert 
Post and VFX companies were not required to close during the lockdown, allowing them to continue by using a combination of working from home via remotely connected equipment or social distancing measures.  

“However, while some had work in their pipelines at the point of lockdown, demand for their services has dropped significantly and (as of May) almost dried up, as the filming of new material halted,” reports Neil Hatton, CEO of trade association UK Screen Alliance. 

Absent Covid-19, spend on film and high-end TV production in the UK was on target to hit record levels this year, according to the BFI Research and Statistics Unit. It estimated that total spend on productions from January to June 2020 would have risen from £699 million to £1.87 billion - the largest ever reported. 

The VFX industry alone is worth £1bn to the UK economy, according to a 2018 BFI report, but Brexit, even with a deal, threatens to undercut this. 

While UK Screen is taking credit for getting almost all artist and production roles added to the Shortage Occupation List of the government’s Points-Based Immigration system for skilled workers, the fact remains that once free movement ends on 1 January 2021, new EU recruits will be required to have visas. UK Screen puts the cost of this at £2.5 million a year for VFX and animation employers. 

“The immediate impact is going to be the very real cost of visas. A third of our crew in film and episodic are from the EU. We recruit heavily from the EU; we have to,” says Walkinshaw. “Our business is growing, we need a mix of talent and we recruit from lots of colleges in Europe because the skillsets are there. The very real impact is cost in what is not a big margin business. It takes a lot to deliver such a high-end creative product where there are a lot of variables in the course of production, and to deliver that and maintain your business is always quite challenging. Any cost impact will be genuinely felt on our bottom line.” 

That said, visas also cost when Framestore recruits in Canada. “London is a more expensive cost centre to start with,” she says. “It’s incredibly disappointing that visa charges are going to be quite considerable.” 

Wednesday, 11 November 2020

Don’t Make An Ass Of aaS

BroadcastBridge

Media companies were already gradually moving towards as-a-service business models before Coronavirus hit. However, X-as-a-Service is not a simple transition and it is not for all products or all companies.

https://www.thebroadcastbridge.com/content/entry/15887/dont-make-ass-of-aas

The crisis is widely understood to have forced their hands into accelerating cloud-native products and the software-as-a-service models that support them.

That’s because as-a-service models are more suited to the unpredictability of modern media markets, which has been taken to an extreme by the pandemic. As-a-Service goes together with the softwarization of formerly hardware-based products as well as with the modularity of being able to tailor workflows at a granular level using cloud-based microservices. The absence of trade shows also reinforces a preference among clients to receive regular firmware upgrades rather than wait longer periods to benefit from a new product.

The current remote working scenarios have forced media companies to adopt new technology tools, most of which are provided through as-a-service schemes.

According to IABM’s latest industry report, the transition to SaaS has financial implications for the supply side of the industry: companies moving from large, infrequent inflows of money to smaller, more regular payments suffer a painful and lengthy cashflow crunch. IABM research shows that this crunch has been exacerbated by the pandemic-induced shock to technology demand, which has forced a move to subscriptions and on-demand billing.

“New product developments need to be offered as a Service,” affirms Julian Fernandez-Campon, CTO, Tedial. “This is good for customers as they can easily try them and decide whether to buy or not and for other vendors who want to integrate their solutions. A good example is the testing of AI services where customers can easily check results without buying the product. Another example is media processing services in the Cloud, where the vendor that is offering the whole solution can 'bundle' those services with a predictable cost.”

Major media technology suppliers such as Avid and Harmonic highlighted this demand shock in their recent earnings calls, pointing to increasing demand in their as-a-service offerings. They also highlighted that their legacy products had suffered from a more pronounced decline in revenues.

“There is certainly a great deal of interest in looking for as-a-service solutions from vendors, but these soft products are far from easily managed,” says Simon Browne, VP product management at Clear-Com. “It requires a fully formed back office and maintenance methodology to be successful. Our own customer experience tells us that only a few customers are ready to handle the licenses and repetition of payments that fall out of this service approach.”

Ciaran Doran, director of marketing, broadcast & media, Rohde & Schwarz, says, “It is often the case that we need to take care not to jump on a buzz in the industry for the sake of it. The X-as-a-Service model serves well in certain circumstances, just like moving to cloud technology makes sense in some areas but not yet everywhere. Software-defined solutions that are modular, COTS-enabled and virtualizable are already here; we’re deploying these right now. I think that one of the most important issues is around the trust that a broadcaster can have in us to deliver the solution they need and support them in the future.”

Avid’s CEO Jeff Rosica says that “without question” the situation is teaching us the deeper value of subscription-based ‘as-a-service’ technologies. He says Avid’s own response to help media companies pivot to working remotely was aided in part by the company’s early focus on cloud-based content workflows, and its transition to providing those tools via subscription.

“This was accomplished by a workforce that’s been defined historically by its expertise in media,” Rosica says. “Going forward it is incumbent on the vendor community to drive the shift at a much faster pace by filling the industry’s talent pool with new kinds of talent including cloud/SaaS professionals and digital natives.”

The issue is best summed up by Graham Sharp, CEO at Broadcast Pix, who warns: “Managing this transition has its perils and very few suppliers in our industry have done it. You have to cross the revenue ‘valley of death’ – convert permanent license orders to annual licences at 20% or 25% of the value. The business must be extremely well-funded to cross this chasm and I am not sure many in our industry are.” 
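Sharp’s ‘valley of death’ is straightforward to put into numbers. The figures below are hypothetical; only the 20-25% ratio comes from his quote:

```python
# Illustrative cashflow comparison when a one-off perpetual licence sale is replaced by an
# annual subscription priced at 25% of its value. Hypothetical figures; only the ratio is from the article.
perpetual_price = 100_000          # revenue recognised up front under the old model
annual_subscription = perpetual_price * 0.25

for year in range(1, 6):
    cumulative = annual_subscription * year
    status = ("break-even reached" if cumulative >= perpetual_price
              else f"shortfall {perpetual_price - cumulative:,.0f}")
    print(f"Year {year}: cumulative subscription revenue {cumulative:,.0f} ({status})")
```

On these numbers the vendor only draws level with the old one-off sale in year four, which is the funding gap Sharp describes.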

Tuesday, 10 November 2020

Microservices and advanced orchestration drive personalized streaming

copy written for Net Insight


By 2025, cloud-native, microservices-based solutions will be mainstream in the broadcast industry. The trend is underlined in a recent Devoncroft survey of media technology trends where executives prioritized cloud computing, virtualization, remote production and IP networking in their top six agenda items.

https://netinsight.net/resource-center/blogs/kenth-innovation-blog-2-microservices/?utm_source=Linkedin&utm_medium=SoMe&utm_content=Kenth-blog-2&utm_campaign=IBC2020

Microservices are a core component of the cloud-native architectures underpinning these substantive changes. But microservices cannot function without orchestration and containerization.

Let’s take a step back. What is a microservice?

Microservices are software-based solutions that enable broadcast workflows to exist in the cloud. They give broadcasters access to ubiquitous infrastructure, full flexibility, service agility and endless scalability.

How do microservices change media production and delivery?

Five years ago, the vast majority of media processing workflows deployed by broadcasters relied on hardware appliances. Production, encoding and playout were siloed workflows, each with its own management, control and operations team. Today, most of those workflows are software-based with similar if not better performance.

With microservices, the media processing workflow is broken down into relevant features (for example, baseband over ST 2110 support, channel encoding, channel branding). These are all independent, using their own resources (i.e., CPU, memory), but communicating together efficiently.
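As a purely illustrative sketch of that decomposition (the service names and toy message-passing below are assumptions, not any vendor’s actual implementation):

```python
# Toy example: three independent media-processing "microservices" that own their own state
# and pass messages to each other rather than sharing a monolithic appliance.
from queue import Queue

def ingest_service(out_q: Queue) -> None:
    """Receives frames (represented here by frame numbers) and publishes them downstream."""
    for frame in range(3):
        out_q.put({"frame": frame, "stage": "ingested"})

def encode_service(in_q: Queue, out_q: Queue) -> None:
    """Encodes each frame using only its own resources, independent of the other services."""
    while not in_q.empty():
        msg = in_q.get()
        msg["stage"] = "encoded"
        out_q.put(msg)

def branding_service(in_q: Queue) -> None:
    """Applies channel branding as the final, equally independent step."""
    while not in_q.empty():
        msg = in_q.get()
        msg["stage"] = "branded"
        print(msg)

ingest_to_encode, encode_to_brand = Queue(), Queue()
ingest_service(ingest_to_encode)
encode_service(ingest_to_encode, encode_to_brand)
branding_service(encode_to_brand)
```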

The need for orchestration

Advanced orchestration is needed to optimize the mapping of microservices onto the COTS servers and cloud infrastructure in order to maximize resource efficiency. Broadcasters can share a common orchestration system for resources and workflows, making deployment and operations simpler.
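One deliberately simplified way to picture what the orchestrator is optimising is a first-fit placement of services onto servers by resource requirement (real orchestration platforms such as Kubernetes use far richer scheduling than this sketch):

```python
# Toy illustration of mapping microservices onto servers to maximise resource efficiency.
# First-fit placement by CPU requirement; illustrative only.
servers = [{"name": "node-1", "cpu_free": 16}, {"name": "node-2", "cpu_free": 16}]
services = [("encode-ch1", 8), ("encode-ch2", 8), ("branding", 4), ("playout", 6)]

placement = {}
for service, cpu_needed in services:
    for server in servers:
        if server["cpu_free"] >= cpu_needed:
            server["cpu_free"] -= cpu_needed
            placement[service] = server["name"]
            break
    else:
        placement[service] = "scale out: provision a new cloud instance"

print(placement)
```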

A rich, well-documented and supported API is important, as this opens those workflows to multiple media application vendors. Net Insight is on the verge of releasing an API that will open up the Nimbra Edge hyper-scale media cloud platform.

Finally, microservices packaged in containers enable cloud-native solutions to be deployed across a combination of public, private, and on-premise infrastructures. Each application and process is packaged in its own container. This facilitates reproducibility, scalability, and resource isolation.

Cloud adoption has accelerated significantly over the past six months. According to a Cloud and Virtualization report by the IABM, 45% of respondents have already deployed some sort of cloud technology while 40% are likely to do so. Cloud-native, microservices-based software is being deployed for workflows such as pop-up channels, remote live sports events, channel origination and disaster recovery.

Video quality is just one area where the cloud can be leveraged to generate different variants of the same programme to match the end-user device capability. The device landscape is fracturing in terms of multi-codec support, resolution (HD to 8K) and HDR formats. Combine that with the different versions of streaming formats (such as HLS or DASH) and all the different DRMs and you end up with a spiralling complexity of combinations that only a cloud solution can manage.
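To make that spiralling complexity concrete, a quick count using some assumed (illustrative) option lists shows how fast the variants multiply:

```python
# Illustrative count of the output variants a single programme can require.
# The option lists are assumptions for the sake of the arithmetic, not an exhaustive catalogue.
codecs        = ["H.264", "HEVC", "AV1"]
resolutions   = ["HD", "4K", "8K"]
dynamic_range = ["SDR", "HLG", "HDR10"]
packaging     = ["HLS", "DASH"]
drm           = ["Widevine", "PlayReady", "FairPlay"]

variants = (len(codecs) * len(resolutions) * len(dynamic_range)
            * len(packaging) * len(drm))
print(f"{variants} potential variants of one programme")   # 3 * 3 * 3 * 2 * 3 = 162
```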

All of this is designed to evolve a streaming service that will be much more personalised than today. By 2025, the main question facing broadcasters will not be whether cloud is a good solution, but which cloud solution is the best to manage a service that requires delivering personalised content to mass audiences.

Only an extremely agile solution will be up to the challenge.

Monday, 9 November 2020

Behind the Scenes: Mank

 IBC


https://www.ibc.org/interviews/behind-the-scenes-mank/6973.article


The techniques behind Mank, David Fincher’s digitally dexterous emulation of Hollywood’s classic era, are revealed.  


David Fincher’s passion project about the Citizen Kane screenwriter Herman J. Mankiewicz looks, as intended, like a love letter to 1930s cinema. The filmmakers employ sophisticated digital techniques to pay homage to the cinematic bravura that helps Orson Welles’ masterpiece regularly top the list of all-time classics. 

It’s a film the director originally intended as the follow-up to his 1997 thriller The Game, shortly after his father Howard, a journalist at LIFE magazine, wrote the script. For one reason and another, and reports suggest it was Fincher’s insistence on shooting in black and white, Mank was delayed until Netflix greenlit production late last year. Principal photography finished in February, just days before California went into lockdown. 

Fincher of course kickstarted the streamer’s original content by masterminding House of Cards. He has subsequently made two series of serial killer investigation Mindhunter, all sixteen episodes shot by Erik Messerschmidt ASC who is Fincher’s collaborator here. 

Mank follows the ‘scathing social critic and alcoholic’, played by Gary Oldman, as he races to finish the Kane screenplay for Welles. It also stars Charles Dance as newspaper tycoon William Randolph Hearst and Amanda Seyfried as Hearst’s girlfriend Marion Davies, satirized by Welles and Mankiewicz as Charles Foster Kane and mistress Susan Alexander. The connection with Hearst is strengthened by the fact that Mankiewicz was a frequent guest of Davies at Hearst’s fabulous California castle, dubbed Xanadu in Kane. 

As a homage to WWII-era Hollywood the decision to emulate the look pioneered by cinematographers like Gregg Toland in digital format is a bold one. 

“For this movie we wanted to shoot very deep focus photography for most of the film and then be very specific about where we used shallow focus,” says Messerschmidt. “Shooting on film would have significantly limited our creative choices, particularly with focus and depth of field.” 

Aside from black and white, deep focus is the principal aesthetic in Mank. It’s a technique invented by landscape photographers, adopted by film directors in the 1920s, and popularised in Kane.  

Instead of using a shallow depth of field to create an impression of space within the picture, deep focus keeps everything in the frame in focus. In Citizen Kane, Toland often positioned the camera at a low angle with even the ceilings in frame and used light, shadow and the set design of Perry Ferguson to imply depth.  

Messerschmidt acknowledges that shooting black and white film can look “beautiful” and that it has lots of inherent qualities. “But you are stuck with those qualities making it difficult to deviate from them if they aren’t exactly what you’re after,” he says. 

It was obvious that Fincher would shoot on Red – the Red Ranger Helium Monochrome was the choice here  – he hasn’t shot with other digital cameras since lensing The Curious Case of Benjamin Button in 2008. “David is interested in repeatability and consistency and unfortunately the photochemical process is, by nature, antithetical to that. There are a lot of variables to getting the image onto screen whether that’s the bath temperature, the composition of chemicals, the age of the film stock, the projection process. A lot of people see it as an advantage but we were not particularly interested in embracing the variables we could otherwise control or eliminate with the digital camera.” 

Less clear cut was opting for a Red camera with a black and white sensor, but in tests Messerschmidt confirmed which way to go. Colour sensors employ Bayer filters to funnel either red, green or blue light onto each photosite, consequently blocking two-thirds of available light. Black and white, or monochrome, sensors on the other hand don’t need that filter which means that more detail and less noise will be captured at any camera setting. 

“Capturing monochrome natively is better than shooting in colour and eliminating the saturation in post,” he reports. “Because there is no demosaic of the image, what you get is a pure, accurate black and white artistic image and significantly more speed out of the sensor.” 
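A rough way to put a number on that speed advantage (my own simplification; it ignores real filter transmission curves and demosaic detail):

```python
# Back-of-the-envelope: a Bayer colour filter array passes roughly one third of the incoming
# light to each photosite, so removing it gives roughly a 3x sensitivity gain. Illustrative only.
import math

light_fraction_with_bayer = 1 / 3        # approx. light reaching each photosite through the filter
gain = 1 / light_fraction_with_bayer     # ~3x more light lands on a monochrome sensor
stops = math.log2(gain)
print(f"~{gain:.0f}x more light, or about {stops:.1f} stops of extra speed")
```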

Filming noir 
Messerschmidt was on location in South Africa shooting episodes of Ridley Scott’s HBO Max series Raised by Wolves when the call came.  

“The opportunity to shoot black and white rarely comes along so I grabbed it with both hands,” he enthuses. “The more David talked through his vision, the more excited I became.” 

Aside from a commercial spot, however, the DP hadn’t shot in the format since film school.  

“Most people instinctually think of film noir when asked to imagine a black and white movie. Just as with colour film, there’s tremendous diversity in the styles and look of the noir genre let alone across the entire canon of black and white cinematography.” 

Noir has its roots in German Expressionist cinema of the 1920s and influenced Welles and Toland in making Citizen Kane. Welles would accentuate the high contrast light and shade and deep depth of field in later definitively noir movies like Touch of Evil.  

“My biggest fear was that I’d get drawn in by the desire to be dramatic and aggressive with light,” Messerschmidt reveals. “If a scene had some venetian blinds it was so tempting to put a light behind them. In Mank, we are emulating and paying respect to the cinema of the 1930s and ‘40s but we don’t want it to become a pastiche.” 

Fincher and Messerschmidt reviewed a range of black and white cinema, including Wuthering Heights (1939) and Grapes of Wrath (1940), both lensed by Toland, classic noirs The Big Sleep (1946), The Big Combo (1955) and Casablanca (1942) as well as features from later periods like In Cold Blood (1967) and Billy Wilder’s The Apartment (1960). 

“We also looked at modern movies like Manhattan (1979) which is more naturalistic in tone. We’d both pull scenes from different movies for reference, and went back and forth until we had nailed the look.” 

Shooting 8K 
Mank is recorded in 8K, giving the filmmakers the greatest possible latitude in the DI.  

“I would prefer the optics to be the bottleneck in relation to the image and not the sharpness of the sensor or its resolution,” says Messerschmidt about 8K. “The highest resolution sensor is best for me because that’s where I start to see the optics fall apart. In that instance, I can make a more measured choice in terms of what I am trying to give the audience visually. When I’ve shot on a higher resolution sensor it has always led to a better image, in my opinion.” 

It’s a format and workflow that Messerschmidt, Fincher and colourist Eric Weidt first used for series 2 of Mindhunter. “You can take 8K camera negative and transcode it for conform or editorial purposes in lower res and then go back and finish it. You can’t do that in [lower than 4K] capture,” says Messerschmidt. 

Other subtle nods to the photochemical process include the deliberate addition of side-to-side wobbles caused during the optical process when the film frame passes through the sprockets of a projector. These are most pronounced in Mank during dissolves and title cards.  

Messerschmidt gave himself a further challenge by deciding to shoot outdoors in broad daylight for one particular sequence set at night. Shooting day for night was a common technique in movies up until the light sensitivity of digital cameras made shooting in extreme low light conditions practical. The trick is to underexpose the scene – whilst shining enough light onto the faces of actors to sell the effect. 

“If you just underexpose you invariably don’t have enough light on the actor’s face for it to look natural,” he says. “It’s easier to manage in close-ups than in wides, particularly when the actors are moving. We spent a lot of time in prep figuring out where the actors would be in relation to the sun. I remember waking up those mornings praying there would be sun. The scene wouldn’t work if it were overcast, but we were really fortunate mother nature cooperated.” 

All of which begs the question, why this approach in the first place? Other night scenes in Mank are indeed shot night for night. In this instance, it suited the narrative. “Mank and Marion enjoy a platonic romance and in this scene she opens up to him about her relationship with Hearst,” Messerschmidt explains. “They’ve built this strong friendship and Fincher wanted the scene to have a bit of a magical quality to it. This is enhanced because we’re showing them walking among Hearst’s zoo of elephants, giraffes and monkeys, all of which required some visual effects help.”