Thursday, 31 January 2019

This incredible new technique allows us to see around corners in photographs!



RedShark 
"Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there." Deckard’s vocal instruction to the Esper machine to ‘see’ around the corner of a room in a photograph in the original Blade Runner looked unachievably cool in 1982 but could now be in reach.
Boffins at Boston University have devised a computer program that turns a normal digital camera into a periscope. They even call it “computational periscopy”.
There are systems that use lasers, sensors and a lot of hardware processing to achieve what’s called non-line-of-sight imaging, but the breakthrough here has been made using a 4-megapixel digital camera and a mid-range laptop. The secret is in the algorithm used to decode the hidden object’s image and cunning use of information extracted from the object’s shadow.
“Basically, our technique allows you to see what’s around the corner by looking at a penumbra on a matte wall,” explained team member and associate professor of electrical and computer engineering, Vivek Goyal, in a paper published in Nature. 
A penumbra is the partially shaded outer region of a shadow cast by an opaque object.
The researchers found that their algorithm only works when there’s a shadow of the object you want to reveal, which requires something to at least partially block your view of it.
The resulting combination of light and shade at different points on the wall helps the maths reconstruct what lies around the corner. It does it in full colour, too.
In Goyal’s words, “Based on light ray optics, we can compute and understand which subsets of the scene’s appearance influence the camera pixels. In essence, computation can turn a matt wall into a mirror.”
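The penumbra trick can be boiled down to a linear inverse problem: the wall image is (approximately) the hidden scene multiplied by a light-transport matrix determined by the geometry of the occluder, so recovering the scene means inverting that matrix. The sketch below is a toy illustration of the idea only; the dimensions, random transport matrix and ridge regulariser are illustrative assumptions, not the Boston team’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a hidden "scene" of n_scene patches, observed indirectly
# via m_wall pixels on a matte wall.
n_scene, m_wall = 16, 64

# Light-transport matrix A: element (i, j) says how much light from hidden
# scene patch j reaches wall pixel i. The occluder makes different wall
# pixels see *different* subsets of the scene; without it, the rows of A
# are nearly identical and the inversion is hopeless.
A = rng.uniform(0.0, 1.0, (m_wall, n_scene)) * (rng.uniform(size=(m_wall, n_scene)) > 0.4)

x_true = rng.uniform(0.0, 1.0, n_scene)        # hidden scene radiance
b = A @ x_true + rng.normal(0, 1e-3, m_wall)   # noisy "photo" of the penumbra

# Regularised least squares (ridge): x = argmin ||Ax - b||^2 + lam * ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ b)

print(np.max(np.abs(x_hat - x_true)))  # reconstruction error
```

The occluder is what makes the rows of the matrix differ from one another, which is why the method needs a shadow: without something partially blocking the view, the system is too ill-conditioned to invert.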
In the test, they used an LCD screen to display several illuminated cartoon images. They then placed a sheet of black foamboard in front of the screen, casting a shadow onto a matte white wall. Off to the side was a digital camera, which couldn't see the LCD screen but could take pictures of the shadow on the wall.
While their program takes about 48 seconds to work out a hidden scene from a digital image, the researchers believe it could be done much faster with more computing power. Eventually, it may be fast enough to run on video footage, they reckon.
There are obvious applications for this sort of trick in autonomous vehicles to send alerts about pedestrians. The technology could also help police monitor buildings from the outside during hostage situations or first responders scout out collapsed buildings after disasters.
Since the Boston team’s research was funded by DARPA (the Defense Advanced Research Projects Agency), the US military’s R&D wing, non-line-of-sight devices would also come in handy for spying, even from high-flying observation drones.
The technique could also be built into smartphones to allow you to take a picture of something that’s not even in the frame. Handy for paparazzi.
Perhaps the team’s greatest discovery is that once you realise how much information can be extracted from light, you just can’t look at shadows the same way again.

Wednesday, 30 January 2019

2019 Oscar nominations: Inside the technical categories

IBC

From shooting in black and white to creating in-camera effects, we take a look at the technology and technique behind Oscar nominated films.
The pinnacle of awards season approaches with three of the films nominated for cinematography also contenders for Best Foreign Language Film.
Roma and Cold War are both based on the personal memoirs of their directors and lensed in black and white to evoke the past without sentimentalising the story in the ways that colour might have done.
Cinematographer Łukasz Żal also wanted director Pawel Pawlikowski’s Polish-language post-World War II romance Cold War to hark back to the monochrome films of the 1950s and 1960s. He shot digital on Alexa XT paired with Ultra Primes and Angenieux zooms but aimed to match the look of the film stock used in the former communist bloc. Location photography took place in Poland, Croatia and Paris.
“Everything changes when you shoot in black and white — costumes, makeup, hair, production design — because you are looking for the contrast in every place,” Zal told Deadline. “You are looking at a different reality.”
Roma is director Alfonso Cuarón’s low-key realist slice of biography about a family living in the Colonia Roma district of Mexico City in 1970. It has the signature elaborate long takes devised by Cuarón and cinematographer Emmanuel Lubezki in collaborations like Children of Men and Gravity, but this time it’s (mostly) the director’s work.
When the shooting schedule for Roma ballooned to over 110 days, Lubezki – who had helped select the camera and prep the shoot – found a clash with shooting Terrence Malick’s Song to Song and had to leave the picture. Cuarón – who also co-edited and scripted – took up the reins.
As with Gravity and Birdman, some of the lengthy takes are not quite all they seem. In the film’s most audacious sequence, at a beach involving a rescue from drowning, several takes are meticulously woven together to play as one. Designed by Cuarón, the VFX, along with ‘invisible’ set replacement and decoration, were completed by MPC.
Shot at 4K resolution on the Alexa 65 to lend the black and white tones a contemporary feel, the film’s detailed slow build scenes break like waves over the viewer and are best appreciated on the largest screen possible. That presents something of a conflict for the director who could only get the $15 million film made at Netflix.
As awards season approached, Netflix gave the film a limited theatrical run with 70mm prints (processed from digital at FotoKem in LA) and a widescreen aspect ratio.
“Offering cinema lovers the opportunity to see [Roma] in theatres is incredibly important to me,” said Cuarón in a Netflix press release which demonstrates the streamer’s confidence in its approach. “The 70mm print of Roma shows unique details not available on any other version. Being shot in 65mm, these prints bring live detail and contrast only possible using a big format film. It is for sure the most organic way to experience Roma.”
The surprise on the list is German-language film Never Look Away by director Florian Henckel von Donnersmarck. Like Cold War, it examines post-war eastern Europe and was filmed in Germany, Poland and the Czech Republic.
Inspired by the life of Gerhard Richter, this is the fictional story of a painter who grew up in Dresden under the Nazis and struggles to reconcile the cultural liberalism of the 1960s with the horrific past. American director of photography Caleb Deschanel ASC has been praised for bathing the story in a warm glow reminiscent of painting (using the Arri Alexa XT Plus with Zeiss Master Primes), although the style may sit a little uneasily in some scenes, including a graphic account of a gas chamber.
“Just like you don’t expose evil in an obvious way as evil. It sort of lurks underneath things,” he told Goldderby about choosing a luminous rather than darker colour palette. “The evil really comes out of the characters in the way they behave and that was the core of what the film is about.”
This is Deschanel’s sixth Oscar nomination, with previous nods including The Right Stuff (1983), The Patriot (2000) and The Passion of the Christ (2004). You couldn’t get further from the subject of Never Look Away than The Lion King, Disney’s live-action remake, which Deschanel recently wrapped.
Matthew Libatique ASC used multiple Alexa Minis to lens the intimate story of Ally (Lady Gaga) and her mentor Jackson Maine (Bradley Cooper) in A Star Is Born. The lightweight cameras proved particularly useful handheld or on Steadicam for Cooper’s direction, which demanded extreme close-ups of the leads, and for capturing the intensity of their performances on stage. Some of these events were shot as-live with little room to prep or light, including the film’s opening at the Coachella festival.
The tight shots of Cooper and Lady Gaga performing were shot on Kowa anamorphics, a lens choice which gave lots of flaring and hazing of the stage lights, intended to reflect the imperfections in Jackson’s disintegrating rocker. Cooke Anamorphic/i SF primes were used for most of the non-stage scenes. Grading was by Stefan Sonnenfeld at Company 3 in LA.
“Visually, I wanted the film to feel like it was Jack’s world when we enter, and Ally just had to exist within it as she starts her rise to fame,” Libatique told the ASC.
“Then, as she becomes an equal and surpasses him, I wanted everything to build to a larger-than-life feeling, with vivid colour. She’s playing in front of tens of thousands of people, and I wanted it to be beautiful, like a drug. Nothing could be better — but then that drug starts to wear off, and in her last scene I just wanted white light. The colour represents a manic rock-and-roll lifestyle, and the white light represents a kind of sobriety.”
The extreme distortion of the 6mm fish-eye lens used for many scenes in Yorgos Lanthimos’ absurdist historical drama certainly catches the eye, but the physicality of DP Robbie Ryan BSC ISC’s work won’t be lost on the Academy. The sole nomination to be shot on film, The Favourite has a distinctive texture which isn’t afraid to show the powdered pores and flamboyantly wigged faces of its comic-tragic characters.
Ryan filmed using natural light, a technique he has ably demonstrated on films for Andrea Arnold (Fish Tank) and Ken Loach (I, Daniel Blake), and he also bought into the Greek director’s insistence on perpetual movement for the 35mm camera without using Steadicam. This was a task Ryan took on himself, wearing an exoskeleton rig combined with the electronic handheld Double Helix camera stabilisation system from Mr Helix, based at Pinewood Studios, which was specially adapted to take the Panavision Millennium XL2 camera.
Processing of Kodak Vision3 stock was done at i-Dailies, now Kodak Film Lab London.
“We were pretty blessed with weather on location but a couple of days it went really dark so we pushed the stock more than I ever thought possible,” Ryan told IBC365. “Yorgos was adamant that we not use lights otherwise it would feel artificial even during scenes when we literally had just a handful of candles.”
A bow for camera equipment credits can be taken now by Arri. Its technology was used by four of the five nominees for cinematography and five of the Best Picture nominees: Roma, A Star is Born plus Green Book (Alexa Mini), Black Panther (Alexa XT Plus) and Bohemian Rhapsody (Alexa 65 and Alexa SXT).

Best VFX: High-tech plays lo-fi
Disney goes up against itself for this year’s Best Visual Effects Oscar: its films are nominated three times, and its in-house VFX facility ILM features twice.
ILM has the lead on Solo: A Star Wars Story assisted by several other facilities including London’s Jellyfish and Lola VFX, Montreal’s Hybride Technologies and Raynault and California’s Tippett and Exceptional Minds. The Third Floor provided some previs. Ncam and Nvizage provided virtual production systems.
Framestore took the bear’s share of shots for Christopher Robin, Disney’s fresh take on the AA Milne classic which depicted Winnie the Pooh and friends interacting with the real world.
Some 677 of the 727 shots the UK house made for this film featured creature animation, among the most demanding of all VFX disciplines. Framestore’s Oscar-winning VFX supervisor Chris Lawrence (Gravity) led the overall VFX on the film with Framestore’s global head of animation Michael Eames as animation director.
“Pooh is a very minimal performer,” says Lawrence. “Ewan [McGregor] would have to do these very emotional scenes against him and he wanted to know what Pooh would be doing; the answer was a very held-back, restrained performance.”
Disney/Marvel’s Avengers: Infinity War was made by an army of facilities including ILM, Weta Digital, Dneg, Framestore, Cinesite, Digital Domain, Method Studios, Lola VFX and RISE.
The CG for the film’s 8ft-tall supervillain, Thanos, originated from facial scans of actor Josh Brolin. The scanning system, called Medusa, devised at Disney Research and used by ILM to create Andy Serkis’ Snoke in the recent Star Wars movies, is due to receive its own award from the Academy at the annual Scientific and Technical Awards.
Medusa also played a part in Steven Spielberg’s virtual reality thrill-ride Ready Player One, where it was used to design Parzival, the avatar of hero Wade Watts. Warner Bros’ film may have been partly shot in Birmingham, with motion capture at Leavesden, but VFX duties went back to Hollywood. Digital Domain and ILM roughly split the work: DD was responsible for the motion effects and grungy look of the film’s ‘real world’, while ILM took charge of action inside the virtual ‘Oasis’, for which it got to recreate the Overlook Hotel in an extended sequence from The Shining and the T-Rex from Spielberg’s Jurassic Park, which ILM had originally built in 1993.
Dneg has become synonymous with the films of Christopher Nolan (Dunkirk, Interstellar), who likes to shoot on film and create effects in-camera, but its artists had to up the realism for space travel biog First Man.
As the primary VFX vendor, Dneg (Vancouver) was tasked with creating multiple rendered sequences for use on a giant curved 180-degree LED screen, 60ft wide by 35ft tall. This LED wall was the best option to achieve the clarity and brightness director Damien Chazelle was looking for and to capture as much in-camera as possible, according to Oscar-winning VFX supervisor Paul Lambert (Blade Runner 2049).
“It allowed us to shoot certain space and in-flight elements with our CG content to fit within the boundaries of a film being shot 16mm and 35mm,” he says. Full-scale physical mock-ups of the Apollo and Gemini capsules were hoisted on gimbals, with Ryan Gosling and the other actors on board surrounded by projections of the rocket’s journey. This enabled Chazelle to film entire sequences rather than hundreds of smaller shots to be rendered for VFX.
“In the VFX world we generally focus on creating visuals for single shots, most lasting between 5-15 seconds,” says Lambert. “Damien didn’t want to be limited in this way. We had to create content for entire sequences, some lasting 10,000 frames. We began rendering front and side views to the screen but realised creating 360 spherical images gave us the most flexibility since we could rotate it in any direction. This was a challenging amount of work to complete before shooting even began.”
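Rendering full 360-degree spherical content and then rotating it in any direction relies on a standard mapping between a view direction and a pixel in the equirectangular image. A minimal sketch of that mapping, with a made-up resolution and a hypothetical helper function (not Dneg’s pipeline):

```python
# Standard equirectangular mapping: given a camera yaw/pitch, find which
# pixel of a 360-degree spherical render to sample.
W, H = 8192, 4096   # spherical render resolution (illustrative, not First Man's)

def equirect_pixel(yaw_deg, pitch_deg):
    u = (yaw_deg % 360.0) / 360.0          # longitude -> [0, 1)
    v = (90.0 - pitch_deg) / 180.0         # latitude  -> [0, 1]
    return int(u * W) % W, min(int(v * H), H - 1)

print(equirect_pixel(0, 0))     # → (0, 2048): straight ahead, centre row
print(equirect_pixel(180, 45))  # → (4096, 1024): behind and 45 degrees up
```

Because every direction is already in the render, the projection on the LED wall can pan or roll freely during a take without any re-rendering, which is the flexibility Lambert describes.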

Tuesday, 29 January 2019

Raising the IQ of sports production



FEED


Mobile Viewpoint's AI-driven production solution makes broadcasting possible for any sport anywhere.


The evergreen popularity of live sports is part symptom and part cause of the rapid adoption of video streaming services, but while the demand is huge, only half of the opportunity is being seized. Accelerating output to keep up with content-hungry sports fans is the next big challenge, and one that automated production systems can help deliver.
“In the same way we have seen costly satellite trucks replaced by backpack-sized live video transmission units, AI will deliver similar cost savings to live production and streaming,” says Michel Bais, managing director, Mobile Viewpoint, developer of a system for sending video over the internet using mobile networks.
Whether that’s by using AI to analyse audio signals, video images, people or objects to identify which cameras to switch to and control, removing the need for an expensive camera crew, or from algorithms that can generate replays and graphics and then live stream content, AI (augmented or assisted intelligence, however you describe it) looks set to revolutionise the industry.
Mobile Viewpoint is not alone in looking to extend its capabilities beyond the competitive cellular uplinks market. Canada’s Dejero, for example, has a mobile app that helps bridge the gap between traditional remote production and cloud production, while TVU Networks has also identified automatic sports logging and production as a related field for its core bonded cellular units.
The Dutch developer, though, has arguably gone further than any in co-opting AI into the workflow. It’s worked with research institute TNO to retune an off-the-peg AI brain from Google’s TensorFlow to fit the patterns, behaviours and relevant data sets of live streamed football.
The package, branded IQ Sports Producer, comprises two parts. The first is a 32MP dome camera containing four lenses, the feeds from which are stitched into a 180-degree panoramic field of view. De-warping and stitching technology provides a corrected image with straight line markings. This image can be cut out, panned, or zoomed into as directed by the AI software, which is trained to follow the action as a human director would. All this takes about 30 frames, or a one-second delay.
At its crudest the AI will track groups of players (being most likely where the action is) but more sophisticated applications can be taught to anticipate where a ball might be passed to.
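That crudest behaviour, following the pack, can be approximated very simply: take the mean position of the detected players each frame as the pan target, and smooth it so the virtual camera doesn’t jerk. Everything below is a hypothetical sketch with made-up detections, not Mobile Viewpoint’s software.

```python
import numpy as np

# Hypothetical detections: (x, y) centres of player bounding boxes in the
# stitched panorama, one list per frame. In the real system these would come
# from the trained detector; here they are invented numbers.
frames = [
    [(120, 300), (140, 310), (160, 290)],
    [(400, 305), (420, 300), (380, 315)],
    [(700, 310), (690, 300), (720, 305)],
]

alpha = 0.3          # smoothing factor: lower = steadier virtual camera
pan_x = None
for detections in frames:
    target = float(np.mean([x for x, _ in detections]))  # follow the pack
    # Exponential smoothing keeps the virtual camera from snapping between targets.
    pan_x = target if pan_x is None else (1 - alpha) * pan_x + alpha * target
    print(round(pan_x, 1))
```

A lower alpha gives a steadier, more “human” camera at the cost of lagging fast breaks; the more sophisticated applications described above effectively replace the naive mean with a prediction of where the ball will be passed.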
The camera itself was originally built by China’s Hikvision for surveillance purposes but has proved ideal for this solution, not least because it is relatively cheap (around €5,000 per module) and has been designed to withstand all weathers as well as vandalism. Plus, the camera’s 4 x 4K feeds are reliably in sync which, Bais says, is not always the case with similar cameras.
The second part is hardware containing three Nvidia GPU boards, the AI software and the mobile connectivity which for Mobile Viewpoint is a no-brainer. “IP links are our bread and butter,” says Bais.
The AI software further combines motion-tracking with positional and other biometric data gathered from sensors (RFID, accelerometers, GPS) worn on special vests by the players themselves.
The first (and currently only) 32MP camera system in Europe is installed at AFC Ajax, the biggest club in the Netherlands and a UEFA Champions League participant. Ajax is using it to film and stream matches to the club web channel Ajax TV, and to capture footage for player performance analysis during training.
The Ajax training academy can monitor why a player missed a goal, why they failed to make an assist, and help improve their performance. Image detection means the AI technology can recognise different players and follow them, or, detect a ball on a pitch and follow its movements.
“The potential for AI in this respect is huge, especially in the production of live sports content,” says Bais. “As algorithms develop, AI can detect faults (yellow or red cards) or injuries as it learns how to make productions more interesting and story-like. There are plenty of smaller sports that could use this technology to become content owners in their own right at a low cost, and then monetise it.
“Only 10 per cent of professional sports are premier or first leagues. The majority are second or third tier sports many of which are played at grounds or parks with limited internet connectivity, even in middle of a city.”
That’s where the bonded cellular links come into play. The system is 3G- and 4G-capable today, encodes in HEVC (H.265), and is primed for 5G, which telcos worldwide, including local giant KPN, will roll out in 2020.
The Netherlands Pro women’s football league, for example, is keen to install the system at six venues throughout the country as a budget-friendly means of launching a new online matchday streaming service.
However, completely automating live sports streaming at the top level, without the need for a production crew and director, is perhaps a bridge too far for now. AI used for assisted production alongside humans is the immediate goal. One of the obstacles to fully automated streaming is that it simply takes time for an algorithm to learn the nuances of what is interesting or important in each sport. For example, an algorithm may treat capturing a fight breaking out during a football match the same as capturing a punch thrown during a boxing match, but for the viewer these are two very different experiences: one is normal, and one isn’t.
“Getting the algorithms up to speed requires time, so there is still very much a role for humans,” says Bais. “It tends to look for people walking or running rather than people on the ground so it doesn’t necessarily zoom in on a player who went down following a tackle which is maybe what the viewer would like to see.”
Higher image resolutions help too: the more pixels the AI has to play with, the more easily it can, for example, differentiate the ball from a piece of white paper on the pitch.
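The point about pixels is simple arithmetic. As a back-of-envelope check (the field-of-view width and output resolutions below are assumptions for illustration, not figures from Mobile Viewpoint):

```python
# How many pixels a 22 cm football covers, for a virtual-camera cut-out
# spanning an assumed 70 m wide field of view, at different output widths.
ball_m, fov_m = 0.22, 70.0
for width_px in (1920, 4096, 8192):
    px = width_px * ball_m / fov_m
    print(width_px, round(px, 1))  # prints: 1920 6.0 / 4096 12.9 / 8192 25.7
```

At HD the ball is a six-pixel smudge, easily confused with litter; at 8K it is a recognisable disc, which is why resolution buys the detector accuracy.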
While most budget-conscious customers want a single camera solution, more cameras around the venue are needed to capture different angles and more data to achieve greater accuracy.
Planned enhancements to the software include AI-driven advertising insertion, automatic playback of replays following a goal, automatic highlights generation, and training the system on other sports including motorsport and ice hockey.
It has even created a means of translating the game into 3D VR graphics within which a user can select the point of view of any player to analyse decision making during a game. Currently used by Ajax as a training tool, “our ambition is to bring this to the home too,” says Bais. “With KPN we are working to bring a layer of interactivity to live streamed games running on KPN’s standard set-top box.”


Friday, 25 January 2019

Sling TV Readies its Offerings for the Coming SVOD Onslaught

Streaming Media

Dish Network's live streaming service tackles mass personalization and aggregation in an effort to create something viewers can't get anywhere else.


In preparation for the intensifying SVOD battle, Sling TV has adjusted its business model and re-engineered its underlying data stack based around mass personalization.
The original multichannel live streaming TV service is orienting toward SVOD aggregation.  A year ago, the platform added NBA League Pass to its service. This followed the integration of science-centric SVOD channel CuriosityStream (set up by Discovery founder John Hendricks) and Up Faith & Family and Pantaya as add-ons to its main service.
Last week the company announced it would offer free trial access to content, without any registration or credit card handover, including episodes of popular shows like Heartland. What’s more, existing Sling TV users can now sign up for a selection of à la carte options, including CuriosityStream, Docurama, and Stingray Karaoke, for free.
“It’s no secret that competition is coming and with quite a few major players already in the OTT space including DirecTV Now, Hulu, YouTube TV, Amazon, and Apple there is definitely a time pressure to continue making advances,” says Brad Linder, director of cloud native engineering at Sling TV.
With Disney+ and an HBO-anchored WarnerMedia service scheduled to launch this year, along with Apple’s SVOD entrance and a direct-to-consumer play from NBCU/Comcast in the works, the battle for OTT video market share could see its most heated exchanges yet.
It could be one that pits direct-to-consumer services against content aggregator models from existing pay TV providers and OTT providers such as Amazon Channels, Hulu, or, maybe, Sling TV. 
Dish Network’s offering has close to 2.4 million subscribers across 16 platforms, including a variety of smart TVs, tablets, game consoles, computers, smartphones and streaming devices like Roku.
“With so many options available to potential cord cutters it is important to provide a first-class experience that makes your product stand out in a market that is becoming more and more saturated,” Linder says.
“As such, it is critical for Sling TV to provide a highly resilient and highly available service that is personalized to each user and one that scales on demand to keep up with our expanding customer base and changes on the internet. This includes the ability to centralize business logic across our 16 platforms to deliver a common experience to our customers.”
Personalization of service is key, Linder stresses. “When I joined Sling TV 3 years ago we had legacy systems developed in a non-cognitive era and not necessarily web-scale. What we’ve been trying to do for the past 18 months is to bring in solutions that scale horizontally to keep up with future growth. Our goal is to deploy in a datacentre so we reduce the time to market. A datacentre model will allow for another instance of the Sling TV backend to be built on-demand as needed in a hybrid cloud environment.
"Part of the solution was a common data store for core customers and personalized content that would be available in all data centers serving the middleware stack.”
The solution needed media distribution capabilities that included authentication, program catalogs, and feature metadata storage.
“What we try to do is give the customer what they want. We try to get relevant information and content quickly to folk with less friction—fewer clicks to video.
"We’re not just paying lip service to customer feedback. We want to solve the pain points they have and engineer the platform accordingly.”
Sling TV selected DataStax Enterprise (DSE), a proprietary version of the open source Apache Cassandra database. DataStax media customers include Comcast, Sky in the UK and M6 in France, while Apple (with over 75,000 nodes storing over 10 PB of data), Netflix (2,500 nodes, 420 TB, over 1 trillion requests per day), eBay, and Chinese search engine Easou all use Cassandra.
“First and foremost we wanted a partner,” Linder explains. “We don’t like vendors who charge for this and that. We want a partner with which we can evolve as a platform. We have a growing customer base and a very elastic business model, so we have to deliver our software into any public cloud as well as on-premises private cloud with close to the same tooling. From a technical perspective we need low latency and high availability. With DSE, we are now able to replicate data across the country in less than two seconds, which is a big win for us.”
Clues to Sling TV’s personalized service can be seen by Roku users. It has made search easier by automatically displaying Popular Searches, allowing users to quickly browse through the most searched-for content each day.
In addition, next episodes automatically play to encourage binge-viewing.
Sling TV is billing itself as "the only live OTT service to allow you to purchase à la carte channels without subscribing to a base service…. making it easy to stream all of your favorite content in the same interface."
While there will be some consumers who are introduced to Sling TV and subscribe to it through their initial purchase of an NBA League Pass, for example, such a strategy appears risky unless Sling TV can indeed position itself as the "new cable TV" by signing up multiple DTC partners.

Thursday, 24 January 2019

BARB Fingers SVOD for Rise in 'Unidentified Viewing'

StreamingMedia

BARB's best guess at growing SVOD viewing habits is that Netflix, Amazon, and Sky's Now TV are the cause, accounting for 19% of total activity in 2018.
As the UK’s media landscape further fragments, subscription video-on-demand (SVOD) viewing on the TV set continues to march upwards, taking up 19% of total activity in 2018, according to latest figures from ratings agency BARB.
Since the main SVOD services, including Netflix and Amazon, have so far refused to share their data with BARB, the measurement service is making a best guess about changing habits and trends.
It found that time spent viewing a channel or on-demand service that does not report to BARB—so-called unidentified viewing—had risen on 2017, from 40 to 46 minutes a day on average.
Viewing patterns roughly approximate those of traditional TV behaviour—viewing increases in the evenings and over holiday periods, but there are some notable exceptions.
Last summer’s World Cup, during which England fared better than usual, saw a boost in TV set viewing of coverage from rights holders ITV and the BBC. Unidentified viewing fell at the same time.
Like total TV set viewing, unidentified viewing also increases at the weekend, but to a greater degree, suggesting that audiences are waiting for the weekend to binge watch an SVOD box set—or it could be to do some gaming—BARB just doesn’t know for sure. 
"We can’t be certain, but we do have a growing body of evidence that points to SVOD services being a primary catalyst" in driving the growth in unidentified viewing, BARB says.
BARB has been asking respondents about their SVOD subscriptions for several years and used the most recent survey to extrapolate that 11.6 million UK homes had at least one of Netflix, Amazon Prime Video, or Now TV in Q3 2018—a year-on-year increase of 22%. Netflix is the main driver of this increase, having added 2.2 million homes compared to Q3 2017. Amazon too has shown impressive growth, adding more than a million homes, while Now TV has added just under 200,000. The number of homes with two or more services has gone up 40% from 2.8 million to just under 4 million in the past year.
It finds a positive correlation between Netflix landing on Sky Q boxes at the beginning of last November and a boost in unidentified viewing in Sky homes. 
"We cannot definitively say that this was due to Netflix viewing, but there is a strong likelihood that this is the cause," BARB says.
When the next set of figures are released in February it will be able to see whether Netflix has also seen a corresponding increase in subscribers following its availability on Sky Q, or whether this trend is simply down to households that already subscribe to both Netflix and Sky choosing an easier route to access their Netflix accounts.
Another finding from BARB's report examined the social grade of SVOD subscribers. Netflix and Sky’s Now TV are in line with the UK average for the proportion of ABC1 households that subscribe to SVOD, at 63% each. Amazon, though, seems to be scoring a higher percentage of the advertiser-prized ABC1 profile, indexing at 73%, or 38% ahead of the UK average. 
The presence of young children, and the unique pressures that they bring, may be playing a part here, posits BARB: "The double carrot of on-demand content and next day delivery of urgently needed household items may be enticing these ABC1 families towards Amazon." 
That said, the profile of households with all three services is even more heavily ABC1—with 77% falling into this category. The ability—and willingness—of ABC1 households to pay for three services is clear which is either a plus or a minus for the prospects of pending SVOD launches such as that from UK broadcaster ITV.

Wednesday, 23 January 2019

Florian Hoffmeister BSC creates The Terror


British Cinematographer

Florian Hoffmeister BSC makes cramped confines work to deliver a richly detailed, hallucinatory experience, from the terrors of the ship to the sprawling, icy landscapes of the Arctic. 

The tradition of the handsome, impeccably cast period drama gets a horror makeover in the new AMC series, which adapts Dan Simmons’ celebrated novel of the same name to create an immersive world of Arctic nautical nightmare.
Produced by Ridley Scott and inspired by the true-life tale of HMS Erebus and HMS Terror, which famously disappeared in the ice on a mission to discover the fabled Northwest Passage almost two centuries ago (the wrecks were only found in 2014 and 2016), The Terror sketches out the fantasy-horror version of a nautical journey gone wrong in gorgeous, excruciating detail.
From the ravages of nature constantly biting at the men’s heels to the bitter dynamics between the crew that sow the seeds of their own downfall, cinematographer Florian Hoffmeister, BSC was given the time to create a unique look that details the torment as it unfolds.
“This show had some particular challenges but the two showrunners - Dave Kajganich and Soo Hugh - were terrific in fighting for us to have the time to test and find a look,” he explains. “We spent three weeks trying to solve the issues, and we succeeded because we had the time to prep as if this were a feature rather than episodic TV.”
 Having shot prestige dramas like The Deep Blue Sea and Great Expectations (both 2011), Hoffmeister says he was attracted to this project because the producers wanted a unique look.
“The setting is in one sense very real, since it’s based on true events, but it’s also a horror thriller which suggested a genre heightening. With period pieces there’s a danger of ending up with the same look – candles and costume – but the idea of designing something that stays truthful to the heritage but had grittiness to it was basically what I set out for.”
The Terror was shot almost entirely at Stern Studios near Budapest, which was the only soundstage available in the crowded city at the time. Even using two greenscreen stages of 23,700 and 16,150 square feet, the space was barely big enough to fit the construction of the main set.
Striving for authenticity, the production had effectively reconstructed the ships to their original specifications. The vessels were built by production designer Jonathan McKinstry to the same basic template mounted on an 8-foot-high gimbal with the top deck designed to be changed so that the full set could become either the Terror or the Erebus.
The confined space made conventional lighting impractical. “The heat emitted from regular lights on actors wearing fur coats made that a no-brainer,” says Hoffmeister. “The creative pay-off was in using LED lights to really play with colour.”
Six hundred LEDs were arrayed on the studio ceiling and hooked to a console. “In the Arctic, the sunset changes hue from pinks to lavender and I knew at some point we’d have to show the aurora borealis. With LEDs I could make these colour changes at the press of a button rather than having to wait hours for gels to be changed.”
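The push-button hue shifts Hoffmeister describes amount to interpolating the LED array between colour targets. A minimal sketch of such a fade, with illustrative RGB values that are assumptions rather than anything from the production:

```python
def lerp_rgb(start, end, t):
    """Linearly interpolate between two 8-bit RGB triples, t in [0, 1]."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))

# Illustrative Arctic sunset hues: pink -> lavender (assumed values).
PINK = (255, 170, 190)
LAVENDER = (180, 160, 230)

# Steps a lighting console could walk through over a few seconds,
# rather than waiting hours for gels to be swapped.
fade = [lerp_rgb(PINK, LAVENDER, i / 10) for i in range(11)]
```

The same interpolation, run per-fixture across the six hundred ceiling LEDs, is all a console needs to sweep the whole set from one hue to another.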
His camera choice was the RED DRAGON, whose 35mm sensor is capable of 6K capture.
“I suggested RED for a number of reasons but primarily because I wanted large-format photography, and secondly I like the way the image falls off in blackness. I think there’s more of a look to it, than a technically perfect clean Alexa image, which is something I wanted to play with.”
He continues, “Personally, I am less interested in resolution than in the field-of-view gained from a bigger sensor. For example, a 40mm lens on a Super 35 sensor will give you the field of view of large-format portraiture. It means you can take a longer lens with less distortion but a wider field-of-view than using a regular sensor. Given that we had limited shooting space and yet we wanted to capture all the detail of the ships, it made sense to shoot longer lenses than just on 18mm.”
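The geometry behind Hoffmeister's point can be checked with the pinhole-camera relation: angle of view follows from sensor width and focal length, so the same lens covers a wider field on a larger sensor. The sensor widths below are commonly quoted figures, used here as assumptions for illustration:

```python
import math

def horizontal_fov(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view in degrees, from the pinhole relation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assumed sensor widths (mm), for illustration only.
SUPER_35 = 24.9
RED_DRAGON_6K = 30.7

# The same 40mm lens sees a wider field on the larger sensor.
fov_s35 = horizontal_fov(SUPER_35, 40)        # ~34.6 degrees
fov_dragon = horizontal_fov(RED_DRAGON_6K, 40)  # ~42.0 degrees
```

This is the trade Hoffmeister describes: a longer lens, with less distortion, still delivers the wider field of view needed to capture the ships in a confined stage.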
He referenced Ernest Shackleton’s contemporaneous black-and-white photographs and also Lincoln, the Steven Spielberg biopic lensed by Janusz Kamiński. “He has such an expressive back-lit driven lighting approach especially in playing with silhouettes.”
Since AMC required an HD 16:9 finish, shooting 6K was not high on their list of priorities but Hoffmeister convinced them otherwise.
“Shooting RAW was not an option – the data would be too great. But the RED shoots 6K in REDCODE RAW compressed to a level at which the data is all but the same as HD. I shot some tests with the Panavision Primos and ran a colour pipeline to post to make my point. AMC understood and signed it off.”
 Director Edward Berger agreed with Hoffmeister that the show be framed for composition using dolly moves and Steadicam for both cameras. “I find that if you operate a large sensor camera handheld the movement quickly gets super frenetic,” he says.
One of the advantages of locating the show in Budapest is access to Colorfront, the facility with a heritage of negative processing and the inventor of the Lustre (Colossus) digital grading tool.
Working with Colorfront, the cinematographer tested the RED cameras against wind, snow, fog and other special effects, in each case lighting for dramatic atmospheres and learning how to create silhouettes. “It was very beautiful and rewarding,” he says. Meanwhile, VFX studio UPP in Prague gave Hoffmeister the creative freedom to set the tone for matching backgrounds and adding set extensions.
He spent another couple of days finding the colour space. “The test rushes looked nice but a bit too clean. I wanted a slight destruction of the image without necessarily defocussing it.” Colorfront’s film background came to the rescue. “They had a bleach bypass function where you could literally dial it in like you used to do in a lab. If I said ‘let’s look at 20%’ they showed us that. These tests defined the look and enabled me to build the post pipeline.”
The team created LUTs with the bleach bypass built in and calibrated all the monitors on set and in post. Hoffmeister lit to that for the pilot and episodes two and four while cinematographers Frank van den Eeden and Kolja Brandt followed his cue in shooting the remaining seven episodes.
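The dial-it-in percentage Hoffmeister mentions can be approximated in code. A common digital emulation of bleach bypass overlay-blends each pixel with its own luminance, then mixes the result back with the original by the chosen strength. The luma weights are the standard Rec.709 coefficients; everything else is an illustrative sketch, not Colorfront's actual implementation:

```python
def bleach_bypass(rgb, strength=0.2):
    """Approximate bleach bypass on one pixel (channels in 0..1).

    Overlay-blend each channel with the pixel's Rec.709 luma, then
    mix with the original by `strength` (0 = off, 1 = full effect).
    """
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b

    def overlay(base, blend):
        # Classic overlay blend: darkens shadows, brightens highlights.
        if base < 0.5:
            return 2 * base * blend
        return 1 - 2 * (1 - base) * (1 - blend)

    processed = tuple(overlay(c, luma) for c in rgb)
    return tuple((1 - strength) * c + strength * p
                 for c, p in zip(rgb, processed))
```

Baking a function like this into a LUT at a fixed strength, as the team did, is what lets every monitor from set to edit show the same crushed, desaturated look.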
“We knew our contrast levels and you can really paint onto the monitor. What’s more, because it is set up in the pipeline there is common agreement from the studio to the edit. By the time it came to final grade (by Jet Omoshebi at Encore) everybody was accustomed to the extreme look. If we’d begun this in the grading suite at the end there is no chance we would have achieved the same consistency.”
The cinematography creates contrasting settings of flickering, lamplit darkness and blinding white vistas of endless snow, neither of which offer any comfort to man from the cruelties of nature. The editorial tone is one of bleak, piercing panic and the threat of something incomprehensibly dreadful just beyond the veil of reality.
One sequence in episode two depicts a thunderstorm on the ice. The showrunners urged Hoffmeister to push it even further. “They wanted it to be so distorted that you couldn’t see faces anymore.”
Aside from some exteriors on location in Croatia, the only dramatic element shot outside was a 40-metre-high section of mast.
“People said we were insane to build [the set] inside but in honesty the stage gave us a chance to shape the show to a more dramatic extent. If you look at photographs of the Arctic you can see the light has lots of contrast. If you’d shot this in central Europe in November it would be flat.”
Digital, he adds, has reached a level of technical maturity. “You can go out and create something truly unique - provided you get the time to experiment and, with the right people, to design a proper colour pipeline.”


Behind the scenes: A Quiet Place

IBC


Ethan Van der Ryn and Erik Aadahl, the supervising sound editors of Oscar-nominated A Quiet Place, on establishing the rules of silence and the shock of what happens if you break it.
It’s the goal of any suspense movie to have audiences on the edge of their seat but the breakout horror hit of last year is so tense it has literally achieved this.
The premise of A Quiet Place, directed, co-written and starring John Krasinski, is that any noise leads to certain death on-screen but the sound design is executed so expertly that audiences found themselves complicit in the conceit.
“It’s almost as if they’re afraid to breathe. They are certainly not rustling the popcorn,” observes Ethan Van der Ryn, supervising sound editor with Erik Aadahl.
“What’s most pleasing is that this film forces people to change the way they perceive cinema. Many modern films, visual effects or action-dramas in particular, will tend to have a bombastic soundtrack which naturally pushes the audience to recline in their seats but what happens when you strip out the sound is that you remove this comfort blanket.
“If you can tie that silence to the jeopardy on screen, then people start to lean forward. They become aware of the sound they are making and hold their breath just like the characters. In a way, the audience becomes an active participant in the story.”
This psychoacoustic effect didn’t come about by accident. Van der Ryn and Aadahl are two of the most accomplished sound designers in Hollywood, teaming for the last decade on some of the biggest action, VFX and creature movies like Godzilla, Monsters vs Aliens, The Meg, Bumblebee and every Transformers movie. Before partnering with Aadahl, Van der Ryn had worked on The Godfather: Part III, Terminator 2: Judgement Day, Titanic, Saving Private Ryan, Pearl Harbor, The Lord of the Rings trilogy and King Kong, while Aadahl’s credits include I, Robot, Superman Returns, Argo and The Tree of Life.
“When John sent us the script we knew this would be different from anything we’d done before,” says Van der Ryn. “Even in movies where there’s a lot going on visually we’ve found that the story often worked best if we pared back the number of sounds to focus attention on the central character or key action within a scene. A Quiet Place took this to a completely other level. We stripped everything back to the bone and forced the audience to focus on very specific sounds.”
Aadahl has a shorthand for their craft. “Where the cinematographer is concerned with the contrast between light and dark, our job is to contrast loud with quiet – or in this case quiet and quieter.”
The film depicts a family in a world terrorised by strange creatures, blind but with such hyper-sensitive hearing that any slight noise will lead them to kill you. Spoken dialogue is minimal although the main characters communicate in sign language, which is subtitled.
“Sound and picture always evolve together but in this case sound was so integral to the storytelling we had to conceive the philosophical brush strokes before anything was shot,” says Aadahl, who with Van der Ryn was hired in pre-production to conceptualise the film’s universe.
“The most difficult scene was therefore going to be the opening one since the film’s narrative would depend on how well we laid down the ground rules. An obvious point is that there’s not a lot of words so that the sound design has to help describe the situation.
”Some of this is subtle ambience building to help relay the idea that basically nothing would live very long if it made too much sound. For instance, we use the ambience of crickets but the sound is very even - no single insect stands out from the background. By the same token there are no single birds heard although you see some single crows way up in the sky. Lots of subtle cues like this in the sound design help establish the set-up.
“At the same time, it’s all contextual. You could make a louder sound if a background sound masks your own, as is the case later in the film at a waterfall. So, at the start of the film we emphasise atmospheric sounds outside the drug store which provide cover, to an extent, for the characters inside.”
The sound designers next developed a set of aural identifiers, or “sonic envelopes” in their terms, to tell the story from different perspectives. These included the mother and father (played by Emily Blunt and Krasinski), the stethoscope used to listen to the heartbeat of the baby the mother is carrying, the shortwave radio, and Regan, their deaf daughter (played by Millicent Simmonds, who is herself deaf).
“With Regan we made two envelopes, one when she has her cochlear implant turned on, and another when it’s off,” Van der Ryn explains. “Millicent and her mother were a great help in designing this since they were able to explain that when the implant is on she can hear albeit in very muffled tones. When it is off, she is in complete silence, something we built into the script at three moments.”
Going to this “digital zero” was not something either of them had done on a project before.
“We didn’t want this to feel like a sound experiment but I think it says a lot about the bravery of the filmmakers and the studio to go with this idea,” says Aadahl. “We had to judge how far to stretch it.”
What’s more, the film only has one musical cue, Neil Young’s ‘Harvest Moon’ heard over the radio, and a spare if menacing score from Marco Beltrami that had to blend into the naturalistic details.
“Too much music and it would pull the audience back into the cinema and away from feeling like they are participating in the story world,” he adds.
Creating the creature itself meant solving a paradox. How could one compose a signature style for a creature which is violently disturbed by sound?
“The main direction we got from John was that it’s got to be terrifying and scary,” explains Van der Ryn.
“We got to a point early in design when we had too much sound for them and we realised that it’s scarier when they made less noise. These are creatures whose sense of hearing is amplified beyond what a human would hear, so we started to pull it back. It’s a bit like Jaws. The less you see or hear, the scarier it is.”
They were also working in the dark, since the VFX team had yet to devise what the creature would look like or how it would move.
“While sound is painful for the creatures they rely on sound to navigate. So, using the concept of echolocation and sonar we played around with the real-life clicks and squeaks of animals like Beluga whales, dolphins and bats.”
When these sounds were found to be “too relatable” they decided to create something that no-one had heard before. To the patterns of echolocation they added an electrical component, taken from a story plot point involving static, interference and feedback from TVs and Regan’s implant.
“In any project we treat ourselves as the first audience,” says Van der Ryn. “Our decisions are often about how we respond on a gut level. A lot of A Quiet Place is based on that emotion. Do we feel goosebumps? If so, we’re onto something.”
While the film is primed for the hermetically sealed world of the cinema, with a Dolby Atmos mix in the best theatres, how will it play in the more hectic living room?
“Turn off the dishwasher,” is Aadahl’s advice. “And the washing machine. Close the door. Turn off the lights. Ideally wear headphones. We’re all multi-tasking so much everywhere, even watching TV, that we rarely get the chance to just focus and listen.”


Fitter, happier, more productive meeting room solutions


Corporate marketing for ATEN and Inavate

Collaboration brings people together, but for it to be really successful it has to be made easy. The Presentation Switch Series from ATEN aims to integrate all AV sources without the need for extra devices.
With technology constantly changing it can be daunting to upgrade infrastructure. This is the case no matter the sector you work in but particularly so in the cash-strapped, time-pressured corporate and education environments.
Companies need their employees to work smarter, faster and more productively. AV collaboration technology can bridge this gap, allowing companies to expand their reach without expanding their physical spaces.
The modern workforce and the millennial student population share the same high expectations of real-time and easy connectivity with colleagues, managers or lecturers regardless of whether the screen is local or offsite.
Along with the rising demand for collaboration comes the need for properly designed and equipped meeting spaces. Learning environments should connect students and educators ideally with a system that’s easy for any educator to use.
Meeting rooms are used by people with various levels of technical ability, so it’s important to keep things simple. This doesn’t mean you can’t include advanced tech – just that the functionality needs to be made available in a way that is intuitive and easy to navigate.
Interoperability is key.
Devices in the meeting room might be incompatible with one another. Many of them require individualised inputs that have to be passed around during meetings and presentations.
To improve meeting technology, companies should look for an AV system that supports different kinds of devices across multiple platforms, without having to worry about compatibility.
Investing in a well-designed conference room with a flexible videoconferencing system, intuitive presentation software and reliable sound quality will achieve higher productivity and learning outcomes than a poorly designed space, which can leave the people who are using it frustrated, confused and embarrassed by their inability to make the technology work.
ATEN Presentation Switch Series
These multi-in-one solutions incorporate leading-edge functionality – video matrix switching, audio, extension, streaming and analogue-to-digital conversion – into one compact enclosure. The design simplifies AV integration by eliminating the need for numerous individual components and the compatibility challenges that accompany them.
With the user in mind, the series comes with a straightforward and accessible OSD and Web GUI to streamline operation for both local and remote participants. It integrates all the multi-format audio/video sources you need without calling for extra devices.
Models in the series are made with unique features to satisfy a wide variety of meeting space demands in meeting rooms, boardrooms, conference rooms, classrooms or other presentation environments.
Seamless collaboration
The ATEN VP2730 is a 7 x 3 multi-functional presentation switch integrating a video matrix switch with scaler, streaming, audio mixer, HDBaseT extender and analogue-to-digital converter into one compact device.
With seven multi-format inputs to two HDMI and one HDBaseT high definition outputs, this device empowers larger meetings and facilitates collaboration between local and remote participants in boardrooms, conference rooms, lecture halls or distance-learning classrooms.
Conference room
Those faced with designing a solution for a typical conference room need to integrate various sources that use different analogue and digital interfaces. Audio is a critical consideration with a requirement to both extract audio and connect to the existing audio system in addition to overlaying the microphone input on top of other audio sources to aid collaboration.
The VP2730 is a compact device that consolidates multiple source formats simultaneously and displays them as your choice of PiP (picture in picture), PbP (picture by picture), quad-view or more. Bidirectional AV source streaming is enabled via a web interface for small-scale meetings but it can just as easily integrate with advanced AV conferencing systems for enterprise wide meetings.
A special Moderator Mode lets the moderator discreetly access any output display to switch content or change settings without interrupting the meeting flow; and there’s built-in support for audio embedding and de-embedding as well as mixing the mic inputs with any external analogue or digital audio source.
Lecture Halls
Challenges for AV integration in these spaces often come down to managing signals to and from displays up to 70 metres away. You will need to extract audio and connect to the venue audio system and ensure there are no black screens when switching sources. That’s where long-distance extension via HDBaseT comes in.
Effortless presentations
The ATEN VP1920 is a 9 x 2 3-in-1 presentation switch integrating video matrix switching, audio processing and analogue-to-digital conversion. With nine inputs to two 4K outputs, it is designed to boost the efficiency and impact of professional presentations in small-to-medium corporate and education environments such as meeting rooms, classrooms and training rooms, as well as other presentation settings such as exhibition centres or hotels.
Finding a switch that can integrate various legacy and next-generation sources, each with different analogue/digital interfaces, is a perennial challenge, but alongside this the typical meeting room needs to be PC-equipped for participants without laptops. The meeting moderator needs to be able to manage two PCs with a single keyboard/mouse, and yet the overall solution will ideally minimise device count to keep configuration simple.
The VP1920 packs a lot into its compact footprint. Switching is fast between six HDMI and three combo inputs (HDMI/VGA, HDMI/ DisplayPort, HDMI/Component/Composite) to two HDMI outputs with support for Coaxial, Toslink, stereo and audio outputs so no device gets left behind.
With USB control functionality integrated into one device, the VP1920 can switch the control focus independently to control two PCs. A special Source Preview mode allows users to quickly identify and switch to the target content.
No more guessing which port connects to which content source.
Meeting rooms and classrooms
For this application, integrators are often faced with having to connect equipment with a mix of legacy and newer interfaces, perhaps including a document camera, touch panel, set-top box, audio amplifier, and more.
A solution will therefore require a matrix-style display of different sources ideally across two screens. It should be possible to further throw any interactive touch panel content on a larger projector screen for the class. And of course, the end user wants as simple a configuration as possible to avoid any incompatibility.
Here’s where the VP1920 scores big. It consolidates multiple source formats and integrates them easily with a digital podium whilst maintaining a small profile.
Your choice of Matrix, Mirror, and PiP display modes meet the needs of all types of event or presentation styles while the integrated touch panel mirrors the content to a projector screen.
 System build is radically simplified by reducing the number of additional switches / converters and redundant USB extenders. With streamlined control from front panel pushbuttons, IR remote controller, OSD and RS232, the VP1920 also lowers operation complexity.
Swiss Army knife
In sum, the ATEN Presentation Switch Series is the AV integrator’s Swiss Army knife. The VP1000 Core series offers multi-format solutions for high-quality video and audio switching and converting.
The VP2000 range is ideal for collaborative presentation facilitating frictionless distance-free content sharing along with advanced audio.
Now, all you need to do is show up and start presenting!


Monday, 21 January 2019

Behind the scenes: First Man

IBC


The soundscape of Neil Armstrong biopic First Man fuses authentic reconstruction with the insanity of the race into space.
Director Damien Chazelle’s inner space odyssey of the first man to set foot on the moon required that the filmmakers not only authenticate every detail but also interpret Neil Armstrong’s out-of-this-world experience.
That’s why, instead of the explosion of a rocket, at times you might hear the roar of lions or elephants or even a stampede of wild animals bursting out of the sonic mix.
“Damien wanted space travel to feel like a journey to the underworld since it is somewhere where no-one is supposed to go,” explains sound designer Ai-Ling Lee, who also served as supervising sound editor and re-recording mixer on the film. “At certain points we needed to go deeper into Neil’s psychology and amp up the intensity in ways that might subconsciously surprise you.”
Lee re-united with supervising sound editor Mildred Iatrou Morgan for First Man, after the duo made history with Chazelle’s La La Land as the first female sound team ever to be nominated for an Oscar.
They explain that the director emphasised the need to approach First Man as a documentary, but also sent them a number of films to watch as references for the soundscape he was seeking. In keeping with the lo-fi visual effects and use of 16mm, 35mm and IMAX 65mm, the sound design has a gritty, analogue feel that echoes both the 1960s era and the visceral danger of riding the fragile craft.
For going into space, the touchstone was Das Boot, Wolfgang Petersen’s submarine drama of men confined in a death trap. Son of Saul and Saving Private Ryan were inspirations for emotional tone and audience immersion. The air traffic control chatter in Paul Greengrass’ United 93 was a guide for ground to pilot communications while horror films Rosemary’s Baby and The Exorcist were highlighted for their off-kilter audio.
“In pre-production Damien sent us hand drawn animatics with some sample sounds that he’d researched for some of the set pieces of the movie, like the Gemini 8 sequence,” says Lee. “From that starting point we dived into our own research to make the sound as accurate as possible.”
They got in touch with former Gemini and Apollo astronaut Jim Lovell, who described to them over email how it felt to be strapped into the capsule. “What came across from him was the bone-shuddering noise of the Saturn V [the rocket that carried Apollo into space and still the most powerful ever made] and that, when suited up, how little he could actually hear. Just the sound of your own breathing and the air hissing from the life support system.”
They scoured NASA’s extensive archive, which included audio and transcripts of the mission’s in-flight comms, as well as TV and radio broadcasts of the actual launches, and used these to portray the same events in the film.
“I wanted to become very familiar with the way they spoke, the technical terms used and the cadence of communications,” says Iatrou Morgan, who handled all the film’s dialogue, production sounds and ADR. “I wasn’t sure how much we’d repurpose and how much would be from the actors.”
Chazelle wanted all 30 actors in the Mission Control scene to be individually wired for sound. “[Supervising dialogue editor] Susan Dawes worked really hard on this to deliver a cleaned-up version which had to have a background of movement and paper rustling – the chaos of the room’s environment – yet bring the dialogue into focus when needed,” says Iatrou Morgan. “Even then, the dialogue needed to be unpolished, like a documentary.
Where original recordings were used, such as Mission Control’s comms to the astronauts as they approach the moon landing, the challenge was to match it with the actor’s dialogue.
“I had to make sure that the dialogue was clear enough to understand, but leave in a lot of texture and fuzz. As the astronauts get further away from earth the crackle and delay is greater to emphasise the distance until it cuts out on the dark side of the moon.”
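The growing delay Iatrou Morgan layered in has a physical basis: radio signals travel at the speed of light, so latency scales with distance from Earth. A quick sketch, with round-number distances that are approximations rather than mission figures:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_s(distance_km: float) -> float:
    """One-way radio latency in seconds over a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S

# Approximate distances from Earth (illustrative).
LOW_EARTH_ORBIT_KM = 400        # roughly where Gemini flew
MOON_KM = 384_400               # mean Earth-Moon distance

leo_delay = one_way_delay_s(LOW_EARTH_ORBIT_KM)  # ~0.0013 s
moon_delay = one_way_delay_s(MOON_KM)            # ~1.28 s
```

At lunar distance the one-way lag alone is well over a second, which is why stretching the crackle and delay as the astronauts recede reads as authentic rather than stylised.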
The sound in the build-up to the moon landing itself is amped up so that, when the door of the Eagle opens and Armstrong steps out, there is sudden silence – much as many people will have watched it live on TV with bated breath.
Ryan Gosling had a recording of Armstrong’s famous words playing back as he performed the line but Iatrou Morgan was tasked with getting the phrasing closer still.
“I used a plugin called Revoice Pro so that Ryan’s performance matched Neil’s in rhythm and cadence,” she explains. “I stole the static from the original and added that in to make it near identical.”
Some sound effects had to be rebuilt from scratch. While NASA retained audio of the Saturn V, the recording was distorted and not of production quality. The nearest to its power today in terms of rocket-fuelled propulsion is the Falcon Heavy made by Elon Musk’s SpaceX which just happened to have its debut launch last February.
In return for sharing professionally recorded samples of the lift-off, SpaceX permitted Lee, Iatrou Morgan and sound effects recorder John Fasal to plant dozens of mics around the launch pad at Cape Canaveral. They used high dynamic range mics which could withstand a heavy load of sound pressure at various distances from lift-off, so that the recordings could later be matched with the shot selections made by editor Tom Cross. When the SpaceX rocket re-entered the Earth’s atmosphere, its sonic boom was used for the film’s opening sequence of Armstrong piloting an X-15.
Also bedded in the mix was the deep rumble of the rocket’s thrusters adapted from low frequency recordings of nitrogen gas being blasted into NASA’s Jet Propulsion Laboratory. Additional raw sounds were captured from a SpaceX Falcon 9 launch at the Vandenberg Air Force Base and from other companies developing moon landers out in the Mojave Desert.
To convey the turbulent racket and roll of the ‘tin can’, the pair recorded the shakes and vibrations of motion simulator rides with additional metallic stress and strain made by foley walkers Dan O’Connell and John Cucci.
Re-recording mixer Frankie Montaño went to such meticulous lengths as recording the buttons and control switches from within one of the original lunar modules. He also made recordings of the click of helmets, umbilical attachments and the movements that the space suits of the era made.
“As Tom [Cross] began his editorial we sent him a kit of sounds – turbulence, shake, helmet and glove clicks - so he could weave these textures into his cut early on,” says Lee.
The award-winning sound duo have been working on and off together for several years on projects including We Bought a Zoo, Hitchcock and Wild.
Iatrou Morgan’s credits include LA Confidential, Ed Wood, The Fast and the Furious, Rise of the Planet of the Apes, Walk the Line, Steven Spielberg’s Catch Me If You Can and The Terminal, and Terrence Malick’s The Tree of Life. She is currently working on Stargirl, directed by Julia Hart.
Lee’s feature work includes Spider-Man 2, Transformers: Dark of the Moon, X-Men: Days of Future Past and Deadpool. She is currently working on Lucy in the Sky, a feature about an astronaut (Natalie Portman) who returns to Earth from a life-changing mission.