Wednesday 30 October 2019

This is extremely clever advertising tech

RedShark News
The perimeter boards surrounding sports or music events are routinely overlaid with sponsored virtual graphics during broadcast coverage, opening up new revenue streams for broadcasters and rights owners. So far, so corporate, but a Swiss company has come up with a novel solution which enables broadcasters to show sports fans in different countries different perimeter advertising simultaneously, within the same live-streamed event.
For example, a viewer watching a broadcast stream in Japan will see advertisements for Japanese products on the perimeter LED screen while a viewer of the same stream in France will see French ads on the same screen. Four unique streams can be broadcast simultaneously.
Parallel Ads, devised by Appario Global Solutions (AGS) of Steinhausen near Zurich, can be integrated into any professional broadcast camera chain but is being marketed in conjunction with Sony.
The solution is based on what’s called Dynamic Content Multiplication, a patented technique of AGS that enables the broadcast of up to four simultaneous feeds for event-based LED advertising from a single camera.
The technical approach of parallel advertising differs from that of many market participants using virtual overlays: a small hardware chip allows multiple feeds to be shown on the perimeter boards in the stadium. To put it simply, there are several boards in one, allowing advertising in parallel – while keeping the fan experience in the stadium untouched.
Unlike virtual ads, the parallel DCM feeds can be viewed on all camera angles, and will be included on slow-motion shots and highlights. Also unlike virtual overlays, there’s no post-processing.
The LED boards are synchronized with the cameras, permitting capture of more than one signal at the same time: standard high-frame-rate functionality is used to output several signals at standard speed. The multiple feeds are fed directly to the vision mixer in an OB truck. No additional hardware, post-processing, tracking or other pre/post setup is required.
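AGS doesn’t publish the internals of Dynamic Content Multiplication, but the principle described here can be illustrated with a toy sketch: if the boards cycle through the regional feeds in sync with a camera running at a multiple of the delivery frame rate, each regional output is simply every Nth captured frame. The frame counts, feed names and function below are illustrative assumptions, not AGS’s implementation.

```python
# Illustrative only: de-interleaving a synchronised high-frame-rate capture
# into per-region feeds at standard speed. Not AGS's actual DCM pipeline.

def demultiplex_feeds(hfr_frames, num_feeds=4):
    """Split an HFR capture into num_feeds standard-rate feeds.

    Assumes the LED boards cycle regions 0..num_feeds-1 on successive
    frames and the camera shutter is genlocked to that cycle.
    """
    feeds = [[] for _ in range(num_feeds)]
    for index, frame in enumerate(hfr_frames):
        feeds[index % num_feeds].append(frame)
    return feeds

# Example: an eight-frame burst split into four regional feeds.
capture = [f"frame_{i}" for i in range(8)]   # stand-in for video frames
japan, france, uk, germany = demultiplex_feeds(capture)
print(japan)   # ['frame_0', 'frame_4'] -> every fourth frame
```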
What’s more, it is immune to interference from the elements, such as rain, fog or snow.
Sony has paired the ad-tech with its HDC-5500 cameras, which offer both UHD and global shutter technology.
The human eye will only be able to detect the venue feed, so the viewing experience is unchanged. This is possible because of a patented process in which, in a multiple-framerate mode, the tech ‘deletes’ the additional signals for the human eye while the cameras are able to capture multiple signals at the same time. The regional images are shown in every frame but are invisible to the naked eye due to the deletion process.
According to Sony, this makes the technology perfect for broadcasting fast-paced sporting or music events that require freedom of camera movements and angles.
Parallel Ads allows rights owners “to engage with different sponsors simultaneously or tailor sponsored content to each market; greatly increasing its impact and value,” claimed Karsten Schroeder, AGS’ chair in a press release. “In conjunction with cameras from Sony, broadcasters now have an ultra-low-latency, real-time solution that doesn’t sacrifice key features such as slow motion.”
The tech has already been used at golf’s Betfred British Masters this year and by German Bundesliga club Bayer Leverkusen.

This is the world's smallest commercially available sensor, and it really is tiny!

RedShark News
Like a scene from the unforgettable 1966 sci-fi fantasy Fantastic Voyage starring Raquel Welch, surgeons can now send a sub-miniature camera into the smallest veins of the human body and capture video with a chip that is officially the world’s smallest.
At just 0.575 mm x 0.575 mm x 0.232 mm, the OV6948 chip from US-based OmniVision Technologies is a fresh entrant into the Guinness Book of Records as the smallest commercially available image sensor. That’s the size of a grain of sand.
It fits inside a “wafer-level camera module”, the OVM6948-RALA, which measures just 0.65mm x 0.65mm x 1.158mm and offers a 120-degree field of view.
The OV6948 has a 1/36-inch optical format and an image array capable of capturing 40-kilopixel (200 x 200 resolution) colour video at 30 frames per second. Each photosite measures just 1.75 µm across.
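For a sense of scale, those figures imply a very modest raw data rate. A rough back-of-the-envelope calculation (the 10 bits per pixel is an assumption for illustration; OmniVision’s actual output format may differ):

```python
# Rough data-rate estimate for a 200 x 200, 30fps sensor.
# The 10-bit-per-pixel figure is an assumption for illustration only.
width, height, fps, bits_per_pixel = 200, 200, 30, 10

pixels_per_frame = width * height             # 40,000 = 40 kilopixels
bits_per_second = pixels_per_frame * fps * bits_per_pixel
print(f"{pixels_per_frame} pixels/frame")
print(f"{bits_per_second / 1e6:.1f} Mbit/s")  # ~12 Mbit/s raw
```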
That resolution may seem a little low but not when you consider that internal medical procedures for neurology and cardiology or spinal injuries are currently performed using far lower-resolution fibre optic feeds – or simply carried out blind by experienced surgeons.
Due to the sensor's low power consumption, less heat is generated at the distal tip of the endoscope (try not to think about it), improving patient comfort and thus permitting longer-duration procedures. The sensor is capable of analogue data transmission up to 4 metres with apparently minimal signal noise.
“Previously, procedures in the body’s smallest anatomy were performed either blind or using low-quality images from fiberscopes, as existing cameras were too big and reusable endoscopes were not cost effective,” explained Omnivision’s Aaron Chiang in a press release.
Initially developed for wince-inducing medical endoscopes and catheters, the OV6948 (which is designed to be disposable) might readily find its way into a wide range of applications outside of healthcare, including internet-of-things (IoT) devices and wearables.
One could have entire articles of clothing sewn with the sensors for some artistic effect. You can bet the CIA has already ordered a truck full for ultra-covert surveillance. Someone (art-autopsist Gunther von Hagens, perhaps) will have a series of internal examinations performed on their organs, with the video stitched together and entered for a Turner prize.
While this may be the outer limit for miniaturised silicon, it could be far from the limit of molecular nanotechnologies which build objects from individual atoms.

Tuesday 22 October 2019

Ad management: Broadcast stability versus online chaos

IBC
Workflows for TV ads are a model of consistency compared to the wild west of digital, and the approach of traditional TV has a lot to teach the ‘new media’ of online video.
You’ve created your video ad and you want to get it to screen? You’ll need an advertising management platform, of which there are a handful of major ones operating globally. But while TV ads follow an established, fairly linear groove, digital ad delivery workflows are by some accounts less well organised.
Taking video ads destined for broadcast first, we pick up the story once the creative process is done and dusted.
  • Step 1: The creative agency creates an order (on an online ad-management platform) including all the stations needed for playout
  • Step 2: The post house uploads the finished ad having already cleared it for compliance (in the UK via Clearcast)
  • Step 3: The ad is passed through automated technical checks of file type, sound levels, video levels and duration against the broadcast specs
  • Step 4: The file is auto transcoded into different encodes needed by each broadcaster
  • Step 5: And delivered automatically into the broadcast workflow at a playout facility, slotted into the traffic system
That’s a very simplified schema, but it works.
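Step 3, the automated technical checks, is the part that is easiest to sketch. The thresholds and field names below are invented for illustration; real platforms check against each broadcaster’s published delivery spec (and, in the UK, clock and Clearcast metadata too).

```python
# Hypothetical sketch of Step 3: automated technical checks against a
# broadcast delivery spec. Field names and limits are illustrative only.

SPEC = {
    "container": {"mxf", "mov"},
    "max_loudness_lufs": -23.0,   # e.g. an EBU R128-style target
    "allowed_durations_s": {10, 20, 30, 60},
}

def run_checks(ad):
    errors = []
    if ad["container"] not in SPEC["container"]:
        errors.append(f"unsupported container: {ad['container']}")
    if ad["loudness_lufs"] > SPEC["max_loudness_lufs"]:
        errors.append(f"too loud: {ad['loudness_lufs']} LUFS")
    if ad["duration_s"] not in SPEC["allowed_durations_s"]:
        errors.append(f"non-standard duration: {ad['duration_s']}s")
    return errors

ad = {"container": "mxf", "loudness_lufs": -23.5, "duration_s": 30}
print(run_checks(ad) or "passed technical checks")
```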
Digital ad workflow
Digital workflow, on the other hand, is, according to Doug Conely, chief product officer at video ad management platform Peach, one of “email chaos” and an almost “anti-workflow.”
“A lot of digital workflows start when the TV workflow ends by picking up the finished set of assets from the TV shoot,” he says. “This has to go to get repurposed for different aspect ratios and different durations.
The video can move from the post house back to the creative agency, on to the digital media agency and maybe a different creative agency for online. At some point, it will be uploaded to an ad server and tags created to track the campaign properly, then back to the media agency to load into different buying systems and attach the tracking tags.”
The bulk of this is set up around an email workflow, with spreadsheets and low-resolution video files flying around. While the creative process for broadcast is by and large the same for every client, with little involvement from media agencies, the digital wings of media agencies are far more involved in creative workflows in online video.
Specialist digital teams, specialist programmatic teams and specialist social teams might all be included in the creative-to-delivery process of a single campaign. What’s more, every agency does it differently and even within agencies, the workflow can differ for different clients.
“One of the consequences of this chaotic workflow for online video is that a lot of campaigns start late,” says Conely.
“People would get fired in TV for missing airtime but it doesn’t happen because of the mature workflow. No-one gets fired for late starting campaigns in digital, despite the very real implications for buyer and seller.”
There are no technical video quality checks in the online workflow either. This was fine for lightweight files on low-resolution screens but is a bigger deal with connected TV. Only BVOD in the UK is required to use cleared copy – a source of much arm-waving among broadcaster sales teams, who declare that digital media owners are not playing to the same standards.
While traditional linear TV is considered ‘old media’, it is ironic that, in terms of the consumer experience, it has a lot to teach the ‘new media’ of online video. “In terms of a robust, scalable workflow, with a greater emphasis on the technical quality of the video and tracing what content went where, broadcast TV is still leading the way,” he says.
“So many people touch the assets in different ways before anyone hits run in the ad server. The smart effort should be in the creative, audience and buying strategies, not in the fulfilment mechanism. This is the case in TV but not in online today.”
Server side ad insertion
Of course, the broadcast side is never quite as simple as our five steps made out. An increasing number of channels and services are delivered as IP streams, with digital solutions replacing traditional playout.
This is where a media owner might want to choose Server-side ad insertion (SSAI), an increasingly prevalent process aimed at giving them greater control over the end user’s experience.
Explains Sam Wilson, regional vice president at online video platform SpotX: “SSAI is used to insert ads into a piece of high-quality, long-form content. For media owners, the ad insertion process allows for a buffer-less transition from content, to ad, then back to content, providing the same user experience as broadcast TV.”
This process involves an SSAI vendor that sits between the online video player and the ad server to mediate the stitching of the ads into the content. Advantages include the prevention of ad blocking, adaptation to different bandwidths to support poor connections (no dreaded spinning wheel), and the ability for broadcasters to insert digital ads over a linear slate.
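Mechanically, that stitching usually happens at the manifest level: the vendor rewrites the stream’s playlist so that ad segments sit in line with the content segments, and the player never knows the difference. The HLS-style snippet below is a minimal, hand-rolled illustration of the idea, not any particular vendor’s implementation.

```python
# Minimal illustration of manifest-level ad stitching (HLS-style).
# Real SSAI vendors also handle encryption, precise timing, tracking
# beacons and per-viewer decisioning; this only shows the splice.

content_playlist = [
    "#EXTINF:6.0,", "content_001.ts",
    "#EXTINF:6.0,", "content_002.ts",
]
ad_break = [
    "#EXT-X-DISCONTINUITY",
    "#EXTINF:6.0,", "ad_seg_001.ts",
    "#EXTINF:6.0,", "ad_seg_002.ts",
    "#EXT-X-DISCONTINUITY",
]

def stitch(content, ads, insert_after_segment):
    """Insert ad segments after the given content segment index."""
    cut = 2 * insert_after_segment          # each segment is 2 lines here
    return content[:cut] + ads + content[cut:]

stitched = ["#EXTM3U", "#EXT-X-TARGETDURATION:6"] + stitch(
    content_playlist, ad_break, insert_after_segment=1)
print("\n".join(stitched))
```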
“Online video typically hasn’t cared about frame accuracy on duration (i.e. exactly 30 seconds for an ad),” Conely says. “If you’re stitching into a live stream, you’d better be frame accurate.”
Server-side ad insertion can be used to replace traditional ad breaks with digital ads, whether that is a live or pre-recorded event. Broadcasters will also utilise an SSAI vendor’s technology to handle high spikes in concurrency normally seen with large-scale events.
“Some vendors have the ability to pre-cache ads, so they can start calling their ad server before the break actually occurs,” Wilson says.
Changing workflows
A shift to 4K and demand for multiple versions for addressable ads would make a shift to cloud storage and transfer inevitable. Sending work-in-progress and finished video files to a cloud storage platform would offer greater security than FTP and make it less demanding to transfer large files in and out of premises.
This will happen as a matter of course with future rounds of infrastructure spend, but the industry appears to be in no hurry. One driver would be the move to UHD 4K ads, but the dial has barely moved on this in the UK.
There are reports that some ads are still delivered as SD and upscaled to HD.
The other benefit of cloud workflows would be to fulfil larger orders for addressable ads – where versions could be created with more automation, more efficiently, in the cloud.
“The tools for dynamic ads have been around for a while but outside the occasional case study at Cannes, there have been few people using it in anger. That has just started to tip in the last six months or so, particularly in the US,” says Conely. “The jury is out on whether the costs justify the results but it is worth exploring.”
Targeted ads
Advanced TV and online video are both powered by targeting technologies that enable video advertising to be more personally relevant. That can be as simple as more granular dayparting controls, through to changing creative in real time according to local weather, through to audience segmentation (done with personal data regulation in mind, of course).
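The decisioning itself can be as plain as a lookup: pick a creative variant by daypart, local weather and audience segment. The rules below are invented purely to illustrate the shape of such a decision, not any platform’s actual logic.

```python
# Toy creative-selection rule: daypart + weather + segment -> variant.
# Entirely illustrative; real decisioning also weighs frequency capping,
# consent/privacy signals and campaign pacing.
from datetime import datetime

def pick_variant(now: datetime, weather: str, segment: str) -> str:
    daypart = "breakfast" if now.hour < 10 else "daytime" if now.hour < 18 else "evening"
    if weather == "rain" and segment == "commuters":
        return f"{daypart}_umbrella_offer"
    if daypart == "evening" and segment == "sports_fans":
        return f"{daypart}_matchday_creative"
    return f"{daypart}_generic"

print(pick_variant(datetime(2019, 10, 22, 8, 30), "rain", "commuters"))
# -> breakfast_umbrella_offer
```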
Ad targeting is happening online and to an extent on TV via buying and delivery mechanisms like AdSmart but there is a risk to premium video.
“There’s a reason why people don’t like online display ads - they can look overly templated,” says Conely.
“The industry creates beautiful TV ads with emotion and rich visuals much of which would be destroyed by forcing it into templates. The challenge back to the industry is how we can use technology to tell better, more personally relevant stories without losing the ‘big idea’.”
He adds, “Ad tech has been on a wild ride for more than ten years with ad exchanges, demand-side platforms, programmatic, header bidding, bid shading, VAST 4.1… but it’s been dominated by math men, not mad men. We feel the pendulum is swinging back to focus more on the creative opportunities from technology and not just the media opportunities.
“To deliver on the promise of relevance we need to find ways to deliver the many hundreds or thousands of variants required for more personally relevant video creative at a cost that drives brand campaign metrics. At the moment, the technology and workflow of the creative lags behind the targeting.”

Thursday 17 October 2019

Minority Report-style interfaces just took a step closer to reality

RedShark News
Minority Report has a lot to answer for, not least the stimulus given to a million articles like this about the future of the human-machine interface. Controlling internet-connected devices with gesture and voice is widely seen as the future but nothing has come close to the slick air interface imagined in Steven Spielberg’s 2002 movie.
Google hasn’t cracked it either – but it’s got something that has potential and it’s already inside an actual product, the Pixel 4 phone.
It’s disarmingly simple too and stems from the idea that the hand is the ultimate input device. The hand, would you believe, is “extremely precise, extremely fast”, says Google. Could this human action be finessed into the virtual world?
Google assigned its crack Advanced Technology and Projects team to the task and they concentrated research on radio frequencies. We track massive objects like planes and satellites using radar, so could it be used to track the micro-motions of the human hand?
Turns out that it can. A radar works by transmitting a radio wave toward a target; the radar’s receiver then intercepts the signal reflected from that target. Properties of the reflected signal, including energy, time delay and frequency shift, capture information about the object’s characteristics and dynamics, such as size, shape, orientation, material, distance and velocity.
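Two textbook radar relationships sit behind that sentence (generic formulas, not Soli-specific figures): the round-trip delay of the echo gives the range to the target, and the Doppler shift gives its radial velocity.

$$R = \frac{c\,\tau}{2}, \qquad f_d = \frac{2 v_r}{\lambda} = \frac{2 v_r f_c}{c}$$

where $\tau$ is the measured round-trip delay, $c$ the speed of light, $v_r$ the hand’s velocity towards or away from the sensor, $\lambda$ the carrier wavelength and $f_c$ the carrier frequency.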
The next step is to translate that into interactions with physical devices.
Google did this by conceiving Virtual Tools: a series of gestures that mimic familiar interactions with physical tools. Examples include a virtual dial that you turn as if miming turning a volume control. The virtual tools metaphor, suggests Google, makes it easier to communicate, learn, and remember interactions.
While virtual, the interactions also feel physical and responsive. Imagine a button between thumb and index finger. It’s invisible but pressing it means there is natural haptic feedback as your fingers touch. It's essentially touch but liberated from a 2D surface.
“Without the constraints of physical controls, these virtual tools can take on the fluidity and precision of our natural human hand motion,” Google states.
The good news doesn’t end there. Turns out that radar has some unique properties, compared to cameras, for example. It has very high positional accuracy to sense the tiniest motion, it can work through most materials, it can be embedded into objects and is not affected by light conditions. In Google’s design, there are no moving parts so it’s extremely reliable and consumes little energy and, most important of all, you can shrink it and put it in a tiny chip.
Google started out five years ago with a large bench-top unit including multiple cooling fans but has redesigned and rebuilt the entire system into a single solid-state component of just 8mm x 10mm.
That means the chip can be embedded in wearables, phones, computers, cars and IoT devices and produced at scale.
Google developed two modulation architectures: a Frequency Modulated Continuous Wave (FMCW) radar and a Direct-Sequence Spread Spectrum (DSSS) radar. Both chips integrate the entire radar system into the package, including multiple beam-forming antennas that enable 3D-tracking and imaging.
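For the FMCW architecture, a textbook relationship gives a feel for the raw range resolution: it is set by the sweep bandwidth $B$, not the carrier frequency. Taking an illustrative bandwidth of a few gigahertz in the 60GHz band Soli occupies (the exact figure here is an assumption):

$$\Delta R = \frac{c}{2B} \approx \frac{3\times 10^{8}\ \text{m/s}}{2 \times 4\ \text{GHz}} \approx 3.75\ \text{cm}$$

That is far coarser than the sub-millimetre finger motions being sensed, which is why Soli’s published approach leans on fine temporal and Doppler/phase changes across successive sweeps rather than on spatial resolution alone.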
It is making available an SDK to encourage developers to build on its gesture recognition pipeline. The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data and gesture labels and parameters at frame rates from 100 to 10,000 frames per second.
Just imagine the possibilities. In the Pixel 4, Soli is located at the top of the phone and enables hands-free gestures for functions such as silencing alarms, skipping tracks in music and interacting with new Pokémon Pikachu wallpapers. It will also detect presence and is integrated into Google’s Face Unlock 3D facial-recognition technology.
Geoff Blaber, vice president of research for the Americas at analyst CCS Insight, says it’s unlikely to be viewed as game-changing, but that assessment marginalises the technology and Google’s ambition for it.
In fact, this radar-based system could underpin a framework for a far wider user interface for any or all digital gadgets. It could be the interface which underpins future versions of Android.
Google has hinted as much. In a web post, Pixel product manager Brandon Barbello said Soli “represents the next step in our vision for ambient computing”.
“Pixel 4 will be the first device with Soli, powering our new Motion Sense features to allow you to skip songs, snooze alarms, and silence phone calls, just by waving your hand. These capabilities are just the start and just as Pixels get better over time, Motion Sense will evolve as well.”
Ambient computing is a way of describing the volume of internet-connected devices likely to be pervasive in our environment – particularly the smart home – over the next few years. Everything from voice-activated speakers to heating, light control, CCTV and white goods will be linked to the web.
Google makes a bunch of these (from smoke detectors to speakers under its Nest brand) and wants to link them up under its operating system (self-fuelling more data about individuals to refine the user experience). The battle for the smart home will also be fought between Microsoft, Apple, Samsung and Amazon. Soli may be the smart interface that links not just Google products, but perhaps all these systems together.
Of course, it’s early days. The virtual gestures may be intuitive, but we still have to learn to use them; our virtual language needs to be built up. Previous gesture recognition tech like the IR-driven Kinect and the Wii proved an interesting novelty but clunky in practice. Gesture will work best when combined fluently with voice interaction and dovetailed with augmented reality, so that we can view and manipulate text, graphics, even video, virtually.
Just like Minority Report – except without the gloves which Tom Cruise’s PreCrime detective wore.
It couldn’t get everything right.

Thursday 10 October 2019

Behind the scenes: Le Mans ‘66

IBC
James Mangold’s Le Mans ’66 - also known as Ford v Ferrari - looks at the iconic 24 Hours of Le Mans race. Cinematographer Phedon Papamichael talks about shooting emotion in the cold metal of motorsports.
Motor racing drama Le Mans ’66 puts the pedal to the metal for a gritty evocation of the sport’s heyday.
“There’s no point having action car chases if you’re not emotionally connected to them,” says cinematographer Phedon Papamichael ASC GSC. “It becomes boring. We want to feel the intensity of being inside these metal death traps with huge engines strapped to them. We need to feel the asphalt.”
In other hands, Ford v Ferrari (the film’s US title) could be a sterile corporate clash but director James Mangold has always been more interested in character-driven drama. He made the violent X-Men character study Logan and the Oscar-laden Johnny Cash biopic Walk the Line, which was lensed by Papamichael.
Based on real events and inspired by such classic 1960s race films as Grand Prix, the intent was to replicate the rebel vibe of the sun-bleached California car culture that resulted in maverick engineer Carroll Shelby (Matt Damon) and equally gung-ho driver Ken Miles (Christian Bale) taking on the legendary racing dynasty Ferrari at the gruelling marathon of Le Mans.
“This is an old-fashioned Hollywood movie,” Papamichael tells IBC365. “We wanted as much as possible to shoot practically. We built over 35 race cars and all the accidents and collisions are real. There’s only one full CG race car in the entire film and that was a shot that we shortened because it wasn’t really our language.”
Instead of sweeping crane shots or drone moves the races are depicted from the drivers’ POV and the most effective way to do that was to be “super close and super low.”
He had ARRI Alexa Mini and Alexa LF cameras mounted either on the cars or in pods attached to the back or front of car bodies rigged on a chassis and driven by stunt drivers at speeds of up to 100mph.
“Even though the real speeds were closer to 200mph, I wanted the actors to experience the vibration of the car, the g-forces and the sheer racket of these cars that you can’t re-create on a sound stage,” he says.
He typically shot close-ups with wide-angle lenses to capture a sense of the cars passing just inches apart, an effect underscored by the use of anamorphic glass.
“We want to show the limited view the drivers had and convey a sense of their danger and proximity to other cars.”
Since anamorphic lenses don’t cover the entire frame of a large-format camera, he turned to Panavision’s special optics team to customise a set of vintage C series glass for use on the show.
“I was using prototype lenses. They gave this beautiful fall-off in the background that is very painterly and has a natural vignette, which is an effect you usually have to add in post. Anamorphic is really something that works strongly with this movie because you have so much activity in the pits, with all the crews and all the other cars. So, even if you’re in tighter intimate shots, you’re not isolating your actors. You always feel that environment.”
For a wet race scene where spray is getting kicked up by the tyres, he equipped the Ferrari with yellow headlights and gave the Ford warmer white lights.
“That may not be 100% historically accurate, but it acts as a graphical reference so you could identify them in the rearview mirror.”
60s’ vibe
Further visual inspiration came from the films of the 60s and 70s, rather than contemporary interpretations of race car films like Days of Thunder or Rush. The filmmakers wanted grime and gasoline, not glamour and gloss. Grand Prix (1966) starring James Garner was screened on a loop in Mangold’s production office; Papamichael revisited Chinatown (1974) to get a grip on the Kodachrome look of the dry Californian sun, and Paper Moon, a 1973 film shot in black and white, to get a measure of how much film grain to add in post.
“The colours come from [production designer] Francois Audouy’s design and I tried to embrace what was there rather than forcing it with coloured filters or a stylised look,” he says. “For instance, we talked about Detroit’s cooler tones and the desert environment of Willow Springs, where you see Christian Bale and his goggles looking like Lawrence of Arabia. It’s a great contrast with the cooler corporate side of the Ford factory and Michigan, all those Ford Falcons in the assembly line, in a cold steel blue.”
The Le Mans track no longer exists in the way it did in the late 1960s when remarkably there were no barriers on long sections of the circuit. One scene of Christian Bale entering Le Mans town by bus was shot on location but the race scenes themselves were assembled from footage shot at a 400ft pit wall set built at Agua Dulce airport near Los Angeles and on sections of country roads near Atlanta, Georgia.
Editor Mike McCusker even found real footage of the original Wide World of Sports broadcasts of the Le Mans race and added it organically to the cut.
Le Mans ’66 marks Papamichael’s sixth film with Mangold. He also often works with Wim Wenders, George Clooney, and Alexander Payne, earning an Oscar nomination for his cinematography on Payne’s Nebraska.
“James and I have a very similar aesthetic and film language. On set, I might say to my operator ‘boom down, push in, pan a little right’ and on another part of the set the crew would hear Mangold, who doesn’t wear a headset, instructing the very same thing. We’re so in-synch it’s almost scary.”
While Mangold is an alumnus of the California Institute of the Arts rather than film school and has managed to carve a very successful career as an indie filmmaker making Hollywood productions (he executive produced The Greatest Showman), Athens-born Papamichael apprenticed under Roger Corman making erotic thrillers like Poison Ivy. Coincidentally, both filmmakers have fathers who were painters.
“My film inspirations were Cassavetes and Truffaut but James and I both share a bond over Japanese filmmakers like Ozu and Italian neo-realism. I noticed on our first collaboration that he’s very strong visually and will spend a lot of time in colour correction. We’re about the same age too and we both come from outside the studio system, so that helps our connection.
“My father was a painter, photographer, and art director who designed cars. As a kid, I drew cars too. Funny how things turn full circle.”

Wednesday 9 October 2019

3D’s comeback is inevitable - but next year?

RedShark News 
Like the Python parrot, it was only restin’. Far from dead, stereoscopic 3D was always due for resuscitation once the technology for glasses-free (auto-stereo) high-fidelity multi-viewing could be solved.
It still hasn’t – but there’s an inevitability about its development because it’s just so obvious that there’s a need for it.
We see in three dimensions, so why not get all that immersive performance into our screened entertainment? It’s the argument which continues to be made by die-hard practitioners of native stereo 3D filmmaking like director Ang Lee. Everyone knows glasses block out the light, not to mention other people, and they’re grossly uncomfortable.
Computing technology is becoming wearable, friction-free (seamless to use is the jargon) and the goal is to make the interface as transparent and natural as possible.
There is a legion of developments in this field, from Microsoft HoloLens to Magic Leap One, but a couple of recent patents by other tech giants are worth recording.

Apple AR

Last month, Apple published a patent for a mini-projector, or series of mini-projectors, which would fire laser beams into your retina to create 3D imagery. Perhaps this would have to work with some form of AR glasses (which seem to be an open secret at Cupertino), or perhaps be fitted to your iPhone for viewing augmented reality rendered by ARKit.
The patent would seem to fit with Apple’s purchase a year ago of Akonia Holographics, makers of AR headset HoloMirror.
Then, this month, Sony was reported by the Dutch website LetsGoDigital to have patented a 3D holographic display screen.
The patent speaks of pixel elements, light emitters and micromirrors with “at least one micromirror positioned and moveable to direct light from the first emitter outwardly from the display, and to direct light outwardly from the display at a second time at a different angle than light is directed the first time”.
It then states: “Facial recognition can be used in conjunction with the eye-tracking of a viewer to identify the viewer or viewers that the images are being displayed to”.
This plan ties in with an earlier patent filed by Sony for eye-tracking and head-motion tracking technology, so that the 3D image can adjust to the viewer’s line of sight.

Games consoles

Since the patent goes on to say that such components could be used in Sony PlayStation consoles, and name-checks Microsoft, Nintendo and other manufacturers’ VR/AR headsets, there’s speculation that Sony is lining up a holographic device for PS5.
Since the PS5 launch is a year away, that’s possible but we think unlikely. The tech is simply too rudimentary at this stage.
RED’s Hydrogen phone and holographic media ecosystem was premature, to say the least, and even NHK, the broadcaster which has cracked 8K broadcasting (albeit at huge Japanese-subsidised expense), only has a prototype of a basic auto-stereo screen. Its Integral 3D Display was demonstrated at IBC2019 and looked like an interesting novelty.
Volumetric content is widely considered the next generation of video, where the user can experience a sense of depth and parallax. Arguably, it is the creation of content using light-fields which is even harder to crack than the holo-display. It’s the reason why light-field camera maker Lytro folded, with the brains of the company forming Light Field Lab to concentrate on the ‘slightly’ easier nut of holo-displays.
Naysayers to the whole volumetric video/holographic display enterprise remind us that all the attributes for a completely immersive audio-visual experience are contained in currently obtainable UHD specs: 4K/8K with high dynamic range, high bit rates and surround sound - no special glasses needed.

Tuesday 8 October 2019

Polly Morgan launches Lucy In The Sky with support from Panavision

Panavision
Lucy In the Sky is the story of an astronaut who, on returning to Earth, begins to fall apart and lose touch with reality. Co-written and directed by Noah Hawley, the film is inspired by the real-life romantic drama of a U.S. astronaut, and stars Natalie Portman as fictional character Lucy Cola.
Hawley and cinematographer Polly Morgan, ASC, BSC tell Lucy’s story with ambitious visual flourishes. The creative team chose Panavision’s DXL2 camera and optics to bring this astronaut’s disintegrating mindset to the big screen.
“Written into the script was a lot of magical realism that charted Lucy’s emotional decline and helped the viewer to go on this journey with her in a very visual way,” Morgan says. “There was a lot for a cinematographer to tuck into.”
Most notable is a bold use of aspect ratio to chart the central character’s emotional freefall.
“When we find Lucy in space, she has this sense of immense freedom and the screen reflects that in widescreen format,” Morgan explains. “When she returns to Earth, she feels a sense of claustrophobia, of everything constricting her, so that’s illustrated in the frame itself where we move to a squarer 4:3. It echoes her breathing. When she feels relaxed and free with a sense of wonder the screen opens up to reflect that, and when she feels overwhelmed and trapped, the frame boxes her in.”
The transitions between aspect ratios were suggested by visual cues in the script. “For example, as Lucy goes through a doorway, we’d use the darkness of the frame to take us from widescreen to 4:3. Or we’d match the light of the space shuttle entering the Earth’s atmosphere with a bright light bulb at a doctor’s office to switch from widescreen back into 4:3.
“We discussed the transition elements a lot during pre-production so it would work without jarring the audience out of the story,” she says. “Noah and I had used aspect ratio changes on Legion to complement the story. Here we used light and darkness and set design as well as camera moves to translate the concept smoothly to the big screen.”
The desire to convey the central character’s altered state through aspect ratio and distortion convinced Morgan to choose anamorphic glass. In turn, that meant selecting a camera which would enable them to crop 4:3 from anamorphic 2.39:1 widescreen without losing detail in the image.
“We tested a number of large format cameras but DXL2 seemed the perfect choice,” she says. “Shooting on the MONSTRO’s 8K sensor allowed for both anamorphic squeeze and extraction of 4:3 from the 2.39 capture for a 2K deliverable. We shot REDCODE raw at 4:1 compression and 6:1 when we went to high speed.”
She adds, “I was really impressed with not only the sensitivity of the native 1600 ISO chip and its latitude but also its color rendition which felt very organic. Everything just pointed to using the DXL2.”
In order for Morgan’s choice of G Series Primes to work with the format, she worked with Panavision’s Guy McVicker to customize the set.
“Panavision adjusted the G Series to cover the large sensor, but it still optically stretched the image to the limit,” she says. “I fell in love with the look because when you took the 4:3 extraction from the 2.39 it gave this interesting effect where just the top and bottom of the frame had focus fall off and where cropping in at the sides helped us to focus sharpness in the center – to punctuate the fact that Lucy is not seeing clearly and her world is distorted. I felt the lens gave a very painterly quality to the image and a softer, more feminine approach to the story. The optical artifacts helped deliver that.”
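The geometry of that extraction is easy to check with a few lines of arithmetic. The sketch below assumes a 4320-photosite-tall active area (the MONSTRO sensor’s full height) and a simple centre crop; the production’s actual framing charts and desqueeze workflow will have differed.

```python
# Back-of-the-envelope: how much of a 2.39:1 frame survives a 4:3 crop.
# Assumes a simple centre extraction from a 4320-pixel-tall frame; the
# production's actual framing and desqueeze workflow will have differed.

frame_height = 4320                      # MONSTRO sensor height in photosites
widescreen_width = frame_height * 2.39   # desqueezed 2.39:1 width
crop_width = frame_height * 4 / 3        # width of the 4:3 extraction

retained = crop_width / widescreen_width
print(f"4:3 crop keeps {retained:.0%} of the widescreen width")   # ~56%
print(f"crop is {crop_width:.0f} x {frame_height} px, well above a 2K deliverable")
```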
Since a portion of the story is set in space, with several sequences requiring visual effects, Morgan decided to shoot these spherically and at the full 8K resolution.
“Scenes of NASA’s underwater training facility were going to require particularly heavy VFX, so after some debate we decided to approach that dry for wet and to shoot it clean, full sensor, and spherically using Primo 70s. We added anamorphic optic effects to match the G series in post.”
She also deployed spherical 35-200mm glass from Panavision for an infinite zoom effect to convey the out of body moment when Portman’s character hears news of her grandmother’s stroke.
“We shot still plates at Lucy’s house at night and then down an east-facing street through dawn into morning so you can see the sun rising in the plates,” Morgan explains. “VFX stitched those together to create the infinite zoom. We then shot Natalie standing in front of an open elevator against blue screen, into which we comped the infinite zoom plates. We created interactive moving light to tie Natalie into the plates and created movement by placing her on a dolly which is then pulled back and moved down a hallway in the hospital into her grandmother’s room. The audience feels as if they are being pulled in a continuous shot from the house, down the street, and into the hospital.
“The idea is that when you suffer traumatic news, time kind of disappears. When you think back to how you got somewhere you can’t remember the details. It’s one of the magical realism elements we used to illustrate Lucy’s mental state.”
The film is designed in primary colors – red with blue and green reflecting the natural world. “We wanted to have quite a vibrant color palette to contrast with the cool tone throughout the film. That was established on set and continued into the DI.
“We really tried to protect highlights and have them roll off and to expose with quite subdued tones which was helped by the sensitivity of the DXL chip,” she adds.
Morgan notes her relationship with Panavision began early in her career. “I would go to Panavision in my spare time and label camera cases and filters for any production that would let me,” she recalls. “I’ve grown up with Panavision and I’ve always been a fan of their optics, so it was natural to turn to them for Lucy in the Sky.”

Friday 4 October 2019

Ben Davis BSC uses Blackmagic Micro Camera For Captain Marvel

British Cinematographer

Captain Marvel, the twenty-first instalment in the Marvel Cinematic Universe and the first with a female lead superhero, features action sequences shot using the Blackmagic Micro Cinema Camera.
Co-directed and co-written by Anna Boden and Ryan Fleck, the Disney release is the fourth Marvel film lensed by cinematographer Ben Davis BSC after Guardians Of The Galaxy (2014), Avengers: Age of Ultron (2015) and Doctor Strange (2016).
Set in the mid-1990s, Captain Marvel follows Carol Danvers (Brie Larson), a former US Air Force fighter pilot, as she turns into one of the galaxy’s mightiest heroes. She joins Starforce, an elite military team, before returning home with new questions about her past and identity when the Earth is caught in the centre of an intergalactic conflict between two alien worlds.
“There were two major action sequences in particular in which I planned cameras to be rigged to vehicles, go-karts and aircraft, and I was really searching for that magic combination of size and quality – basically as small as I could get, with the best quality I could find,” Davis explains. “I tested various options and the Blackmagic Micro Cinema Camera was my favourite.”
The Micro Cinema Camera’s Super 16 sensor records RAW internally and comes with a Micro Four Thirds lens mount. “Having a fixed lens size where I couldn’t change the focus or set the exposure wasn’t going to work on this film. I wanted far more control over exposure, and the ability to shoot at a good ASA rating.”
Above all, Davis wanted to cut the footage in with the Alexa 65, the principal camera system used on the movie. “Other cameras of this size offer quite a sharp electronic look, whilst the Blackmagic Micro has a really good dynamic range. I knew from having seen the dailies that the images and colour fidelity to our look would work very well together with the Alexa 65.”
Davis framed for 2.40:1 whilst protecting up to 1.85:1 for IMAX distribution, “at as high a resolution as we could get without blocking-up our post pipeline with data,” he says. He shot in 1080 60p using a small set of off-the-shelf lenses, capturing on-board in RAW 12-bit log to SD cards.
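A rough data-rate calculation shows the sort of load that puts on the cards. The figure below is for uncompressed 12-bit 1080p60 and ignores container overhead and any RAW compression the camera applies, so treat it as an upper-bound sketch rather than the camera’s specified rate.

```python
# Upper-bound estimate for uncompressed 12-bit 1920x1080 @ 60fps RAW.
# Ignores container overhead and any in-camera RAW compression.
width, height, bit_depth, fps = 1920, 1080, 12, 60

bytes_per_frame = width * height * bit_depth / 8      # ~3.1 MB
mb_per_second = bytes_per_frame * fps / 1e6
print(f"{bytes_per_frame/1e6:.2f} MB per frame, ~{mb_per_second:.0f} MB/s")
# ~187 MB/s uncompressed, which is why fast cards and/or compression matter
```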
“The mistake I’ve made before when using small action cameras is to put a big lens on them, with a protective cage and a motor to drive focus, a monitor to frame it and a transmitter to transfer the information, because in the end you’ve defeated the very reason you were using it in the first place,” says Davis.
“So, this time we kept the cameras as small as possible, breaking out the transmitter into a backpack worn by our camera operators, stripping the unit down to one cable. The camera’s expansion port gave us lots of options for creating rigs for remote operation and monitoring. The fact you can record in-camera to data cards also helped keep the camera nice and small.
“We also built cages for the cameras, but in the end made little use of them. The structure of the body is so solid you don’t need to cage it in my opinion.”
Davis deployed up to three cameras, variously clamped into place or mounted on arms on-board the vehicles, with a fixed focus. On one occasion he used a video assist for monitoring, but would usually check footage after the sequence had run, make any adjustments, and reset ready for another take.
“That’s where the camera came into its own,” he remarks. “They’re very quick to rig. We’d often run two or three at the same time, particularly on the sequence involving a go-kart track. We just lined them up, switched on, pressed go and recorded. There’s a lot of vibration and heavy hits on those karts, but there was no corruption of the files at all.”
 The Blackmagic Micro Cinema Camera was used by Davis and the film’s second unit throughout the shoot, which largely took place on location in LA and Louisiana between March and July 2018.
“We always had our wireless kart ready-to-go and I’d often grab one of the Blackmagics to get a particular shot,” he says. “The bar for action sequences has been raised so high that, for these to work, you have to be very versatile with the camera. I want to put the audience in amongst the action, so having a small camera you can rig very quickly is great for that. That’s what I like about action cams like the Blackmagic. They can get you into places you can’t always go with ordinary cameras.”

Behind the scenes: Joker

IBC
Production designer Mark Friedberg and editor Jeff Groth explain how they mapped Manhattan to fit the warped and rotting mind of the clown’s alter ego.

You can’t separate the psychology of the Joker from the psychogeography of Gotham in the new comic book origins story. The destruction of professional clown and wannabe stand-up Arthur Fleck’s personality is rooted in the dysfunctional, dirty and decaying city – best known as the fictional home of Batman.
“Gotham oppresses Arthur as much as anything in the film,” says Mark Friedberg, the film’s production designer. “This version of Gotham is also a version of Joker himself.”
It is also an evocation of New York from the early 1980s when Friedberg and director Todd Phillips were growing up in the city.
“It’s a world we understood personally,” he says. “When I first sat with Todd I pitched an unforgiving view of Gotham. Gritty. Hard. The version of NYC that Travis Bickle and Rupert Pupkin prowled. I didn’t think we should stylise the world, particularly. It should look like we were a crew that tumbled out of a van and just started shooting.”
Scripted by Phillips and Scott Silver, Joker introduces us to Arthur Fleck, a character marked by a sort of delusional naivety and optimism. He wants to bring laughter to the world but the failing city continues to drag him down.
“It really smells, people are really harsh and it’s hard,” Friedberg says. “Arthur lives in the same dehumanising place as the masses, yet for some reason what affects us affects him more.”
In envisioning Gotham, it helped Friedberg to map it over the neighbourhoods of his hometown, giving him a more authentic sense of history and place to guide his imagination.
“There is garbage everywhere, traffic, the police are corrupt and even the people keep beating him up. Physically we looked for areas that still showed decaying buildings and vacant lots. We also looked for areas with overhead trains that block out the sun in a way that also bears down on Arthur. We tried to make the city labyrinthine. He is a rat in a maze.”
For Arthur’s home neighbourhood, for example, Friedberg chose the South Bronx. “It’s hillier than Manhattan, which helped distinguish Gotham from NYC. The stairs that Arthur climbs are a burden at first but ultimately become his dancing stage when he becomes Joker. That scene is both exciting for him and creepy for us. A very dark Busby Berkeley.”
The filmmakers also had to make their film tonally distinct from the Gotham of Warner Bros.’ DC Extended Universe, since Joker is not, as yet, a continuation of that franchise.
“We didn’t look at any other comic movies at all,” Friedberg says. “We looked at Taxi Driver, The King of Comedy, Network, Midnight Cowboy. The most important thing was that the stakes are real but where we might get upset to the point of rage, at some point we pull back into civilised society. Unfortunately for Arthur, he is mentally ill and since there is zero societal support for him he turns to his worst instincts. Even though this is an invented environment, it’s a place we understand intimately, with its amorality and the gravity of what people are facing.”
After surveying the city, Friedberg started on concept art inspired by research into TV ads from the period as well as tabloid stories and photography.
“I pitched the idea that the film should feel as doc as possible and that our sets should feel very tactile. It goes back to Arthur’s arrested way of interacting with the world, thinking about it almost like a child running their fingers along a wall. But also, that’s the whole story of urban America – a tapestry of textures stitched through history, painted over or ripped down and sanitised and replaced with cold, smooth uniformity as our cities turn to glass.”
Dark nights
That idea played into lighting textures too. Most of the lighting on set came from practical sources that were rewired into programmable LEDs so that DP Lawrence Sher and his crew could adjust them from an iPad. For exterior scenes, the LED street lights were replaced with ‘70s/’80s era sodium vapor bulbs, accentuating the harsh look.
“We used lots of tungsten and fluorescent – as oppressive and miserable as any concrete wall – built into the sets and wired to dimmer boards,” Friedberg explains.
Sher often shot with a narrow depth of field on Alexa 65, isolating Arthur in his environment – an effect augmented by shooting in 1.85 to pen him in just a little more.
Scenes set in a Johnny Carson-style TV studio, built within a sound stage, were shot with seven cameras (Alexa 65 A and B camera bodies, an Alexa LF and four Alexa Minis) hidden inside props which looked like studio cameras of the 1980s, all so that the angles could be cut together as if from the actual TV show. Betacam cameras were used to shoot TV news footage and even VHS was dusted off for comedy club footage of Arthur performing.
It may be a coincidence that Friedberg’s work on films as diverse as Synecdoche, New York; Wonderstruck; The Amazing Spider-Man 2 and If Beale Street Could Talk all features the city, but there’s clearly something about the Big Apple that makes it so malleable to different stories.
“It’s a place where all the world has come in a grand experiment to live differently by each of their own culture and where we live as one,” he observes. “I’ve made Vietnam, Detroit, other planets here. I think you could make a western in Central Park if you needed to.”
Indeed, if Friedberg weren’t a production designer he says he’d be a New York cabbie or a detective.
“I am of this place, made of it. I used to think you could blindfold me and drop me anywhere and I’d be able to tell where I am from the sounds and smells. Now I’m not so sure, but there are very few people who have seen and experienced as much of New York as I have. I’ve been driving around this city since before I had a licence. I used to teach a class called ‘My Best Design Tool is My Car.’ It’s where I start all movies and it’s where I start to see.”
Getting inside Joker’s head
Editor Jeff Groth also approached Joker as a character study but his concentration was on Arthur Fleck.
“We’re not making a Joker movie but a film about the person who became the Joker,” Groth underlines. “Key to understanding the character is to know that he is wearing this mask of Arthur. He is ultimately always destined to be Joker but when he puts on the Joker mask he is, in fact, taking off the mask of Arthur.”
The response to the film has been schizophrenic. Lavished with praise for daring to be the most political of superhero movies, and winner of the Golden Lion for best film at the Venice Film Festival, it has also attracted criticism for its glib portrayal of mental illness and its sympathy for the devil.
“Because he is the main character you have to be sympathetic with him and his situation to even watch the movie,” Groth says. “The character is kind of romantic. He doesn’t want things to be the way they are but the world dumps on him and it gets worse as he discovers more about his life. At times he is human, and at times homicidal and we’re showing the complexity of the emergence of an evil that happens over a long period of time.”
Much of Groth’s work on the film was figuring out which of Joaquin Phoenix’s many exceptional takes to choose to highlight Fleck’s transformation.
“I’d hold close-ups on Arthur a little longer or remain in wider shots when the consequences of his personal evolution begin to erupt around him. Joaquin is giving an immeasurable amount of himself to the role. You see the progression of his performance in any given shot, so one of my guidelines was to stay out of the way of what he was doing by not overcutting. It’s challenging when you have all these puzzle pieces and several fit in the right spot.”
Send in the clown
In an unusual move, Groth cut the movie as consecutively as possible, working on set for much of principal photography. “I found this helpful because you can see the progression of the character and story, from beginning to end,” he says.
One pivotal scene in Fleck’s transformation is when he first puts on white make-up but is yet to fully add the Joker’s face. He says, “It encompasses so much emotion and tension, horror, and humour. His face is so beautiful and at the same time kind of tragic.”
He also picks out the scene when Arthur comes out of the nightclub with erstwhile girlfriend Sophie (Zazie Beetz) to the song ‘Smile’ by Jimmy Durante. “The lightness and happiness to that scene doesn’t happen elsewhere in the movie.”
‘Smile’ is one of a number of ‘on the nose’ tracks written by Phillips and Silver into the script. Others include Frank Sinatra numbers ‘That’s Life’ and ‘Send in the Clowns’, Fred Astaire dance tune ‘Slap that Bass’ and Cream’s ‘White Room’. The choice of 1972 single ‘Rock ’n’ Roll Part 2’ by Gary Glitter has attracted most controversy and seems designed to stir up a reaction.
“It is time appropriate – it wasn’t really known at that point [when the film is set] what [Glitter’s] thing was,” Groth reasons.
A better defence is when Groth adds that the track comments on Arthur’s deranged character. “I guess [everything we now know about Paul Gadd] feels like what [Arthur] listens to inside his head. He doesn’t know the future, but he knows it is disturbing.”
Groth’s own research included watching Patton, a 1970 biopic about a World War II general starring George C Scott. “It’s a great character study with a terrific central performance and score by Jerry Goldsmith. I didn’t need an excuse to rewatch The French Connection and we have a sequence on the subway that recalls its chase scene. Popeye Doyle is an extraordinarily unhinged hero. All of these influences play into your head as you make a movie.”
He adds, “You are seeing a city which seems like New York of the late 1970s but it is never quite the same.”
Even the sound of emergency sirens, part of the background noise familiar to any visitor to the city, is deliberately off-kilter.
“We played around with European sounding sirens before our sound supervisor [Alan Murray] created his own siren. The sirens are not part of the story, they are just background, but things like this may tip your subconscious into believing that something isn’t right.”