Tuesday 31 August 2021

What Frames What?

NAB

The famous dictum “the medium is the message” remains essentially as relevant in the digital era as it was to theorist Marshall McLuhan in the televised world of the 1960s.

https://amplify.nabshow.com/articles/what-frames-what/

Could it also be that the content itself matters even less than it did then, with worrying consequences for our collective sense of society and truth?

Michael Sacasas, blogging at The Convivial Society, thinks so. He suggests that the age of social media and the massive proliferation of images has led to a degradation of the cultural power of the image.

He has this thought-provoking idea: “As we approach the 20th anniversary of the September 11th attacks, I’m tempted to suggest that the image of the towers burning might be the last example of an iconic public image with longstanding cultural currency. As a simple thought experiment, consider how different the documentary evidence of 9/11 would be if the event had occurred ten years later after smartphones had saturated the market.”

He argues that 9/11 marked the beginning of the end for the age of the image. Specifically, it is the end of the age of the manufactured image that speaks compellingly to a broad swath of society.

Post-9/11, “the image economy began to collapse when the market was flooded with digital representations.”

Theorists before McLuhan, notably Walter Benjamin in “The Work of Art in the Age of Mechanical Reproduction,” pointed out that artworks lost their ‘aura,’ or authority, when they were no longer unique but could be reproduced by anyone and displayed anywhere.

Sacasas extends this by saying that, in turn, the image loses its own cultural standing in the age of its digital manipulability.

Images, he writes, “are no longer received from a class of professional storytellers who have monopolized the power to produce and interpret the symbolic currency of cultural exchange. The image-making tools have been democratized. The image itself has been demystified. Every image we encounter now invites us to manipulate it to whatever end strikes our fancy.”

The point applies to all pre-digital media. Television, film, radio, print — all are taken up and absorbed by digital media either because they themselves become digital artifacts (digital files rather than, say, rolls of film) or because they become the content of digital media feeds.

“This means, for example, that a movie is not only something to be taken whole and enjoyed on its own terms, it is also going to be dissected and turned into free-floating clips and memes and gifs. What’s more, the meaning and effect of these clips, memes, and gifs may very well depend more on their own formal structure and their use in our feeds than on their relation to the film that is their source.”

Posing the rhetorical question, “What frames what, the televisual medium or the digital?” Sacasas contends that the answer is pretty straightforward: increasingly, digital media frames all older forms, and it is the habits, assumptions, and total effect of the digital medium that matter most.

Why does this matter? His argument is couched in the academic language of media ecology, but he is underlining the point that McLuhan made: that the medium of communication matters as much as, if not more than, the content being communicated through it.

Take the recent events in Afghanistan. How long will those images remain on the news agenda? How much cut-through did they genuinely have with citizens around the world, given the fleeting attention span of the news cycle?

“It’s the sense that nothing seems to get any durable traction in our collective imagination. I’ll provisionally suggest that this is yet another example of the medium being the message. In this case, I would argue that the relevant medium is the social media timeline. The timeline now tends to be the principal point of mediation between the world and our perception of it.

“Its relentless temporality structures our perception of time and with it our sense of what is enduring and significant.”

 


Pay TV Habits Die Hard

NAB

We’re looking at a pretty long and slow death spiral for the current linear TV ecosystem. The end will come in one of two ways.

https://amplify.nabshow.com/articles/pay-tv-habits-die-hard/

“At some point over the next 10 years, cord cutting will have grown to a point where it’s no longer profitable for the people running the current linear pay TV system to keep it going the way they have for the past 40-odd years,” says Alan Wolk, co-founder and lead analyst for media consultancy TV[R]EV, writing at Next TV. “That doesn’t mean linear ‘cable TV’ will implode or that the current players will disappear. It just means that things will be different.”

TV habits die hard. Because of that, the division between TV and OTT will blur until it is indistinguishable and won’t matter to people outside of the tech and media analytics biz.

Guy Bisson, co-founder and director at Ampere Analysis, made a similar point to me recently. “We’re at the stage where most households have multiple paid streaming services and multiple ad-supported streaming services, so all these distinctions between AVoD, SVoD, linear, BVoD and FAST are irrelevant,” he said. “As far as the consumer is concerned it’s all television. So, at that point we just need to call it TV. It is just migrating from one means of delivering free-to-air TV to another. The migration is a tech shift, not a business or economic shift.”

Wolk anticipates one of two things will happen in the US market. In the first scenario, linear TV’s main players (Comcast, Discovery-Warner, Disney and ViacomCBS) will begin to realize that there’s no profit to be made from creating separate programming slates for both streaming and linear and will gradually start to integrate them.

“Some may do it gradually, some may do it suddenly, it really depends on how much the numbers and the moves their competitors make will take them by surprise.”

The second possible outcome is that the various MVPDs and vMVPDs that keep cable distribution going will come to a similar conclusion. Wolk says, “They’ll slowly begin to realize that the amount of money they spend on carriage and retrans fees and maintaining their set top boxes and apps isn’t being offset by subscription fees and ad revenue and that it no longer makes sense to view the whole thing as a loss leader to get people to sign up for broadband.”

 If the various parties are smart, die-hard linear TV consumers won’t really notice the change, which will be presented as a series of “upgrades” that still give viewers access to hundreds of linear channels. Those channels will, of course, be part of the various FASTs (Free Ad-Supported Streaming TV), not separate networks, but a well-designed UX will make that largely unnoticeable.

“New streaming bundles will take the edge off the confusion of à la carte, which will still be available as an option, but an increasingly unattractive one, devoid of the price come-ons the various services will have on offer for viewers who sign up for yearlong bundles.”

Over-the-air broadcast is the last piece here, says Wolk, and while it’s easy to predict that it will just fade away, that’s an unlikely outcome for both legal and economic reasons.

“Local broadcasters have been making a to-do about ATSC 3.0 for a while now, but it remains to be seen if there’s much of a market for over-the-air TV at all once pay TV delivery goes to full-time streaming.”

The bottom line is that traditional pay TV won’t disappear. It will merely be reincarnated in a brand-new form, one no more dramatically different than the form that arose when the industry shifted from broadcast to cable back in the 1980s.

 


The Tech That’ll Take Streaming Into the Future

NAB

Content may be forever king, but our enjoyment of it goes hand in hand with tech developments. That’s been the case since at least Gutenberg (Johannes, not Steve) invented the printing press circa 1450, or Edison and the Lumière brothers et al birthed the film camera and hence cinema.

https://amplify.nabshow.com/articles/tech-thats-taking-video-streaming-into-the-future/

Stefan Lederer, co-founder and CEO of Bitmovin, wants to shout out some of the more recent tech developments lest they go under the radar.

“It’s the overall experience driven by technology that delivers outstanding, high-end streaming across all devices that determines how long [people] stay [on streaming services],” he argues at OTTVerse.

His back-end technology picks include SMPTE ST 2110, which he says has modernized traditional broadcast operations with new digital workflows.

“The most recent adjustments allow for greater flexibility compared to SDI connectivity by specifying how to deliver compressed and uncompressed video and audio streams. This will allow broadcast content providers — once the de-facto format for television — to compete more readily with the leading streaming video platforms. It also means that consumers will have additional choices in their content selection, offering more options than ever before.”
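To put numbers on the SDI comparison, here is a back-of-the-envelope sketch (my own illustration, using standard broadcast parameters rather than figures from Lederer’s article) of the network capacity an uncompressed ST 2110-20 video stream demands:

```python
# Rough bandwidth estimate for uncompressed SMPTE ST 2110-20 video.
# Illustrative only: assumes 4:2:2 10-bit active picture and ignores
# RTP/UDP/IP packet overhead (a few percent in practice).

def st2110_video_gbps(width, height, fps, bits_per_pixel=20):
    """4:2:2 10-bit averages 20 bits per pixel (10 luma + 10 shared chroma)."""
    return width * height * fps * bits_per_pixel / 1e9

for label, (w, h, fps) in {
    "1080p50": (1920, 1080, 50),
    "1080p59.94": (1920, 1080, 59.94),
    "2160p59.94 (UHD)": (3840, 2160, 59.94),
}.items():
    print(f"{label}: ~{st2110_video_gbps(w, h, fps):.2f} Gb/s")

# 1080p59.94 lands near 2.5 Gb/s and UHD near 10 Gb/s, which is why
# uncompressed ST 2110 runs on 10/25 GbE fabrics where SDI used dedicated
# coax, and why the compressed delivery options mentioned above matter.
```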

Do or DAI

AVOD is booming, but it would not be possible without the advertising component, in particular Dynamic Ad Insertion (DAI).

As Lederer points out, DAI may not be new but “advancements in latency and customization capabilities have allowed streaming services to seamlessly insert ads without any technical issues that could hinder the viewing experience.”

Those innovations can be attributed to the artificial intelligence built into each part of the DAI workflow, he says. “By using consumer information more efficiently, the latest DAI tech allows streaming content providers to deliver ads that are relevant to both the viewer and the content the viewer chooses to watch.”
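To make the mechanics concrete, here is a minimal sketch of what server-side ad insertion does conceptually. It is not Lederer’s or any vendor’s implementation: the segment names and the profile-to-ad mapping are invented, and real systems key off SCTE-35 cues in the source stream.

```python
# Minimal sketch of server-side dynamic ad insertion (SSAI): a per-viewer
# playlist is stitched by replacing an ad-break cue with an ad pod chosen
# for that viewer. Segment URIs and viewer profiles are hypothetical.

CONTENT = ["show_seg1.ts", "show_seg2.ts", "AD_BREAK", "show_seg3.ts"]

AD_POOL = {
    "sports_fan": ["ad_sneakers_1.ts", "ad_sneakers_2.ts"],
    "foodie": ["ad_mealkit_1.ts", "ad_mealkit_2.ts"],
}

def pick_ads(viewer_profile):
    """Stand-in for a real ad decision server."""
    return AD_POOL.get(viewer_profile, AD_POOL["sports_fan"])

def stitch_playlist(content, viewer_profile):
    playlist = []
    for seg in content:
        if seg == "AD_BREAK":
            playlist.extend(pick_ads(viewer_profile))  # splice the pod in-line
        else:
            playlist.append(seg)
    return playlist

print(stitch_playlist(CONTENT, "foodie"))
# ['show_seg1.ts', 'show_seg2.ts', 'ad_mealkit_1.ts', 'ad_mealkit_2.ts', 'show_seg3.ts']
```

Because the stitching happens server-side, the player sees one continuous stream, which is what makes the insertion “seamless” in the sense Lederer describes.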

Fine Tuning the Stream

OTT services also benefit from the development of Context Adaptive Content Delivery Workflows (commonly known as CAD) that employ multiple methodologies to produce an optimal end-user experience. The adaptive bitrate (ABR) ladder, which compresses (encodes) and delivers content at the ideal resolution for a given bitrate, is the best-known method.

“The true streaming innovations are coming from various technologies that apply ABR throughout their workflows, such as future codecs like AV1, HEVC, and VVC,” Lederer says. “Better still, Cloud-Based Per-Title Encoding makes it possible to provide viewers with top-notch streaming quality while reducing the costs of storing and streaming content.”
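The per-title idea is easy to sketch. The toy example below is my own, not Bitmovin’s algorithm (real systems run test encodes and read points off the measured rate-distortion curve); it simply scales a default ladder by a per-asset complexity score so that simple content gets cheaper rungs:

```python
# Toy per-title encoding ladder: scale a default bitrate ladder by a
# per-asset complexity score (0 = static slideshow, 1 = dense sports action).

DEFAULT_LADDER = [  # (height, kbps) for a typical ABR ladder
    (2160, 16000),
    (1080, 6000),
    (720, 3200),
    (480, 1400),
    (360, 700),
]

def per_title_ladder(complexity):
    scale = 0.5 + 0.5 * max(0.0, min(1.0, complexity))  # 0.5x..1.0x of default
    return [(height, round(kbps * scale)) for height, kbps in DEFAULT_LADDER]

print(per_title_ladder(0.2))  # talking-heads doc: every rung ~40% cheaper
print(per_title_ladder(0.9))  # live sports: close to the default ladder
```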

High Dynamic Range

The development of technologies capable of getting more of the contrast and nuances of light and color into the picture is, for many, a more dramatic improvement than upping pixel count.

Lederer focuses here on Dolby, one of the companies that has helped take HDR into the cinema and home streaming.

While Dolby Atmos uses “superior audio depth and spatial metadata” to extend “aural bliss,” Lederer says, Dolby Vision provides the HDR features “necessary for rich and vibrant visuals.”

“Instead of streaming in the stone age — or at the very least limiting consumers with a mishmash of quality that varies per service — it is now possible for all streaming platforms to deliver prestige results. This is as exciting for consumers as it is for the hardworking content developers, who are now free to deploy their entertainment without any technological limitations.”

I might still be inclined to highlight the role of tech vendors across the industry in getting higher-resolution pictures to screen — from 4K cameras and encoding tools to device displays capable of showing them.

 

 

Girl and the Hood: In The Heights

British Cinematographer

Alice Brooks captures the vibrant spirit of New York’s immigrant neighbourhood In The Heights

https://britishcinematographer.co.uk/alice-brooks-in-the-heights/

When Lin-Manuel Miranda debuted his theatrical musical sensation Hamilton in 2015, only stage aficionados were aware that he had created a previous Broadway smash with his semi-autobiographical work In the Heights. Now, after a year-long delay due to the pandemic, the film version of his stage musical of the same name is on general release. Directed by Jon M. Chu (Crazy Rich Asians) from a screenplay by Quiara Alegría Hudes, who wrote the original book, the film is photographed by Alice Brooks. One creative goal was to retain the dizzying energy and street-level vibrations of the Tony-winning stage production.

Brooks and Chu studied film at the USC School of Cinematic Arts and went on to make the web series The LXD [aka The Legion of Extraordinary Dancers] in 2010-11 and the feature Jem and the Holograms (2015), both of which were dance- and music-based stories. Their previous project together was the drama Home Before Dark (pilot and season one) for Apple TV+.

The opening 10 minutes of In The Heights, leading to the main title, comprise a self-contained story introducing the ensemble cast led by Anthony Ramos, Corey Hawkins and Leslie Grace, going in and out of song and painting a picture of New York’s Washington Heights neighbourhood.

“Jon and Lin Manuel [also the film’s producer] felt they needed to shoot as much as possible in Washington Heights itself,” Brooks says. “The characters express their hopes and dreams, their anxieties and fears not just in song and dance but through the environment around them.”

In the Heights is, after all, a story of immigrants’ emotional struggle between home and place, cultural identity and the strength of family and community. The story is set over the course of three hot summer days, involving characters in the largely Dominican and Puerto Rican neighbourhood with which Miranda is intimately familiar. Brooks joined Chu and production designer Nelson Coates on extensive location scouts in March 2019.

“We started looking at all these beautiful architectural spaces in New York, including the Palace Theater in Washington Heights, but they all felt too grand in a way. Jon did want some theatricality but we didn’t want to tip the scales into fantasy.”

Brooks’ filmic references for the shoot took this aesthetic into account. Although she is a lifelong fan of musicals like My Fair Lady, it was classic New York stories like Moonstruck and Do the Right Thing that were her touchstones.

“Anamorphic was an immediate intuition for both Jon and I,” she says. “We loved the idea of large format combined with anamorphic. While this is a huge musical spectacular it’s also a very personal story and we loved the depth of field.”

Nonetheless, anamorphic wasn’t a slam dunk. They tested sphericals and looked at the Alexa LF and Alexa Mini LF before alighting on a Panavision package of the DXL2 (7K, 6:5, 2x anamorphic, 2.40) with G-Series lenses customised for the occasion.

“The Mini LF had only just been released and came just a little too late for our production,” she says. “Panavision has supported both Jon and I throughout our careers. Jon has this letter he sent to Panavision 18 years ago asking to shoot on a Panavised F900 camera, and a positive reply from [then president] Bob Harvey. They have just been an amazing resource.

“To me the DXL2 and G-Series felt like the right choice in terms of shooting long New York city streets with the backgrounds out of focus. The bokeh is beautiful.”

Much of the movie is shot handheld, and A-camera operator Mark Schmidt apparently felt the DXL’s balance was very similar to a 35mm camera. “He felt it grounded him in a way that some lighter cameras don’t,” Brooks says.

Alongside Schmidt, Brooks’ camera department included 1st AC Basil Smith and 2nd AC Marvin Lee; B-camera 1st AC Gavin Fernandez; and C-camera/VFX operator Denise Bailie. Her gaffer was Mike Hoffman and her key grip Kevin Lowry.

The light and look of The Heights

When the DP first arrived in Washington Heights she began to notice the things that differentiated the area from adjacent blocks.

“It has a very different colour,” she observes. “The sunlight is different to anywhere else I’ve been in the world. It’s not a tropical summer, not an LA summer, it’s definitely not a Hawaiian summer. It’s an urban summer but it’s not even like the rest of New York City. Washington Heights has these grey and yellow brick buildings whereas the rest of Manhattan has red brick and new buildings. Washington Heights buildings are much lower and the way sunlight hits them really influenced my choices in the DI.

“We started colouring the movie in the middle of the film with a number called ‘Carnaval del Barrio’ which takes place in a courtyard. I could have gone with full-out saturated primary colours in that scene in the DI, I could have cranked it all up, but we worked really hard not to do that. I kept talking to [colourist Stephen Nakamura, Company 3] about bringing it back and we ended up desaturating the whole film to achieve a colourful but pastel feeling.”

Nakamura and Brooks created a LUT based on the Light Iron film LUT. “We started with that as our base and then tweaked it. The final version is better than our dailies, as you’d expect, but it wasn’t a huge shift from our dailies to the final look. When they screened the movie our colour was already 90 percent there.”

They shot for about four weeks on location, with another period on a soundstage where they recreated the film’s main intersection at 175th Street and Audubon Ave as well as several interiors. To match the studio lighting to the real locations, she used Arrimax 18Ks, Arri S60s and an array of 20K Fresnel and 20K Mole beams to accurately recreate the light of the sun and sky.

“We were able to get access to buildings all the way up and down the blocks in both avenue and street directions. We used a lot of Arri SkyPanels with S360s on tops of buildings for night work.”

She deployed SkyPanels to create firework effects for the musical number ‘Blackout,’ which builds to a critical narrative moment (at the end of Act 1 in the play). “Our dimmer board operator was able to dial in the colours of the fireworks for different streets and see them play off the buildings. It was fantastic.” For another number she lit a subway tunnel with Astera LED tubes and fixtures obtained from a theatrical lighting house in New Jersey.

Most of the musical numbers in the film were pre-recorded, save a few songs that were recorded live. ‘Champagne’ was one such tune, executed on set as a ‘oner’ that took fourteen takes to nail. This is a two-person number between Usnavi, the owner of a bodega, and his would-be flame Vanessa, who works at a beauty salon. It is set in the apartment of Usnavi’s abuela (grandmother), a narrow railroad townhouse, 14ft wide by 30ft long, with a window at either end.

“We wanted it to feel almost like a yo-yo as these two people come together and pull apart,” Brooks describes. “The apartment had a mirror built into its fireplace and we used that, together with another piece of glass in a kitchen cabinet, to catch reflections of both characters at the same time on occasion.”

“The week before we shot that, Jon and [choreographer] Chris Scott and I spent a day at the apartment location with our iPhones, walking through the space. It took maybe eight hours to figure out how we could move the camera. It’s less of a dance, more of a choreographed movement, and we used the entire house, which is three rooms interconnected so you can see into each room.

“The following day, Monday, we were due to shoot the movie’s finale outside, but it thundered and rained so much that we pivoted quickly and walked everyone through the blocking we had prepped inside the apartment.”

With Scott and Chu, Brooks forms quite a team. They had all worked on The LXD. “It really felt like this was the movie we’d been working our whole careers to make. We are very in sync with each other and have great shorthand. While Jon and I were out working on shot lists, Chris would be at dance rehearsal. We’d meet up, he’d show us his work, we’d share our ideas and we’d both revise accordingly. It was a very fluid collaboration.”

Principal photography had finished, and Brooks was already at work on her next film (another musical, Tick, Tick…Boom!, this time directed by Miranda) before Covid struck.

“I’d started the DI when I was in New York but had to finish it remotely with Stephen in LA. We had a direct stream using Sohonet ClearView Flex which I was able to view on my iPad which worked very well.”

Her personal take-away from the film is the feeling of community she found in those few blocks of the city. “I’d been there eight or nine weeks and realised I had fallen in love with Washington Heights,” Brooks says. “I’d fallen for the smells and the light and the sound and the people and realised that my job was not to make the Heights into something it wasn’t but to show the beauty that already exists there on screen.”

Brooks’ industry roots run deep: she acted in more than 40 national commercials as a child. Her father was a playwright, her actress mother introduced her to Broadway shows, and she says she spent her teenage downtime in a darkroom.

“We even lived across the street from Warner Brothers for a while. I’d watch the camera people make magic. I knew by the age of 15 that I no longer wanted to act but that I absolutely wanted to be a cinematographer.”


Monday 30 August 2021

“The Sparks Brothers:” Assembling a Musical Odyssey Turned Pop Art Documentary

NAB

“If you want to look at Ron and Russell, you have to look at them through one prism. And that prism is cinema,” says Franz Ferdinand frontman Alex Kapranos, one of the contributors to The Sparks Brothers, the new documentary about the band.

https://amplify.nabshow.com/articles/the-sparks-brothers-assembling-a-musical-odyssey-turned-pop-art-documentary/

Director Edgar Wright's debut feature doc captures the art-pop pioneers at an improbable late career high, as well as recounting the story of how they got there, asking why they aren't as celebrated as they deserve to be, and finding out how they became your favorite band's favorite band.

Their eclectic body of work, spanning 25 albums and five decades, is, among other things, inherently cinematic. The Maels, who began making music while studying film at UCLA under the influence of Ingmar Bergman and the Nouvelle Vague, create songs that present themselves as a three-minute elevator pitch for a romantic drama or a black comedy. They often use meta-narrative cinematic techniques such as whipping away the wizard’s curtain and breaking the fourth wall.

In the film’s production notes, Ron Mael compares their fractured sense of narrative to walking in halfway through a film and figuring out what’s going on (something he and Russell frequently did as children). They are also, literally, filmmakers, albeit perennially thwarted ones: projects with Jacques Tati and Tim Burton didn’t make it to the screen (though Annette, a collaboration with Leos Carax, is currently awaiting release).

The Sparks Brothers documentary is as genre-promiscuous as Sparks' discography itself, using Wright's trademark superfast edits and several styles of animation to push things along, as well as the more traditional use of archive clips and talking heads.

Wright personally conducted over 12 hours of interviews with Ron and Russell over two years, as well as interviewing numerous Sparks admirers and collaborators such as Beck, Björk, Steve Jones of the Sex Pistols, Flea from the Red Hot Chili Peppers, Mike Myers and Giorgio Moroder.

There was also the challenge of finding archival footage, some of which had never been seen before. They began with over 6,000 separate archival assets, including hundreds of full performances, boxes of personal photos, contact sheets, and 345 songs.

To bridge sections as well as illustrate anecdotes and add visual grace notes, Wright enlisted the help of animators Joseph Wallace and Greg McLeod.

“I always had the idea that because the brothers are so filmic, and interested in film, additional animation and visual non-sequiturs would be perfect,” says Wright. “I never directed a Sparks video, but I wanted to have imagery in it that would be worthy of one of their videos.”

Editor Paul Trewartha was tasked with shaping and condensing the material.

“We were working with countless formats, aspect ratios and frame rates that we were constantly interpreting as frame for frame in the project window to remove blending at every opportunity,” Trewartha tells Adobe. “My incredible assistant, Andy Laas, then reproduced this interpretation with the hi-res material after lock and completed the full conform in Premiere Pro, eye matching over 2,000 separate cuts of archive alone before feeding these mix downs out with associated XMLs to the grade. It was a lot of work, but allowed us to troubleshoot in a controlled environment before feeding it out.”

Trewartha also animated billposters, flyers and album covers in After Effects and manipulated hundreds of contact sheets directly in Premiere Pro by importing the stills as high-res files and then cutting and repositioning to bring them to life. “I don’t know how we would have achieved the final aesthetic in any other way,” he says.

All of this also helps the film visually represent the eclecticism of Sparks’ career.

As producer George Hencken says, “The typical thing about Sparks is there’s nothing typical about them, and this film reflects that.”

 


“Chaos Edits:” Art Form, Social Commentary, Neither, Both?

NAB

It’s neither the collapse of culture nor an ingenious new artform, but the trend for chaos edits is sweeping the internet.

https://amplify.nabshow.com/articles/chaos-edits-art-form-social-commentary-neither-both/

At first glance these videos shared on social media are just plain bad. They’ve no right to be gaining attention. But their very artlessness is the point.

TikTok is the natural home for these crazed, weird, incoherent sh**posts (the term for aggressively, ironically, trollishly poor-quality video posted online). Sh**posts are intentionally designed to derail discussions or cause the biggest reaction with the least effort. Sometimes they come across as ads — as one for Amazon apparently did — prompting speculation that brands were jumping on the bandwagon (Amazon denied it was real).

Writing at Vox, Rebecca Jennings notes that chaos edits “can be made up of anything the creator wants, but many share certain stylistic qualities: sped-up audio, intentionally sh**ty image or sound quality enhanced by watermarks or graininess, and disturbing or gross-out humor.”

She also lists other examples: Video of marine life against audio of gunshot sounds and the theme from Titanic, spliced together with PowerPoint transitions. A slideshow of mildly cursed images set to Aphex Twin. A series of clip art images and Jason Bateman headshots that appear to create a horror movie about wanting to have sex with Jason Bateman. An imagined vlog of opening night of Swan Lake, as told by Pyotr Ilyich Tchaikovsky and illustrated by clips of Drag Race and the sounds of Nicki Minaj.

Since the first skateboarding cat videos appeared on YouTube, everyone in conventional media — that is, anyone in professional content creation — has wondered why anyone would watch such rubbish.

Well, the joke’s on them. This kind of stuff exists as antithesis to the polished, graded, precisely cut and paced videos we’re supposed to watch.

“Perhaps it’s a pushback against the tyranny of Instagram perfection; perhaps it’s simply the logical endpoint of mass availability of video editing software,” Jennings says. “Perhaps it’s because chaos alone can encapsulate what a chronically online brain looks and feels like.”

Perhaps. Jennings herself prefers another reason for why these sorts of videos and memes are enjoying a moment in the sun. “It’s because they’re sort of cool and alt, and when you publicly share a chaos edit or a sh**post, you get to feel superior to other people who might not fully get it.”

If nothing else, the randomness and energy of these videos suggests there’s a place for anarchy on the internet, and that should be celebrated.

Vuela Embraces Virtual Post Operations with LucidLink

copy written for LucidLink

Vuela is a boutique color grading studio in Quito, Ecuador, with international ambitions. The post-production house has established a leading reputation locally, working on commercials for blue-chip brands like Ford, Chevrolet, Amstel, and Santander.

https://www.lucidlink.com/case-study/vuela-embraces-virtual-post-operations-with-lucidlink

After more than seven years working in motion graphics and post-production for advertising, with a team bringing more than 20 years of collective experience, Vuela has now expanded its facilities to include color grading with Vuela Color and established a fully cloud-based operation built on LucidLink, with aims to widen its reach across South and North America.

Needing to change up

Vuela harbored the idea of operating remotely but it was the pandemic that created the scenario for it to happen.

Explains Owner and Colorist Julian Crespi, “Before Covid, all the footage from a shoot for a commercial would be delivered to us on hard drives. It was just the conventional way of doing things, but it was not ideal for the increasingly fast-paced nature of production. Using hard drives always requires a wait for the drives to be delivered and then additional downtime to transfer. This may only be a few minutes each time, but incrementally it all adds up.”

When Covid necessitated an end to in-person meetings and made the exchange of physical media difficult, Vuela’s clients began asking for alternative ways to handle the material.

“We tried uploading media from local hard drives directly to our staff and our clients using online solutions like WeTransfer and Google Drive, even FTP, but none were reliable for our needs. You’d also have to go through the process of downloading files, uncompressing them and copying them to a project folder, and then the reverse on export. This was inefficient and wasted precious time.”

Finding LucidLink

Crespi first learned of LucidLink at the Hollywood Professional Association Tech Retreat in February 2020 (which took place in person just before lockdown), where he met Jeff Olm, the technology director at Eclipse Tech and fabled VFX artist (Titanic, The Fifth Element).

“I was lucky to have a chat with Jeff and he told me about how LucidLink enables you to mount a drive onto a virtual machine with all media hosted in the cloud. As soon as Covid forced us all to work remotely, this idea made a lot of sense to us.”

“We immediately tested LucidLink and it just worked brilliantly straight away. It quickly became a standard part of our service.”

Time-saving productivity

With Crespi, his business and craft partners as well as third-party post suppliers, client directors, agencies, and DPs all working from home, Vuela Color was able to continue working entirely remotely on projects throughout the pandemic.

“What we valued most at first was what our clients valued: the simplicity of working with LucidLink. Essentially, everyone on a project sees the same virtual hard drive on their workstation as a local drive that is always up to date. There’s no need to send any manual or video explainer. It is incredibly easy to understand and that is very valuable to our clients.”

LucidLink has been hugely beneficial to Vuela’s workflow. Working on data-heavy commercial projects with 4K RAW EXR files that average over 3 GB per shot can sap bandwidth as connections struggle to manage the load, but Vuela finds LucidLink has slashed these wait times.

“Compressing and uploading a sequence of image files using links like WeTransfer feels like it takes ages. The great thing with LucidLink is as soon as you start copying the image sequence folder to LucidLink Filespaces, the sequence starts downloading immediately at the receiver’s end. There’s no lag waiting for files to upload or download. So, by the moment your upload ends, your client already has every file downloaded on their computer. The process happens simultaneously and instantly.”

Centralized file structure

Another attribute important to Vuela is LucidLink’s file structure. “We operate to tight deadlines where everyone wants the latest version right this second. LucidLink was able to transfer files extremely fast but, crucially, in a very orderly fashion.

“When you’re working so fast and collaboratively it can be hard to keep track of it all, especially if you’re receiving material from different sources. You are at risk of duplicating files and duplicating effort by not working on the most up-to-date version. LucidLink changed our workflow overnight by unifying all our inputs and outputs, and because the process is so intuitive, the file directory remains centralized so that everyone can join in from anywhere and always be in sync with the workflow.”

LucidLink’s technology streams data on demand, eliminating the need to store unprotected copies of files on multiple devices, an approach radically different from any existing solution.

“Security has never been an issue,” says Crespi. “Our clients trust us to secure their property and we trust LucidLink to encrypt that data. It has never been an issue even once.

“All around, LucidLink has reduced the stress of working in a pandemic environment both for us as a service provider and for our clients. LucidLink is technology we can rely on.” 

Grading and finishing on DaVinci Resolve at Vuela has been entirely decentralized for months, with media in the cloud organized via LucidLink. Vuela could in theory operate from anywhere and access server-grade high-speed connections on demand. So successful has the model been that Crespi says the facility will remain virtual from now on.

“The core concept of a centralized space that is easy to share may sound simple but its impact for us and our clients has been nothing short of sensational. For Vuela, it is something that adds value to our service. When a client comes to us we can tell them that they will receive this amazing software as a standard part of our service. For us, LucidLink is a differentiating factor. Now that our clients know how it works, they expect it from us too.”

 

How the Metaverse is a Capitalist Utopia

NAB

The Metaverse is a giant cash cow. There’s no need to dig any deeper. It’s not going to be the socialist utopia that Tim Berners-Lee dreamt the original World Wide Web would be.

https://amplify.nabshow.com/articles/the-metaverse-is-a-capitalist-utopia/

If you’re a media tech titan like Disney, Epic Games, or Facebook, then staking real estate in the Metaverse is a logical extension of your current business. If you’re a content creator, then new forms of monetization — NFTs, blockchain, disintermediated payments — will be a backbone of the Metaverse, if you believe the hype. And if you’re a brand, then you have to be where the eyeballs are headed.

Picking up that latter point, Alaster Armitage-Brain, senior digital producer at ad agency Imagination, asks: what is the Metaverse and why should brands take it seriously?

Describing the Metaverse as an “experience ecosystem” he says it will open up “a whole new blended-reality world for future-thinking brands willing to boldly go.”

We have become accustomed to the blending of the real world and virtual worlds, he argues, especially since the pandemic has made many more people familiar with and accepting of the benefits of the virtual world.

“The metaverse is the next stage along this journey — joining the physical and the digital worlds in new ways. For example, an experience on the high street might have a digital twin in the metaverse.”

Currently we all connect with friends, family, and colleagues online and millions of us use “hyper-connected platforms” such as Facebook or WeChat. We can log in to any website and store our personal information and share relevant data with our connections — but it’s still a digital world, separated from the physical one.

Armitage-Brain suggests that what will happen — indeed, has already begun to occur — is that the internet will decentralize.

“The internet giants will no longer own users’ data outright; instead, the user will take full responsibility for their digital footprint. We are already seeing this with the uptake of cryptocurrency and NFT auctions which allow the ownership of digital assets to be recorded.”

Like physical cash payments transitioning to digital contactless payments, he predicts that social networking “will transition from a chat window on a website or in an app to AR moments over a dinner table at a restaurant with your friends.”

Advertisers are constantly looking for the next frontier, the next up-and-coming platform, the next “any space” where they can authentically connect with consumers.

The Drum identifies this as Roblox. The gaming platform recently received fresh attention from first-mover brands who see it as the media-tech crossover of today with the most advertising potential. Nike, Gucci, and Hot Wheels are among those investing in branded experiences on the site.

“With many younger players on Roblox, ones that will become the next generation of consumers, it makes sense that traditional brick-and-mortar brands would try to connect offline products to online worlds,” says Natalia Vasilyeva, VP of marketing at in-game advertising platform Anzu.

The field is being pioneered by media and content franchises. In-game partnerships have brought the worlds of Marvel Universe, DC Universe, Star Wars, John Wick, and Stranger Things into the Fortnite universe.

Brand marketing in the metaverse is still in its early days, says Vasilyeva, but as technology advances quickly and creativity flows, “we can expect to see more brands enter the space in exciting and engaging ways. Although some advertising such as banner ads exists in Roblox, the channel is relatively untapped.”

Expect to see an explosion of advertisers popping up in Metaverse-building platforms like Roblox, Fortnite and Minecraft, making gaming a more regular part of their media mix.

 


6G and the Possibility of a Haptic, Holographic Internet

NAB

As 5G continues its phased rollout, even the promise of mobile 8K VR experiences can seem like yesterday’s news. What if we could communicate with tactile holograms? According to researchers involved in scoping a 6G network, this is our future.

https://amplify.nabshow.com/articles/6g-and-the-haptic-holographic-internet/

In a paper published by IEEE, academics from Sweden, New Zealand, Southern California (USC), and London imagine 6G as the building block to “vastly connected societies.”

The study outlines what it calls a “high-fidelity holographic society,” one in which “holographic presence will enable remote users [to be represented] as a rendered local presence.”

That’s not so farfetched given that even now Microsoft, Google, and others are developing next-level videoconference systems designed to enable some form of holographic telepresence.

The authors note that 4G and expected 5G data rates may not enable such technologies — but that 6G might — owing to the fact that “holographic images will need transmission from multiple viewpoints to account for variation in tilts, angles, and observer positions relative to the hologram.”
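Rough numbers show the scale of the problem. The per-view bitrate and viewpoint count below are my own illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope: why multi-viewpoint holograms strain even 5G.
# Both figures are assumptions for illustration.

PER_VIEW_MBPS = 25   # roughly one compressed 4K60 stream per viewpoint
VIEWPOINTS = 1000    # tilts, angles and observer positions to cover

total_gbps = PER_VIEW_MBPS * VIEWPOINTS / 1000
print(f"~{total_gbps:.0f} Gb/s for a single hologram")  # ~25 Gb/s

# IMT-2020 specifies 5G's peak downlink at around 20 Gb/s per cell, shared
# among users, so one hologram could saturate it -- hence the 6G pitch.
```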

Another promising possibility the study teases involves what they call a haptic Internet. “We believe that a variety of sensory experiences may get integrated with holograms,” the authors write. “To this end, using holograms as the medium of communication, emotion-sensing wearable devices capable of monitoring our mental health, facilitating social interactions, and improving our experience as users will become the building blocks of networks of the future.”

Other use cases mentioned in the paper involve what they call extremely high-rate “information showers” — hotspots where one can experience terabits-per-second data transfer rates — mobile edge computing, and space-terrestrial integrated networks.

Let’s not get carried away. As report co-author Andreas Molisch cautions, “There is [still] a lot of research that needs to be done…before the actual standardisation process can start.”

 


Moving Streaming Production to the Cloud

NAB

Remote live production has become all the rage in the past 18 months for reasons that should be obvious. This doesn’t necessarily mean that every workflow has shifted to the cloud, but that is the logical next step. Even BT Sport, one of the world’s premier sports production teams, is doing R&D on lifting its current remote collaborative workflows into the cloud.

https://amplify.nabshow.com/articles/moving-streaming-production-to-the-cloud/

One company with experience of it in practice is production outfit LiveSports. Its president, Jef Kethley, who is also “Chief Problem Solver” at cloud workflow specialist PIZAZZ, has shared his observations on the pluses and the minuses with Streaming Media.

“The first thing to think about when considering a move to cloud production as an alternative to traditional, centralized, on-prem production workflows is, why the cloud?” he asks, rhetorically. “It’s because it has specific advantages for certain applications.”

These advantages may be well rehearsed but are worth repeating to add weight to the argument. In no particular order, Kethley suggests that the cloud offers resiliency for your live stream. In the unlikely event that AWS’s West Coast region goes down, it is easy to switch redundancy to another region (East Coast) — provided you’ve done the homework.
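In practice, doing that homework means scripting the switch before show day. Below is a minimal sketch (mine, not from Kethley’s article) that repoints a stream origin’s DNS record at a standby region using AWS’s Route 53 API; the zone ID, record name and hostnames are placeholders:

```python
# Sketch of a pre-planned failover: repoint the live-stream origin's DNS
# record at a standby region. Zone ID, record and hostnames are hypothetical.
import boto3

route53 = boto3.client("route53")

def fail_over(zone_id, record, standby_origin):
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "Fail live-stream origin over to standby region",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record,
                    "Type": "CNAME",
                    "TTL": 60,  # short TTL so players re-resolve quickly
                    "ResourceRecords": [{"Value": standby_origin}],
                },
            }],
        },
    )

# e.g. fail_over("Z123EXAMPLE", "origin.example.com.", "origin-east.example.com")
```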

The cost and efficiency benefits of distributed workflows are perhaps the easiest to grasp. “Whenever we get on site, we can go out with our engineers and our basic A-team on the ground, but our normal operators — our producers, our directors, our audio guys, even our graphics folks — are familiar with our workflow, and can be working anywhere, doing multiple events,” he says.

That’s much more efficient than having the whole team spend days traveling, setting up, working the show, and then traveling back again. Rather than having to dedicate that team to one event, a cloud infrastructure with a distributed workflow permits the same people to work on multiple events happening one after another.

Another benefit to distributed workflows and cloud production is the ability to copy-and-paste complete workflows once you’ve created a template that works for you.

Kethley says, “You can take the workflow you’ve used for one show and re-create it for another event while using another data center. It allows you to spin up multiple productions and multiple events without having to worry about starting over every single time.”
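As a sketch of that copy-and-paste idea (illustrative only; the field names are invented, not LiveSports’ actual tooling), a workflow can be kept as a plain data structure that is cloned and re-parameterized per event:

```python
# Illustrative "workflow as template": clone a proven production setup and
# re-parameterize it for the next event and data center.
import copy

BASE_WORKFLOW = {
    "region": "us-west-2",
    "inputs": [{"protocol": "SRT", "port": 9000, "source": "venue-encoder"}],
    "switcher": {"product": "Viz Vectar Plus", "instances": 1},
    "outputs": [{"destination": "rtmp://live.example.com/event"}],
}

def clone_for_event(base, event_name, region):
    workflow = copy.deepcopy(base)  # never mutate the proven template
    workflow["region"] = region
    workflow["outputs"][0]["destination"] = f"rtmp://live.example.com/{event_name}"
    return workflow

saturday = clone_for_event(BASE_WORKFLOW, "sat-basketball", "us-east-1")
sunday = clone_for_event(BASE_WORKFLOW, "sun-volleyball", "us-west-2")
print(saturday["region"], saturday["outputs"][0]["destination"])
```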

Final distribution is another benefit of cloud production. “If you’re going to streaming destinations, whether overseas or domestic, final distribution is definitely easier in the cloud,” says Kethley. You’re already on the internet. You can’t get any closer to it.

Disadvantages of Cloud Production

It’s not all pie in the sky. There are drawbacks too.

Getting familiar with a production workflow based on GUI access, as well as figuring out what your inputs are and how they get there, along with outputs and how to get them out, can be challenging, he says. “It’s not as quick or as easy as saying, ‘There’s an SDI plug, let me plug it in.’ ”

Similarly, diagnosing a problem is much harder in the cloud. It’s usually not as simple as, “Oh, that cable is unplugged.”

Likewise, training and operations in IP are a new challenge — simply because not everybody is as fluent in the cloud as they are in on-prem production.

“After a year of being in a pandemic, you’ll definitely find more producers and operators who are familiar with distributed workflows, but you should expect to have to spend some time training and teaching better operations.”

Finally, last-mile connectivity is crucial. It’s tied in with the training, Kethley says, in that anyone new to cloud production might think a standard internet connection is suitable for live work, when unmanaged networks are anything but.

The second half of Kethley’s article details some of the equipment LiveSports has used in its productions. His advice includes using Teradici as the GUI solution for switching cameras; Elgato’s Stream Deck for hardware control; the SRT protocol to secure and smooth your contribution uplinks (he also uses solutions from Sienna, LiveU and Matrox); and the Vizrt Viz Vectar Plus for video switching.

 


Big Bang or, Um, Small-to-Medium Bang? Lessons in Cloud Migration

NAB

Anyone involved in technology migration projects will probably tell you that it’s much easier to build systems from scratch than to incrementally migrate existing workflows to new systems. But the reality is that few can start from scratch. Because a return needs to be made on past investments, or because entirely new workflows would cause too much operational disruption, hybrid workflows will be important for several years, allowing every company to transition to the cloud at its own pace.

https://amplify.nabshow.com/articles/big-bang-or-um-incremental-bang-lessons-in-cloud-migration/

A hybrid scenario implies that the media lives, and is processed, both on-prem and in the cloud. In some situations, hybrid operations will be required regardless, as it may be more cost-effective to work on-prem before assets are moved to the cloud.

Before embarking on migration, CTOs and broadcaster engineering/IT teams should be aware of the advantages and limitations that come with working in the cloud. Lessons learnt should be used to optimize the workflow.

A panel convened by Streaming Media provides some perspective from executives who have made the move and come out the other side.

The panelists are Gerry Field, VP of technology and distribution services at American Public Television (APT); Renard T. Jenkins, VP of content transmission and production technology at WarnerMedia; Richard Oesterreicher, president and CEO of Streaming Global; and Shiva Paranandi, VP of technology operations and cloud architecture at Paramount+.

Each of them is looking in the rearview mirror, at least to some degree, and each makes the transition to the cloud sound easy, when of course it’s anything but. However, they all declare the move a success in that they are consistently able to scale and run their environments in the cloud.

Read a summary of the discussion, and watch the full panel video, at Streaming Media.

This summer, most of the non-live and near-live programming that APT distributes will be moved to the cloud and will no longer be fed via satellite. Among other things, this will eliminate redundant recording and storage across its 356 stations.

Nonetheless, said Field: “We are very clearly in a hybrid workflow environment, and we are going to remain that way. There are some things that just don’t make sense for us to do in the cloud. Our [quality control] is still very much an on-prem process. If we had to pay for that, it would add considerably to the cloud bills that we’re paying.”

ViacomCBS, Paramount+’s parent company, has taken the first of two steps on its cloud journey. The entire streaming business, Para­mount+, and some CBS news and sports properties have already been delivered to the cloud. This was roughly a three-year transition, including a time when the company was running its on-prem and cloud services in tandem as it moved fully to the cloud.

“That intermediate state, when you’re migrating from the data center to the cloud, is extremely important because you’ve got to keep both systems running,” said Paranandi. “That’s double the effort, so you have to make sure there’s enough automation and processes so you don’t double your staff, but you still can keep all your uptime going.”

Jenkins also advised a phased approach: “Look for the low-hanging fruit to find those things that you can easily move to the cloud. One of the first things that people think about is your archive, especially if you have an archive, a ‘near-chive,’ and a deep archive. You move the archive first to your ‘near-chive,’ and you make copies and you keep it local so that you can continue to work.”

Latency issues continue to be a problem, which is why Paramount+ is edge-caching a lot of its VOD content. “It helps reduce the latency in the cloud quite a bit,” reports Paranandi. “When it comes to live streaming, there’s a lot of the network backbone that we have to pay attention to. Where the content is sourced from and how it is distributed is pretty relevant.”

An advantage on the pro-cloud side is the ability to turn things on and scale quickly. The cons? Learning new skill sets and work approaches, dealing with security, and anticipating costs when almost every aspect of the workflow starts a meter running.

Rather than hiring in entirely new personnel with the requisite software programming and data analytics skills, Jenkins prefers to train up existing staff.

“If someone is willing to learn, and they have historical knowledge, that’s going to be a really valuable player for you,” he said.

A fundamental lesson is to ensure that processes ramped up for one event are spun down again at the end. Only that way will you optimize the pay-as-you-go OPEX of the cloud.

“You find out that a process has been running constantly in the background,” Jenkins said. “It’s something as simple as that, that you have to really get people to focus on and that makes a very big difference because it’s not set and forget, like we do with a lot of traditional broadcasts.”
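That focus can be automated. As a minimal sketch (my illustration, not WarnerMedia’s tooling): tag every cloud resource with the event it serves, then sweep and stop anything still running once the event wraps. The tag key and event name are hypothetical:

```python
# Sketch: stop any EC2 instances still tagged to a finished event, so the
# pay-as-you-go meter actually stops running in the background.
import boto3

def spin_down_event(event_id, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Event", "Values": [event_id]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [inst["InstanceId"]
           for res in resp["Reservations"] for inst in res["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)  # stopped, not terminated
    return ids

# e.g. run from a post-event hook: spin_down_event("sat-basketball")
```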

And one important lesson is that sometimes, old school is best. “Some of the efficiencies that you get from broadcasting content are just really hard to beat,” said Field. APT provides content for public television stations, and while last-mile delivery is being done primarily over broadcast transmitters, that transmitter output is also being fed to a streaming service.

“The content that we distribute eventually winds up on transmitters. It’s a question of how it’s getting there.”

 

Man In the Machine: The Ethics of AI

NAB

As artificial intelligence gains more sophistication and penetrates deeper into our daily lives, questions need to be asked about controlling its power. These questions aren’t new and in many ways are an extension of the classic Three Laws of Robotics devised by science fiction author Isaac Asimov eighty years ago. You don’t have to be a Skynet technophobe to join the conversation.

https://amplify.nabshow.com/articles/man-in-the-machine-the-ethics-of-ai/

Consider the following ethical challenges around AI as neatly outlined by futurist and “influencer” Bernard Marr.

Biases

AIs are trained on data, and we need to be aware of the potential for bias in that raw input.

“When we train our AI algorithms to recognize facial features using a database that doesn’t include the right balance of faces, the algorithm won’t work as well on non-white faces, creating a built-in bias that can have a huge impact,” Marr says.
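One concrete habit catches this early, shown in the minimal sketch below (my example, not Marr’s): report a model’s accuracy per demographic group rather than in aggregate, so a skewed training set shows up in the numbers. The figures are invented to illustrate the failure mode:

```python
# Minimal bias check: aggregate accuracy can hide a model that fails on
# under-represented groups, so break the metric out per group.

records = [  # (group, predicted_correctly) -- invented illustrative data
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

overall = sum(ok for _, ok in records) / len(records)
print(f"overall accuracy: {overall:.0%}")  # 62%: looks mediocre but tolerable

by_group = {}
for group, ok in records:
    by_group.setdefault(group, []).append(ok)

for group, results in by_group.items():
    print(f"{group}: {sum(results) / len(results):.0%}")
# group_a: 100%, group_b: 25% -- the aggregate number hid a serious disparity
```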

Control and the Morality of AI

The increasing use of AIs to make split-second decisions should be a cause for concern. Automating a goal highlight for churning out to social media is one thing; having your car react when a child runs out in front of it at 40 mph is another.

“It’s important that the AI is in control of the situation,” Marr writes, adding, “This creates interesting ethical challenges around AI and control.”

Privacy

We need data to train AIs, but where does this data come from, and how do we use it? Marr cites Mattel’s Barbie, which now has an AI-enabled doll that children can speak to. “What does this mean in terms of ethics? There is an algorithm that is collecting data from your child’s conversations with this toy. Where is this data going, and how is it being used?”

This clearly speaks to a need to check the power of big corporations with stricter rules around data collection, transparency of use and legal protection.

Marr extends this idea of power balance, and the dangerous lack of it, to governments (and therefore the military-industrial complex, so yes, Skynet).

“How do we make sure the monopolies we’re generating are distributing wealth equally and that we don’t have a few countries that race ahead of the rest of the world?” he asks. “Balancing that power is a serious challenge in the world of AI.”

Ownership

Of immediate concern to anyone in media should be the due protection of intellectual property. If an AI is trained on data, that data will likely originate from a human source, so to what extent should the creator’s rights be protected — and recompensed?

Blockchain is the likely solution here as a means of tracking an IP asset as it is parsed at lightspeed across the internet. But this field is nascent.

Environmental Impact

Marr suggests that training an AI can create 17 times more carbon emissions than the average American generates in about a year. That’s a pretty startling stat, and of course it’s an extrapolation of our daily use of the internet. Every single email and internet search clicks the gears (power, water, heat) in a data farm somewhere. It’s not just “in the cloud”; the impact is real.

“How can we use this energy for the highest good and use AI to solve some of the world’s biggest and most pressing problems? If we are only using artificial intelligence because we can, we might have to reconsider our choices.”

Humanity

Marr’s final challenge is “How does AI make us feel as humans?” As AI automates more of our jobs, what will our contribution be, as human beings? Even if AI augments more jobs than it replaces, Marr says, “We need to get better at working alongside smart machines so we can manage the transition with dignity and respect for people and technology.”

It’s clear that the discussion around the ethics of AI is actually about the morality and ethics of us as a species. The challenge is not only how we impose or insert that morality and ethics into a machine — but whether we can.


Wednesday 25 August 2021

Ariana Grande’s Fortnite Concert Opens Up The Metaverse

NAB

Epic Games’ latest major in-game live music event could be its biggest yet. The psychedelic Ariana Grande live experience on Fortnite was a multi-day affair that hooked into wider storytelling on the games platform.

https://amplify.nabshow.com/articles/ariana-grandes-fortnite-concert-opens-up-the-metaverse/

The concert followed previous in-game performances by rapper Travis Scott and DJ Marshmello.

Whereas those events were one-off experiences, the Grande event tied into multiple aspects of the platform, from the way the tour was announced to the tie-ins with aliens and iconic Fortnite moments and imagery.

“The sequence kicked off with players surfing a rainbow racetrack, hitting power-ups in a cross between Mario Kart and Splatoon,” describes TechCrunch. The racetrack sequence was followed by bouncing players through a Dr. Seuss-style landscape with candy-pink trees and giant floating eyeballs, before dropping them into a mini-game shooting down the game’s Storm King boss to Wolfmother’s “Victorious.”

“Finally, in a black room lit by stars, a towering Grande appeared, taking players through an extremely surreal world,” explains The Verge. “There were giant floating bubbles in the sky, a ride on a glittering llama, an M.C. Escher-style castle, and finally the pop star emerging from a crack in the ground to smash all of her fans with a bejeweled hammer.”

Grande even looked like a Fortnite character, The Verge noted, with glowing white eyes and a dress made of shimmering shards of glass. You can, of course, buy a skin so you can play as her in-game.

Earlier this year, Epic’s chief creative officer Donald Mustard described Fortnite as “an opportunity to almost create a new medium.”

Fortnite doesn’t have a traditional plot or characters, but instead uses live events and its ever-changing world as tools to create a long-running narrative.

“Steadily, nearly every aspect of the game has been pulled into this focus on storytelling, even the copious licensed tie-ins,” Mustard said.

Travis Scott’s in-game performance was seen by 12.3 million live viewers. The Ariana Grande event is likely to top that and persuade many more artists to engage in the Metaverse.

 


Who Will Win That Eternal Battle for Eyeballs?

NAB

https://amplify.nabshow.com/articles/who-will-win-that-eternal-battle-for-eyeballs/

Advances in TV technology are yet one more reason why simultaneous theatrical and streaming film releases are here to stay. The gap between cinema and home cinema is narrower than ever, according to the latest stats.

The Evolution of the TV Set 2021 report from Hub Entertainment Research published in July indicates that a majority (57%) of US homes have a set with a screen that is 50 inches or larger (it was only 28% in 2016). And 7% have a television that is 70 inches or bigger.

With most movies still distributed at 2K for cinema projection, playback in the home can actually exceed theatrical resolution: 44% of homes now own a 4K-capable TV, almost ten times as many as in 2016.

The Hub reports that 40% of TV homes have an external sound system — a TV connected to a sound bar, home theater speakers, or a stereo system.

Commenting on the report, David Tice at TV Rev notes that the pandemic increased consumer investment in home entertainment. In the Hub’s July 2021 study, “Predicting the Pandemic,” about 40% of those who watch on a smart TV say they bought one during the pandemic.

“It’s clear that TV tech enabling a cinema-like experience is more prevalent than ever,” Tice says.

Watching at home has always been less expensive, and Tice runs the numbers. Two tickets to see Black Widow cost $25, plus concessions ($10), for a total of about $35, on top of the time taken to drive and park. Compare that with staying at home and paying the $29.99 Disney+ Premier Access fee, about 25 cents for a bag of microwave popcorn, and a two-minute walk downstairs to an 80-inch 4K set with surround sound.

“In terms of money and time, the choice between going out or staying in is about even,” Tice says.

The economics swing completely in favor of home viewing when weighing up the cost of sending the family to the theater.
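As a quick back-of-envelope check in Python (using the prices Tice quotes; the family-of-four concessions figure is my own assumption, not from his piece):

    # Cost comparison using Tice's quoted prices. The family-of-four line
    # extrapolates from his two-person figures and is an assumption.
    ticket = 25 / 2          # $25 buys two tickets
    disney_fee = 29.99       # one Premier Access fee covers the whole household
    popcorn = 0.25

    def theater(people, concessions):
        return people * ticket + concessions

    home = disney_fee + popcorn
    print(f"two at the theater: ${theater(2, 10):.2f} vs home: ${home:.2f}")  # $35.00 vs $30.24
    print(f"family of four:     ${theater(4, 20):.2f} vs home: ${home:.2f}")  # $70.00 vs $30.24

The home price is flat no matter how many people are on the couch, which is exactly why the gap widens with family size.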

“It’s true that for some movies, it’s hard to beat watching a movie with 500 other souls. But the in-cinema experience also varies widely and can contribute to a desire to stay at home. Whether it’s people talking to each other, texting or talking on their phones, or food and drinks being served during a movie, the experience has suffered over the past decade.”

Before the pandemic, exhibitors were revamping the cinema experience by offering Premium Large Format (PLF) screens incorporating Dolby Atmos, supersized screens, and other luxuries like superior seating and food-to-chair service.

Theater owners will hope that options like these continue to differentiate their venues as the place to watch a new movie. “It may well work for blockbusters but ‘art house’ movies may well turn into ‘in-house’ movies enjoyed at home – the home experience will be good enough for character studies and documentaries,” Tice thinks.

There is another option: for the industry to buy into Doug Trumbull’s vision of 4K, high frame rate (HFR), 3D presentation.

“If we want the movie experience to be different from TV we’ve got to offer a spectacular, mindboggling experience that is more like a live Broadway show, a concert, a Cirque du Soleil,” he says. “It’s got to be bigger, better, much more immersive, more intimate and more spectacular.”

 


Neill Blomkamp Experiments With Volumetric Capture for “Demonic”

NAB

Director Neill Blomkamp has embraced volumetric capture for “simulation” or dream scenes in his new horror feature Demonic, the most significant use of the technique yet.

https://amplify.nabshow.com/articles/how-neill-blomkamp-deployed-volumetric-capture-on-demonic/

Volumetric capture (also referred to as lightfield or computational cinematography) involves recording objects or people using arrays of dozens of cameras. The data is then processed to render a three-dimensional depth map of the scene, whose parameters, including camera focal length, can be adjusted in post.
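A minimal sketch of the core idea, in Python with NumPy (illustrative only, not VCS’s production pipeline): each camera contributes a depth map that is back-projected into 3D, and the merged point cloud can then be re-rendered from any virtual camera in post.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project a depth map (H x W, in meters) to camera-space 3D points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx            # invert the pinhole model u = fx*x/z + cx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    def fuse(views):
        """views: (depth, (fx, fy, cx, cy), 4x4 cam-to-world) per camera; returns one cloud."""
        clouds = []
        for depth, intrinsics, cam_to_world in views:
            pts = depth_to_points(depth, *intrinsics)
            pts_h = np.c_[pts, np.ones(len(pts))]        # homogeneous coordinates
            clouds.append((pts_h @ cam_to_world.T)[:, :3])
        return np.vstack(clouds)

Once the scene exists as a cloud in world space, lens choice and framing become rendering parameters rather than on-set decisions, which is what makes them adjustable after the fact.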

The director of District 9 has a background in visual effects and says he was fascinated by the challenge of using the technique to create an entire narrative feature.

It’s an experiment that yielded a number of important lessons, the main one being that the technology needs to advance before it can become as mainstream as virtual production.

Tech Details

For the project, Volumetric Capture Systems (VCS) in Vancouver built a 4m x 4m cylindrical rig comprising 265 4K cameras on a scaffold. That was supplemented by “mobile hemispheres” with up to 50 cameras that would be brought in closer for facial capture.

Viktor Muller from UPP was Demonic’s visual effects supervisor (and an executive producer on the film). The filmmakers also employed the Unity game engine and specifically its Project Inplay, which allows for volumetric point cloud data to be brought into the engine and rendered in real time.

The data management and logistics “was an absolute goddamn nightmare,” Blomkamp shares in an interview with befores & afters. The team was downloading 12 to 15 terabytes daily. Just to process that volume and keep the show on schedule, Blomkamp had to add another 24 computers to those at VCS.
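To put that figure in perspective, some quick arithmetic (my own, not from the interview):

    TB = 1e12                        # decimal terabyte, in bytes
    daily = 15 * TB                  # top of the quoted 12-15 TB/day range
    print(f"sustained ingest: {daily / 86400 / 1e6:.0f} MB/s")  # ~174 MB/s, around the clock
    print(f"per extra machine: {daily / 24 / TB:.3f} TB/day")   # 0.625 TB/day across the 24 added computers

That is a round-the-clock ingest rate on the order of a saturated spinning disk, every day of the shoot, before any processing begins.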

Acting inside such a confined space (ringed by cameras, remember) was also hard for actors Carly Pope and Nathalie Boltt.

“If they were semi-underwater, maybe that would be the only thing that would make it worse,” he acknowledges. “So, hats off to the actors for doing awesome work in that insane environment.”

Nor could the director actually view the performances on a monitor in real time. Since it was taking a day to process the camera data, there was nothing to see.

“That means you don’t get any feedback from the volume capture rig. You’re just sitting around like a stage play, basically.”

However, it was the inability to capture the high-resolution data necessary for filming a narrative drama that proved the trickiest problem to surmount. If the cameras were brought to within a few centimeters of an actor, high resolution is certainly possible (“You may even see individual strands of hair”), but using the more conventional framings of close-up, medium and wide shots meant “an exponential drop-off in resolution.”
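The geometry behind that drop-off is simple pinhole-camera arithmetic. The sketch below uses illustrative numbers of my own, not the rig’s actual specs: detail on the subject falls inversely with distance per axis, so the total pixel count landing on a face falls with the square of distance.

    def pixels_on_subject(subject_m, distance_m, focal_mm=35, sensor_mm=24, sensor_px=4096):
        """Approximate projected width of a subject, in pixels (pinhole model)."""
        focal_px = focal_mm / sensor_mm * sensor_px   # focal length expressed in pixels
        return subject_m / distance_m * focal_px

    face = 0.20                                       # a roughly 20 cm wide head
    for d in (0.3, 1.0, 3.0):
        print(f"{d} m: {pixels_on_subject(face, d):.0f} px across the face")
    # 0.3 m: 3982 px ... 1.0 m: 1195 px ... 3.0 m: 398 px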

In tests, what resulted was a glitchy lo-fi look, which Blomkamp turned to his advantage by making it core to the story. In Demonic, the vol-cap scenes are presented as a “nascent, prototype, early development, VR technology for people who are in comas, or quadriplegic situations. I think in the context of the movie, it works.”

The captured material included RGB data presented as a series of per-frame geometry mesh files.

“The idea of taking that and dropping it into Unity, and then having it live in a 3D real-time package where we could just watch the vol-cap play, and we could find our camera angles and our lighting and everything else — I mean, I love that stuff. That’s exactly what I wanted to be doing, and the narrative of the movie allowed for it.”

Blomkamp describes the process of directing in post as “like old-school animation, where you’re just hiding and un-hiding different objects over a 24-frame cycle per second. And then you just keep doing that.”
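In code, that playback scheme is about as simple as it sounds. A hypothetical Python sketch (the production actually used Unity’s real-time tooling): the performance is stored as one baked mesh per frame, and playback un-hides only the current frame.

    from dataclasses import dataclass

    @dataclass
    class Mesh:
        name: str
        visible: bool = False

    class MeshSequence:
        """One pre-baked mesh per captured frame; playback toggles visibility."""
        def __init__(self, meshes, fps=24):
            self.meshes = meshes
            self.fps = fps

        def update(self, elapsed_seconds):
            frame = int(elapsed_seconds * self.fps) % len(self.meshes)
            for i, mesh in enumerate(self.meshes):
                mesh.visible = (i == frame)   # un-hide the current frame, hide the rest

    seq = MeshSequence([Mesh(f"frame_{i:04d}") for i in range(48)])
    seq.update(0.5)   # half a second in: only frame_0012 is visible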

If it sounds tortuous, Blomkamp wouldn’t disagree, but he feels that, given advances in computer processing power, the technique will get faster and the data easier to sculpt.

“The whole point of real-time cinema and virtual cinema is to be able to be freed from the constraints of day-to-day production… that you can be in a quiet, controlled post-production facility and load up your three-dimensional stuff in something like Unity, grab a virtual camera, and take your time, over weeks, if you want, to dial it in exactly the way that you want. So, in that sense, I don’t really think it matters that there’s a delay between gathering your vol-cap data and then crunching it down to 3D, so you can drop into your real-time environment.

“I think what does matter, though, and what is a huge issue, is how you capture it, in this highly restrictive, absolutely insane way that it currently has to be done. That’s what will change.”

 


Tuesday 24 August 2021

DP Rhet Bear on Capturing Netflix’s Coming-of-Age Series Never Have I Ever

copywritten for RED

Never Have I Ever returns to Netflix with fans eager for more of the high school, friends and family drama experienced through Devi (Maitreyi Ramakrishnan), a modern-day first-generation Indian American teenager. The series is inspired by producer-writer Mindy Kaling’s own childhood.

https://www.red.com/stories/rhet-bear-never-have-i-ever

The first season was a global hit for Netflix and producer Universal Television, attracting over 40 million households in the first month after launch in April 2020. Created and executive produced by Kaling with Lang Fisher serving as co-creator, executive producer, showrunner and writer, the sophomore season is executive produced by Universal Television and by 3 Arts Entertainment’s Howard Klein and David Miner.

The runaway success of the show came as a surprise to cinematographer Rhet Bear, who has lensed all 20 episodes spanning both seasons. “When I got the call from director and producer Tristram Shapeero to do the pilot, I read the script and was really excited to do a TV show about an Indian American teenager, particularly one that broke down stereotypes, but I had no idea quite how big a hit it was going to be,” Bear recalls.

Bear’s credits range from music videos such as Foo Fighters’ “Times Like These” to TV series including The Sarah Silverman Program, Speechless and The New Negroes, to features like The First Time. He says he realized how popular Never Have I Ever was because of the volume of emails and social media messages he received from all over the world. “What was fascinating was that up-and-coming cinematographers in India or Sri Lanka were reaching out to me. That’s kind of unusual!”

Bear set the show’s look with Shapeero for the first episode, and that look has gone largely unchanged through Season 2, with one important exception.

“To help the audience get into the head of our teenage characters it was particularly important to Tristram to shoot close up and wide as opposed to using longer lenses. We looked at large-format cameras and decided on the RED HELIUM sensor because we loved the look of the pictures when paired with 70mm Panavision Panaspeeds.”

He carried the full set of 17-100mm Panaspeeds but primarily relied on the 21, 27, 35 and 40mm.

Bear’s experience with the RED arsenal of cameras includes music videos and Season 3 of The Sarah Silverman Program on RED ONE, and the comedy series One Mississippi for Amazon Studios on RED EPIC DRAGONS.

“The HELIUM sensor at 7K combined with the Panaspeeds on a 70mm mount just lent the images a different feel from what we’re used to.”

Netflix greenlit a second season of Never Have I Ever as soon as it saw the show’s global reach, but that meant production started under COVID safety protocols in November 2020.

“We tried to keep the same look and feel of the show for consistency, but Season 2 was totally different in terms of lensing,” Bear explains. “The idea of getting close and wide with the actors was not an option. The crew, including my operators (A Camera Patrick McGinley and Steadicam Brian Hart) had to always maintain at least a distance of 6 feet from the actors. That meant using more traditional longer lenses — like an 11:1 Primo Zoom, a 15-40mm Primo and a set of Primo Primes detuned by Panavision’s Dan Sasaki to match the feel we had on the Panaspeeds on Season 1.”

Bear found backgrounds particularly challenging given the limited number of extras permitted on set when covering scenes at the bustling high school, at a cross-country meet, or on prom night.

Lighting was also a continuation from the first season, with gaffer Christian Grosselfinger using a lot of softer bounce light rather than hard sources on the actors’ faces.

“With this show we want to see these kids’ faces. That’s not to say we didn’t do darker scenes – but all these actors look great – there’s no need to hide it!”

The show LUT is based on an original design by Shapeero and Bear for the pilot and is carried forward in the grade by colorist Larry Gibbs at Universal. It deliberately removes blue, skewing blues toward cyan.

“We don’t have blue skies and blue jeans come up looking teal,” Bear says. “We use the same look for the entire show apart from the last episode where we wanted to mix in more blue.” That episode, which centers around a high school prom, was directed by executive producer, co-writer, and showrunner Lang Fisher.
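For the curious, the kind of hue skew the LUT applies can be sketched in a few lines of Python (illustrative only; this is not the show’s actual LUT, and the hue band and shift values are my own assumptions):

    import colorsys

    def skew_blues_to_cyan(r, g, b, shift=-0.08):
        """RGB in [0, 1]; rotate hues in the blue band toward cyan (~0.5 on the hue wheel)."""
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if 0.55 < h < 0.75:               # the blue band of the hue wheel
            h = max(0.5, h + shift)       # nudge toward cyan, clamping at cyan itself
        return colorsys.hsv_to_rgb(h, s, v)

    print(skew_blues_to_cyan(0.1, 0.2, 0.9))  # a jeans-blue input comes back noticeably cooler

A production LUT bakes a transform like this into a 3D lookup table sampled per pixel, but the underlying idea is the same.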

“Working with Lang was a great ending to what was a very challenging season,” relates Bear.