Wednesday, 17 June 2020

Casting 101: Common Rates For Actors, Talent, And Influencers


Copy written for Ambient Skies

One of the first lines in your production budget will be for front-of-camera performers. Whether you’re booking David Duchovny or Dave Debutant, your production is going to need to conform to Screen Actors Guild rules and SAG rates. These are the minimum amounts of money talent must be paid on a given production. However, calculating your production’s SAG rates can be harder than casting.
Pay rates vary based on the agreement in play, and that depends on the type of production and the total budget (and sometimes the distribution plan). There are both daily and weekly scales, with discounts usually on offer for weekly rates. In addition, different classes of performer will command different fees.
Before you can even roll the camera, you’ll have to submit both a budget and a copy of your screenplay to SAG for approval.
The following will give you a general idea of what to budget for your cast, but you should also check out https://www.wrapbook.com/essential-guide-sag-rates/ for further detail, and the rate cards at source: https://www.sagaftra.org/production-center/
We’ve also included common rates for budgeting the marketing of your show with social influencers.
H2 What are SAG-AFTRA rates?
SAG-AFTRA rates are the minimum amounts of money for which the Screen Actors Guild https://www.sagaftra.org/ will allow its members to work on a given project.
Hiring talent on SAG weekly rates will get you a discount. However, that does mean you’ll be paying for the whole week, including days you don’t use the performer.
It’s important to note that you’ll have to pay an additional 18 to 18.5% on top of your SAG payroll for health and pension contributions, called “fringes.”
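As a back-of-the-envelope sketch of how fringes stack on top of scale pay, consider the following. Only the 18 percent figure comes from the range above; the day rate and day count are hypothetical:

```python
# Rough SAG payroll estimate: scale pay plus health and pension "fringes".
SAG_FRINGE_RATE = 0.18  # low end of the 18-18.5% range quoted above

def performer_cost(day_rate: float, days: int,
                   fringe_rate: float = SAG_FRINGE_RATE) -> float:
    """Total budget line for one performer: base pay plus fringes."""
    base = day_rate * days
    return base * (1 + fringe_rate)

# Hypothetical example: a day player at the $1,005 theatrical scale for 3 days.
print(performer_cost(1_005, 3))  # 3,015 base + ~542.70 fringes = ~3,557.70
```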
Additionally, you’ll need to ensure that you have an insurance policy that’s SAG-friendly. Luckily, companies like Wrapbook https://www.wrapbook.com/essential-guide-sag-rates/ can spin up a policy that is SAG compliant at the lowest cost to you.

H2 SAG-AFTRA Theatrical Rates
SAG Theatrical Rates apply to actors performing in films across a variety of budgets where the film has an initial theatrical release. For a production destined straight for a streaming platform with a budget over $1 million, you’ll need the New Media agreement https://www.wrapbook.com/essential-guide-sag-rates/#SAG%20New%20Media%20Rates
SAG breaks theatrical agreements down by budget. Within that, the rates vary according to whether the actor is a principal (lead), which carries the same pay weighting as a stunt performer/coordinator, or an extra. As a couple of examples:
The SAG Basic Theatrical Agreement https://www.sagaftra.org/files/20172020wagesthatrical11_28_18_1.pdf is for feature productions over $2.5 million. The SAG day rate for main performers is $1,005, or $3,488 per week.
That’s only a minimum: agents will negotiate rates far in excess of the basic for leading talent on the biggest budget movies. Background actors are paid $174 a day.
The SAG Moderate Low Budget Agreement applies to non-episodic shows with budgets between $300,000 and $700,000 and has a day rate of $335 and a weekly rate of $1,166.
The SAG Ultra Low Budget Agreement applies to films budgeted at $250,000 or less. There is no weekly SAG scale for these projects, but the day rate is $125.
SAG Short Project Agreements cover films that have total budgets less than $50,000 and a maximum running time of 40 minutes. Unlike other SAG day rates, actor salaries are completely negotiable and you don’t need to ensure a theatrical screening. Films made under this agreement can be released at film festivals and on free-streaming sites like Vimeo.
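Pulling those tiers together, a first-pass lookup for your cast budget might read like the sketch below. The figures simply restate those quoted above (note this article doesn’t cover the $700,000 to $2.5 million bracket), so always confirm against the current SAG-AFTRA rate cards:

```python
def agreement_for(budget: float):
    """Map a total production budget to the theatrical agreement and
    principal day rate quoted in this article. None = fully negotiable."""
    if budget < 50_000:
        return "Short Project Agreement", None
    if budget <= 250_000:
        return "Ultra Low Budget Agreement", 125
    if 300_000 <= budget <= 700_000:
        return "Moderate Low Budget Agreement", 335
    if budget > 2_500_000:
        return "Basic Theatrical Agreement", 1_005
    return "Bracket not covered above - check SAG-AFTRA", None

print(agreement_for(400_000))    # ('Moderate Low Budget Agreement', 335)
print(agreement_for(5_000_000))  # ('Basic Theatrical Agreement', 1005)
```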

H2 SAG-AFTRA Television Rates
Calculating your SAG rates for TV is by far the most confusing, since rates here are determined by the number of episodes and, more often than not, the episode’s length. They’re detailed here: https://www.sagaftra.org/files/20172020wagesTV.pdf.
Here are a couple of examples as guidelines.
If you need an actor for just one episode of your series to say a few lines, you’re looking at $1,005 a day, $2,545 for three days, or $3,488 for the entire week.
SAG actors who appear in half or more of a season’s given episodes are paid weekly for their time, with $3,488 per week for appearing in every episode, $3,993 per week for appearing in more than half, and $4,656 per week for appearing in half.
Note that these rates are for performers on cable and streaming shows. For network shows, producers should plan to budget an additional 15 percent.
H2 SAG-AFTRA Commercial Rates
SAG commercial rates depend on where and how many times the commercial is aired. Instead of a weekly or day rate, principal actors in SAG commercials earn $89 an hour. A producer must then pay a fee to air the ad, followed by additional charges each time it airs. However, SAG offers different agreements (marked A, B or C) that allow producers to essentially ‘buy in bulk,’ depending on where the commercial will be airing.
The vast majority of SAG commercials are Class A, meaning that the commercial will air in over twenty cities. This is your bracket if you are shooting a national commercial that will air on four major networks (FOX, NBC, ABC, CBS).
H2 SAG-AFTRA New Media Rates
If your project is going straight to the web and your budget is less than $1,000,000, even if it’s for Netflix, then this is the rate sheet to look at: https://www.sagaftra.org/files/2017%20Special%20New%20Media%20Agreement%20Rate%20Sheet.pdf
For New Media projects less than $250,000 (but greater than $50k), expect to pay performers a minimum rate of $125 per day and background/stand-ins $96. Shows with budgets up to $700k should expect to pay major performers either $335 per day, or $1,116 per week and extras $130 a day. If your project falls between $700,000 and $1,000,000, the minimum you’ll have to pay SAG talent is either $630 per day, or $2,190 per week with extras $166 daily.
H2 SAG-AFTRA Music Promo Rates
Performer fees are negotiable (the performers will usually be the band members, who would presumably waive a fee). The day rate minimum for dancers is $562 on promos costing $200k or more. The day rate for background actors is at least 10% above the applicable jurisdictional minimum wage. The rate card is here: https://www.sagaftra.org/files/2019MVRateSheet.pdf
H2 How much do influencers charge?
If you are going to put some heft into marketing your project you’d be remiss not to include social media influencers. Celebrities, professionals, critics, and commentators make waves connecting with legions of fans across multiple platforms and in an array of formats from simple Tweets to Insta Stories.
An influencer’s social media post is essentially an ad placement, but there’s no standardisation around pricing. Indeed, it’s the wild west of advertising. Some influencers may be underpaid for their services, while others will overcharge.
Some actually charge nothing at all; you may be able to work out a quid pro quo kind of incentive that gives them something in return for free publicity.
A starting point is the one cent per follower rule (or $100 per 10,000 followers). From there, you can adjust for other factors, such as engagement rate, budget, campaign length, and other partnership specifics.
According to one report (https://www.webfx.com/influencer-marketing-pricing.html), influencers will on average charge the following (see the quick budgeting sketch after this list):
Facebook influencer: $25 per 1,000 followers
Instagram influencer: $10 per 1,000 followers
Snapchat influencer: $10 per 1,000 followers
YouTube influencer: $20 per 1,000 followers
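Treating those averages strictly as planning numbers, a quick per-post estimate might be sketched like this (the follower counts in the example are invented):

```python
# Average cost per 1,000 followers, per the WebFX report cited above.
RATE_PER_1000 = {"facebook": 25, "instagram": 10, "snapchat": 10, "youtube": 20}

def post_cost(platform: str, followers: int) -> float:
    """Estimate the cost of one sponsored post on a given platform."""
    return RATE_PER_1000[platform.lower()] * followers / 1_000

# Hypothetical campaign: a 50k-follower Instagram account plus a
# 200k-subscriber YouTube channel.
print(post_cost("Instagram", 50_000))  # 500.0
print(post_cost("YouTube", 200_000))   # 4000.0
```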
You don’t have to book a Kardashian to amplify your audience. Influencers are usually placed into three categories based on audience size: micro, power and macro influencers. For a breakdown of these, head to https://tinuiti.com/blog/paid-social/how-much-do-influencers-charge/.
Regardless of what social media influencer rates are, however, it’s important to look at them from a purely financial standpoint – just like you would an ad placement. Good questions to ask are: what’s the potential reach and return on investment? Does their audience line up with your target market? Could you reach that audience another way? Have they done something similar, and what were the results?


5 Things To Consider When Shooting With The Phantom 4K Flex Camera



Copy written for Ambient Skies


If you want super-sharp, ultra-fast motion imaging for your next project, then there really is only one camera to turn to. The Phantom Flex4K is the ultimate in slow-motion capture. Sporting a super-35mm CMOS sensor with a full resolution of 4096 x 2304, this specialist unit from camera maker Vision Research produces highly detailed, low-noise 4K images at up to 938 fps.
Devised originally for medical and scientific work, the Flex4K and its output are bona fide professional cinema standard, and the camera is a regular part of high-end commercial spot productions. There are a couple of catches though. It’s expensive: the camera can cost upwards of $100,000, making it a rental option for most. And when each shot can average between 64GB and 128GB, you need to be prepared to work with that much footage or your daily rental costs are going to mount.
Before you dive in, consider the following advice which could save you pain on the day.

H2 Know how to use loop recording
The Flex4K comes with a fixed amount of high-speed dynamic RAM. When the camera is in the pre-trigger mode (you've pressed ‘Capture’ in the user interface), the camera is continuously recording images into that memory. When it gets to the end of memory, it cycles back to the beginning and continues recording, constantly overwriting itself – until the camera is triggered. This is called ‘circular buffer recording.’
What you end up actually saving in memory is a function of how you've set up your trigger. It can be set so that only frames that occur after the trigger are saved (100% post trigger). In this mode, once the trigger is pressed any images already in memory are overwritten and you record until memory is full, then it stops. If you set the trigger to stop the recording (0% post trigger) and save all frames up to the time of the trigger, the camera will simply stop recording upon the trigger and all the frames in memory before the trigger will be saved. Or you can set the trigger anywhere in the middle, for example, having 90% of the recorded movie be what happens prior to the trigger and 10% after the trigger.
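The trigger logic is easier to see in code. The toy model below is our own sketch of the principle, not Vision Research’s firmware:

```python
from collections import deque

class LoopRecorder:
    """Toy circular-buffer recorder with a configurable post-trigger split."""

    def __init__(self, capacity_frames: int, post_trigger: float):
        # A deque with maxlen silently discards the oldest frame when full,
        # mimicking the camera looping back over its RAM.
        self.ram = deque(maxlen=capacity_frames)
        # post_trigger=1.0 keeps only frames recorded after the trigger;
        # 0.0 stops at the trigger and keeps everything already in memory.
        self.post_frames = int(capacity_frames * post_trigger)
        self.pre_frames = capacity_frames - self.post_frames

    def capture(self, frame):
        """Pre-trigger: record continuously, overwriting the oldest frames."""
        self.ram.append(frame)

    def trigger(self, feed):
        """Keep the newest pre-trigger frames, then fill the post allocation."""
        saved = list(self.ram)[-self.pre_frames:] if self.pre_frames else []
        saved += [next(feed) for _ in range(self.post_frames)]
        return saved
```

With post_trigger=0.1, for instance, the saved clip is 90% action leading up to the trigger and 10% after it, matching the 90/10 example above.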
H2 Choosing between ProRes and Cine Raw   
With the Flex4K you have the option of recording in Cine Raw or ProRes 422 HQ, and to maintain that option you’ll need to rent either a CineMag IV or a CineMag IV-PRO.
The full-size images are delivered at a throughput of 9.4 gigapixels per second in the Cine Raw file format. Superfast download to CineMags can be accomplished in seconds. For example, a 10-second clip at 1000fps would take about 40 seconds to download, although more often than not you’d have trimmed the clip in-camera, which would reduce download time further.
Using ProRes recording, of course, saves storage and increases total record time.
When working with ProRes, Vision Research advises the camera be set to full sensor resolution (4096 x 2304). ProRes files can be saved to the CineMag at 4K or scaled 2K resolution. The CineMag IV will not support recording at any other resolutions when set to ProRes.
In Run/Stop (RS) mode the camera will allow up to 30 fps direct to a CineMag IV, and 120 fps with a CineMag IV-PRO. 2K ProRes recording at higher frame rates is also available on the CineMag IV-PRO (in fact you can record at up to 1,775 fps in 2K).
In Loop mode, the camera will allow up to 938fps to RAM, before the file is saved to the CineMag. Saving in ProRes HQ mode to the CineMag IV takes about three times longer than saving RAW. CineMag IV-PRO mags are much faster; their save time is equal to saving RAW. The ProRes files in the mag are about 2.5x smaller than the un-interpolated RAW files, and take proportionally less time to save from the camera or CineStation IV.
Over five hours of 24fps ProRes HQ footage can be stored on a 2TB CineMag IV.
It’s worth noting that the camera maker has no plans to add other ProRes formats, feeling that, if higher quality is required, Cine Raw is the better option.
H2 How long can you record with the Flex4K?
The record time depends entirely on the camera’s resolution, frame rate, and the amount of memory being recorded to. At the camera’s maximum resolution and frame rate it will capture 10 seconds of video to 128GB of RAM.
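As a sanity check on that figure, record time is just RAM capacity divided by the sensor’s data rate. The sketch below assumes roughly 1.5 bytes per pixel (12-bit packed raw); that pixel-depth figure is our assumption, not a published spec:

```python
def loop_record_time(width: int, height: int, fps: int,
                     ram_gb: float, bytes_per_pixel: float = 1.5) -> float:
    """Approximate seconds of loop recording that fit in camera RAM.
    bytes_per_pixel=1.5 assumes ~12-bit packed raw (our assumption)."""
    data_rate = width * height * bytes_per_pixel * fps  # bytes per second
    return ram_gb * 1e9 / data_rate

# Full 4K (4096 x 2304) at the top rate of 938 fps into 128GB of RAM:
print(round(loop_record_time(4096, 2304, 938, 128), 1))  # ~9.6s, close to the quoted 10
```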
A record time calculator can be found in the 'Support / Resources & Tools' section of the Phantom website, as well as in the ‘Phantom Tools’ iOS App, which lets you estimate the maximum frame rate and record time at any given resolution.  With the Flex4K selected, choose the appropriate CineMag and memory size in order to simulate recording directly to that CineMag.
Incidentally, there is also a frame rate and exposure calculator and a lens calculator to help you select the correct lens for a Phantom camera based on some details about your shot.

H2 Avoiding Light Flicker
High-speed cameras can pick up flicker that is otherwise undetectable to the human eye, creating unsightly strobe effects when footage is played back slowed down. If you’re going to be shooting indoors at super high frame rates we recommend using at least a 2,000-watt fixture – at that wattage you avoid seeing the filament pulsing in the light. In this video https://youtu.be/Ztumota98ZA, the scene is lit with a 1,000-watt HIVE WASP Plasma Par with an output of 75,348 lux, which is close to a 2,500W HMI.
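The reason frame rate matters is that mains-powered lights pulse at twice the line frequency, and a flicker-free exposure has to integrate a whole number of those pulses. The quick check below reflects standard high-speed practice rather than anything Vision Research publishes:

```python
def flicker_safe_exposures(mains_hz: float, max_exposure_s: float) -> list:
    """Exposure times (in seconds) spanning whole half-cycles of mains
    lighting, which pulses at twice line frequency (120 Hz in North America)."""
    pulse = 1 / (2 * mains_hz)
    n, safe = 1, []
    while n * pulse <= max_exposure_s:
        safe.append(round(n * pulse, 6))
        n += 1
    return safe

# At 938 fps the exposure can't exceed ~1/938s, shorter than a single 1/120s
# pulse, so no safe exposure exists: flicker is unavoidable unless the fixture
# itself is effectively flicker-free (big filaments, plasma, DC-driven LED).
print(flicker_safe_exposures(60, 1 / 938))  # []
print(flicker_safe_exposures(60, 1 / 24))   # [0.008333, 0.016667, ...]
```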
The base sensitivity of the Flex4K is ISO 250T, and the exposure index can be pushed to over 1000 (ISO equivalent) without significant loss of image quality. The Flex (2K) has a base sensitivity of ISO 1000T, but we don’t recommend pushing the exposure index on that camera if you want to maintain optimum image quality.

H2 Alternatively, shoot with the VEO4K
The Flex4K is a high-performance, highly specialised camera and it does not come cheap. A couple of years ago Vision Research came out with a less expensive version of the camera which may tick your boxes if budget is really tight.
The Phantom VEO4K is still capable of recording 1000fps in 4K, since it has the same image sensor and codec and will record in the same Cine Raw format as the Flex4K. The workflow is the same too. Plus, its body is lighter and it’s cheaper to rent. So, what’s not to like?
Well, with the VEO4K you have to offload media onto CFast 2.0 cards rather than a CineMag, a workflow that is undoubtedly slower. Rather than seconds, you could be counting minutes. And when you need to turn media around super-fast, probably on a time limit given you’ve only got the camera for a day or so, every second counts.
The compromise could be worth it, but that’s one you’ll have to weigh up.

Thursday, 11 June 2020

Editing: Mrs. America's Todd Downing

POST Magazine

Through the eyes of women, Mrs. America portrays the feminist fight for legalized gender equality in the 1970s and its unexpected female-led backlash. The hit FX on Hulu drama has a stellar cast of women in front of and behind the camera, with creator and showrunner Dahvi Waller (Mad Men) intent on grounding the story in the toxic culture and politics of the era.


The nine-part series partially fictionalizes scenes and characters for creative purposes, and uses archive material to punctuate the drama and remind viewers of the story’s veracity.


“In my early discussions with Dahvi she was keen to use archive to ground the story and I think she liked that I had a documentary background to help flesh the concept out,” explains Todd Downing (pictured), ACE, one of the series’ three editors. “In documentaries you get used to working without a script, so your mind is open to what you can do in the edit. I don’t necessarily put Scene A together with Scene B, but question where any scene can go or be removed, or whether entire sequences can be reshuffled.”

Coincidentally, Downing had just spent several months on a documentary project sourced from the Ronald Reagan family archive. It was useful research for Mrs. America, which, though it doesn’t feature the former President, does explain how a campaign to derail the Equal Rights Amendment (ERA) paved the way for his election in 1980.

“Archive researcher Deborah Ricketts did a tremendous job in finding us a selection of material,” Downing explains. “We used some footage to establish location, some to move the story forward and some to add colour. It was tricky since we didn’t want to use archive showing an actual character in case it bumped against our drama. Also, there were strict license rules from some archive holders about what we could and couldn’t edit.”

Much of the focus is on Phyllis Schlafly (Cate Blanchett), a conservative activist and prominent opponent of the ERA. Downing explains that scenes featuring Schlafly tended to be shot on tripod or dolly and center framed whereas the energetic and socially progressive force of the liberation movement is photographed handheld.

“We had a visual language from the filmed material to start with and that informs the way you cut. It does feel like a different rhythm that naturally comes out of the way we tell Schlafly’s story versus the ERA’s. It’s like they are two different worlds.”

Another signature for the series is the use of split screens, a stylistic grammar that directors Anna Boden and Ryan Fleck with editor Robert Komatsu first deployed on the pilot.

“The wipes and corners have a very ‘70s feel to them,” says Downing, who cut his own split screen segment in Episode 6 ‘Jill’ for director Laure de Clermont-Tonnerre. “It’s a means of compressing story into a few seconds of screen time. My references were from Brian De Palma (Sisters, Dressed to Kill) with two different views of the same thing, one a little bit ahead of the other so you are playing with it rather than being prescriptive.”

A similar sense of experimentation lay behind Episode 8 ‘Houston’, an Emmy Award submission for Downing’s craft and a tour-de-force for actor Sarah Paulson, who features in every scene. Paulson plays Alice, a fictional character and cheerleader for the Stop ERA posse, whose experience at the National Women’s Convention in Houston is rendered through the bold choices of director Janicza Bravo.

“Sarah is in every scene and it’s all about getting in her head,” Downing notes. “Alice ends up taking a pill and downing some Pink Ladies, a combination which leads to a hallucinatory sequence.  The pill is undefined, but we wanted everything she experiences to seem new to her when she’s tripping.”

The introduction to the sequence is a two-minute shot, which holds on Alice’s face while she makes a routine phone call to her mum. 

“This is everything you are not supposed to do — a long single shot talking about something really boring, in this case a recipe,” says Downing. “But when the pill kicks in you are cueing the audience that something is off.

“It’s a really brave choice by Janicza but we were far enough into the episode to give this a try. It wouldn’t have worked had we not had the performance from Sarah. A lot of times you are fixing performance in the edit but not here and that goes for performances right across this show.”

To convey her drug- and drink-induced perspective, Downing employs jump cuts, shifts the scene order and plays with audio to enhance Alice’s heightened and erratic sensory experience.

“There was an order to how the scene played but we jumped around to find where the beats worked. We’re trying to stick with character and match her disjointed attempts to follow a conversation and not get tempted to do anything obviously psychedelic or flashy.” 

The scene is a microcosm of the drama as a whole in that it transitions fluently between comedy and gravity, with the drama driven by character.

“For me, that comes naturally, whether I’m working on Russian Doll (for which Downing was nominated by ACE and HPA) or a documentary (Escaping ISIS, 2015; Secret State of North Korea, 2014). I try and look for the comedy amongst the seriousness. It’s better storytelling to mix things up rather than just have one flat emotion.”

A scene in ‘Jill’ where Schlafly visits the home of religious conservative Lottie Beth Hobbs (played by Cindy Drummond) underlines this.

“It’s set up as an absurd comedy with Schlafly sitting on a sofa directly opposite Hobbs and we cut it almost like a sitcom, shot-reverse-shot. The scene also has echoes of the Coen Brothers. It feels almost surreal, so genteel and yet there is this dark undercurrent.”

The final shot has Hobbs ripping the head off a red rose to symbolise her thoughts on abortion.

“The full-on comedy in this scene makes a later scene so much stronger when we find out quite how extreme Hobbs’ views are, such as her belief that homosexuals should be burned at the stake,” Downing says. “The comedic and the violent are two sides of the same coin.”

Downing worked for seven months on the show, with Komatsu and Emily Greene, who each cut three episodes. “We had lunch almost every day and talked about everything from character development to music. We’d screen each other’s cuts for feedback and for knowledge of character through lines. Even though I’d read the script I needed to know exactly how we got from Episode 6 to Episode 8 and realised by viewing Episode 7 that I didn’t need to concentrate so much on certain aspects because they’d been previously established.”

Music supervisor Mary Ramos, who regularly collaborates with Quentin Tarantino, provided a vast mid-‘70s playlist, which the editors augmented with their own tracks. Downing proposed the lesser-known Abba track “Eagle” for the ‘trip’ scene but found it played too much like a music video and used Kris Bowers’ score instead. 

In another piece of serendipity, Downing says he grew up in downstate Illinois, an area much like the one where Phyllis Schlafly lived and ran her populist campaigns.

“The world of Mrs. America and in particular of the districts where Schlafly lived was very familiar to me,” he says. “I’ve since become obsessed with all the characters, even those with as few as ten lines in the series. I’ve even researched some supposedly Christian books to find out if I’m going to go to hell or not.”

Tuesday, 9 June 2020

Library and storage management

pp52-53 June issue InBroadcast
Remote working, hybrid on-prem and cloud migrations and smarter allocation of storage resources all make production more performant and cost-efficient
With production teams worldwide leaving their studios to work from home, organisations urgently need tools and architectures that will enable them to continue running specialised workflows like editing, rendering and post-production effects.
Among them is Qumulo’s CloudStudio, which is deployed on AWS or Google Cloud, allowing artists to connect from anywhere (including their couches) while experiencing seamless 30+ frames per second video playback.
Qumulo CloudStudio works with software tools including Adobe Premiere Pro and After Effects, Cinema 4D, 3ds Max, Nuke and Maya. Combined with ultra-low-latency remote PCoIP clients such as those from Teradici, remote artists get the same level of application responsiveness they’re used to, the company explains, because applications get the same enterprise-grade file storage they need to run properly.
“Qumulo running in the cloud provides best-in-class NFS and SMB protocol support, automatic cross-protocol permissions handling, and full metadata fidelity,” it says. “Some of the world’s largest movie studios, streaming content providers, and gaming companies are using Qumulo in the public cloud to run their customized pre- and post workflows at scale.”
“Remote workforce enablement for those working with documents, images, and online meetings is one thing,” says Jeff Kazanow of Levels Beyond. “But those working with high-volume, high-throughput video content are quickly learning that standard remote workforce infrastructure and dependence on VPNs will not cut it.”
To help media companies transition creative production and video distribution environments to a work-at-home model, Levels Beyond has developed a Proxy-in-the-Cloud solution. This enables secure access to media over AWS through its workflow engine, Reach Engine. The solution, which integrates with Aspera and Signiant for file transport and with AWS Elemental for file processing, is currently in use by the NHL and Amazon Studios.
Smarter asset management
With an industry traumatised financially by the crisis, the need to save cost is ever more pressing. Several solutions help optimise the storage of assets so that content is found more easily and held on the most economical storage tiers.
Tedial’s Evolution MAM system includes aSTORM, a content management solution that manages various tiers across departments, locations or in the cloud. This includes on-prem live storage, nearline, deep archive tape libraries or public cloud storage such as AWS S3 or AWS Glacier. 
Using logical storage groups and rules defined within each group, Tedial explains that aSTORM can move, back up and restore content when and where required. aSTORM can archive content to tape immediately while also keeping it online for a period determined by the asset’s logical storage group and genre: news content might be kept online for 48 hours, while live sport, which might need to be kept for editing throughout a whole week, can stay on online storage for seven days. Similar rules can be set using a public cloud.
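Hypothetically, rules of that kind might be expressed along the lines of the sketch below. This is purely illustrative and not Tedial’s actual rule syntax:

```python
# Illustrative genre-based retention rules in the spirit of the aSTORM
# example above (not Tedial's real configuration format).
STORAGE_GROUPS = {
    "news":  {"online_hours": 48,     "tape_archive": "immediate"},
    "sport": {"online_hours": 7 * 24, "tape_archive": "immediate"},
}

def still_online(genre: str, hours_since_ingest: float) -> bool:
    """Should the asset still occupy expensive online storage?"""
    return hours_since_ingest < STORAGE_GROUPS[genre]["online_hours"]

print(still_online("news", 60))   # False: news rolls off after 48 hours
print(still_online("sport", 60))  # True: sport stays online for a week
```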
Spectra Logic’s storage management software StorCycle is designed to reduce primary storage costs by migrating inactive – non-accessed – data to cheaper storage devices. Spectra promotes a two-tier model: a Primary Tier for production data, and a Perpetual Tier spanning tape libraries, NAS, object storage and cloud storage, used for secondary storage, distribution, archive and disaster recovery. The software identifies inactive data on the primary tier and migrates it to the more affordable Perpetual Tier.
Spectra explains that organisations typically store their growing banks of data on the more expensive Primary Tier, and says up to 80 percent of this data is inactive, costing both space and money.
“IT professionals know they can’t keep adding costly flash and disk drives to their storage architectures when capacities are maxed out,” says Nathan Thompson, CEO. “The storage industry hasn’t delivered the right tools that can easily and optimally manage data. StorCycle’s unique ability to scan and migrate inactive and project data from a costly Primary Tier of storage to a Perpetual Tier, consisting of less expensive storage targets, benefits data creators, IT professionals and organisations everywhere.”
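The underlying scan-and-migrate pattern is simple enough to sketch with plain filesystem calls. The stand-in below is not Spectra’s product or API, just an illustration of identifying inactive files by last-access time and moving them to a cheaper target:

```python
import os
import shutil
import time

def migrate_inactive(primary: str, perpetual: str, days_inactive: int = 90) -> None:
    """Move files untouched for `days_inactive` days off the primary tier.
    Illustrative only; real products add indexing, policies and restore stubs."""
    cutoff = time.time() - days_inactive * 86_400
    for root, _dirs, files in os.walk(primary):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:  # last accessed before the cutoff
                dst = os.path.join(perpetual, os.path.relpath(src, primary))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
```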
Masstech’s storage and asset lifecycle management platform, Kumulate, enables operators to search and browse assets located on any storage tier at any site.
For users looking to migrate content, the Kumulate Orchestrator module has automated tools for migrating content across hardware versions, storage tiers (e.g. tape to cloud) or from legacy systems to other storage platforms. Other standard workflows include publishing to specific formats (e.g. YouTube and Facebook), bulk transcoding, and auto-archive for iNEWS, ENPS and other newsroom systems. AI and ML services such as facial or object recognition, or speech-to-text transcribing can be plugged in to generate metadata that is stored alongside the assets and searchable from anywhere.
Kumulate also integrates into Avid and Adobe production environments, allowing editors to manage their asset storage from directly within editing applications. Efficiency of local disk and cloud storage is maximized with features such as partial file restore of sequences.
Marquis Broadcast’s Project Parking tool for Avid storage management supports the new OP1A workflows found in Media Composer. Project Parking and Marquis’ Workspace Tools also integrate with Avid Nexis Cloudspaces, simplifying disaster recovery, workflow sync and cloud backup.
“If you’re not running Project Parking regularly, you may be wasting around 20 percent of your Avid Nexis system, which has huge cost implications,” explains Paul Glasgow, MD, Marquis Broadcast. “Project Parking’s dashboard feature provides site-wide visibility to monitor usage, allowing companies to ensure that the whole facility is running effectively, with analytics to prevent duplicate and orphaned media propagating into the Workspace Sync or Backup. This saves significant amounts of storage whilst also accelerating time-critical sync backup and recovery processes.”
A new Edit Bridge panel enables custom searches of content held on Avid Interplay systems for loading directly into Adobe Premiere Pro or After Effects. Marquis says Edit Bridge is ideal for large broadcasters, enabling users to access a choice of PAM systems. This is especially important when a promo department is creating promos for multiple production genres spread across one or more Avid PAMs. Given appropriate security permissions, promo editors can directly access projects shared across Avid storage systems to accelerate production.
New functionality in the VSNExplorer MAM uses semantic searching (in which the search engine understands the meaning of the query rather than looking for literal matches) to improve how metadata is associated with content, in turn making search much more productive.
“For any organization looking to migrate its existing archive systems, or to bring its elderly archive system into line with what’s possible with new technologies, this new functionality will be extremely appealing,” says Patricia Corral, marketing director. “It can exponentially increase the speed with which content can be catalogued, located and retrieved.”
VSNExplorer MAM can syntactically catalogue all the terms and sentences used in the description of scenes in order to achieve “formal, unambiguous, accurate and user-agnostic metadata for content in any language”. Qualifiers – such as colours, mood, commercial names and types of objects – can be added to metadata in order to achieve more precise results.
Recent news from Crispin is its integration last year with Sony’s Media Backbone NavigatorX for proxy view and prep of on-air schedules. Functionality includes easy searching and comprehensive browsing of content, and the ability to review, trim and prep content in a proxy view.
News packages created in production can be reviewed before air; sales teams can search, view and download customer content and the marketing department can access promo and event content. Schedule needs are taken care of through automatic updates to the playlist. All of this is accomplished without tying up expensive video server ports.
Flexible hybrid migration
The crisis is going to accelerate adoption of cloud-based infrastructure, but this won’t happen overnight. Hybrid storage and workflow systems are likely to dominate, with a number of new innovations happening in this space too.
Whether you call it scale-out flash storage (SOFS) or software-defined storage (SDS), organisations are rapidly embracing new storage options driven by smart software layers that are more scalable, flexible, performant and lower cost.
“SOFS solutions are tailored to modern data centres that are increasingly running data-intensive workloads,” Tom Leyden, VP corporate marketing, Excelero explains. “These data centres are often doubling the demand for storage and processing capacity every 18 months – while budgets are not keeping up.”
SDS and SOFS have changed the calculus of deploying storage solutions. With storage the backbone of everything, this new generation should enable organisations to maximise ROI across the data centre: making full use of NVMe flash, reducing the hardware needed to run applications, and maximising GPU utilisation.
Excelero NVMesh was designed with cloud as the primary use case. According to Leyden, this means that for hybrid and private cloud deployments the product provides data path separation, no controller bottlenecks and a 100 percent software-only storage solution with no hardware dependency.
The new F-Series line of non-volatile memory express (NVMe) flash drives from Quantum is for studio editing, rendering, and other performance-intensive workloads. The company says it’s the most significant product launch it’s made in years because it’s the first based on the Quantum Cloud Storage Platform.
The F-Series provides direct access between workstations and the NVMe storage devices and is capable of handling multiple concurrent ingest streams and playing out this content in real-time. By combining these hardware features with Quantum’s Cloud Storage Platform and its StorNext software the solution delivers end-to-end storage for post and broadcast.
This platform is a stepping stone for Quantum to move to a more software-defined ‘hyperconverged’ architecture, and is at the core of additional products it will be introducing down the line.
“StorNext improves collaboration through comprehensive multi-protocol access, protects data through advanced replication and copy functionality, and offers automated tiering of data to capacity-optimized storage,” explains Jamie Lerner, company president and CEO. “This allows customers to maximize the value of content across its entire life-cycle for workflows such as those found in postproduction for real-time editing of 4K and 8K content and sports environments with tens to hundreds of cameras generating content.”
XenData’s new Multi-Site Sync service for cloud object storage can create a global file system accessible via XenData Cloud File Gateways. The gateways each manage a local disk volume that caches frequently accessed files. It supports partial video file restore and streaming and scales to 2 billion files, unlimited cloud storage and up to 256 TB of local disk cache at each location.

“The solution optimises an organisation's productivity by providing global file sharing across multiple facilities combined with excellent local performance provided by the local disk caching,” explains CEO Phil Storey.
Each instance of the synchronised gateway runs on a physical or virtual Windows machine and allows the global file system to be accessed on each local network as a standard share using SMB, NFS and FTP network protocols. When a file is written to the cloud object storage via one of the gateways, it immediately appears as a stub file within the global file system on all other gateways.
The solution supports AWS S3, Azure Blob Storage and Wasabi S3 and works with multiple cloud storage accounts, allowing simultaneous use of multiple cloud storage providers within the global file system.
Multi-Site Sync is scheduled to be available in May 2020 priced from $150 per month for a system that manages up to 10 TB of cloud storage and has two gateways. 
EditShare’s latest version of its file system and management console is EFS 2020. EditShare explains that, unlike generic IT storage vendors, it has written its own drivers for EFS for use with Windows, MacOS and Linux. EditShare manages the entire EFS 2020 technology stack from the file system to OS drivers, which means enterprise-level stability and faster video file transfers for high-bandwidth, multi-stream 4K workflows.
It contains File Auditing, claimed to be the first and only real-time, purpose-built content auditing platform for an entire production workflow. It is designed to track all content movement on the server, including any deliberately obscured change.
NOA is focused on efficient content preservation and believes broadcasters and production houses are facing the same challenges.
A systematic in-house digitization approach, for example, can efficiently manage more than 60,000 tapes a year compared to the mere 600 tapes a solely production-driven strategy typically achieves, NOA argues. It therefore plays a significant role in saving a much larger portion of an archive in the same period of time, helping archivists construct a bigger and more coherent audiovisual repertoire.
To sustain the efforts of media institutions worldwide to protect their treasures, NOA recently launched a mediARC user forum. The project is designed to support AV archivists and create interaction and cooperation between institutions around the world.

The Mix Room Finishes Audio Workflows With ClearView Flex

Sohonet


The Mix Room, Vancouver, on using ClearView Flex to collaborate with clients in real time, from home as well as from the facility.
Owned and operated by Jamie Mahaffey and Marty Taylor, The Mix Room is a full-service, award-winning audio post completion facility located in downtown Vancouver, now operating a socially distanced workflow.
Long before most of the industry was forced into distributed collaboration, Mahaffey and Taylor had made work-from-home workflows routine. “We have both built out home studios which mirror the set-up we have downtown, enabling us to perform everything up to and including the final mix without having to be in the office,” explains Mahaffey.
It offers 5.1 mix studios and two editorial/premix suites along with ADR, foley and voiceover. All rooms can also monitor Dolby Pro Logic and adhere to the Dial Norm protocol.
Presciently too, The Mix Room had invested in a Sohonet solution in early 2020 to facilitate every stage from dialogue edit to final broadcast deliverable. “We were using other systems and found them clunky, difficult, and lagging behind real-time,” Mahaffey says. “For these technical reasons, work wasn’t much fun, and we thought there must be a better way to do this. 
“We cast the net around with folks we knew in the industry and the feedback we got was to try Sohonet,” Mahaffey adds. “Once we’d demoed ClearView Flex it was immediately clear that this was the way to be going.”
The Mix Room began using Sohonet’s real-time remote review tool ClearView Flex to run work-in-progress reviews and final presentations with clients remotely. “We share files with clients, they send us back notes, we address those notes live with them over ClearView Flex and use real-time teleconferencing. It’s as close as possible an experience to us all being together in a studio. From that point of view, ClearView Flex is the obvious solution.”
He explains, “We recently finished a session on an animated feature streaming media between ourselves and the client in New York for eight hours solid without a single blip. The lag between pressing the play button here and the person on the other end seeing it is just three frames or 3/30th of a second. That’s quite amazing.”
The Mix Room has established a strong reputation for its work in sound designing, mixing and mastering episodic and feature animation for clients including Mainframe Studios in Vancouver and SilverGate Media in New York with shows destined for Nickelodeon and Netflix. Series such as Lego Jurassic World and Octonauts might be ordered in dozens of episodes. Typically, the senior production team would visit The Mix Room in Vancouver to supervise the first few episodes “to get the show on its wheels” and use remote reviews to sign off on the rest of the run.
“One of the major factors of the Sohonet system that appeals to clients is that it does not require a third-party server to store the media,” says Mahaffey. “It is a portal from which we access the media and which ClearView Flex streams as an encrypted file. That’s the same whether we are in Vancouver or at our home studio. Clients want the guarantee of watertight security for their high-value media and Sohonet’s studio-grade locks and keys provides it.”
While social distancing has temporarily precluded the physical presence of more than a couple of people at The Mix Room, the team have been just as busy during lockdown. “Animation is one of the few genres able to continue pretty seamlessly as a remote production,” says Mahaffey. “Animation was on fire prior to Covid-19 and this unexpected situation has only created more demand.”

Storytelling and the Era of the Empath: Poppy Crum

HPA
“I am passionate about creating technology that enables us to translate the creator’s true intent,” technologist and neuroscientist Poppy Crum told the TR-X audience at the 2020 HPA Tech Retreat. “For me, there is a truth that the content creator wants me to feel – whether that’s fear, joy, disgust…. But my environment – and technology that assumes we all look and react the same – doesn’t always allow that. That there are better ways of ensuring that the intent reaches every viewer or listener in the richest way possible is, I believe, a goal worth striving for.”
Crum is deep into the future of what happens when technology knows more about us than we do, and she believes that although it’s easy to jump to unsettling thoughts of the cautionary or dystopian tales of Mission Impossible or 1984, implemented ethically it’s not a bad thing. In fact, she feels, “it’s probably the most empowering opportunity we have to enrich and elevate experiences of storytelling and technology capacity for individuals of all demographics and biological composition.”
Today, she says, AI algorithms can detect our slightest facial microexpressions, differentiating between a real smile and a fake one, predicting or diagnosing our mental or physical health from the patterns of our speech, or even knowing whether we may have early signs of illness or are feeling emotions such as joy and suspense solely from the chemical composition of our breath. Whether we like it or not, we were sharing a lot about our internal states long before we made it commonplace to wrap ourselves and our spaces in digital devices that track our every move, exhale, or heartbeat. But now we do, and there is a lot that data can do for each of us.
Formerly Research Faculty in the Department of Biomedical Engineering at Johns Hopkins School of Medicine and now Chief Scientist at Dolby Laboratories and Adjunct Professor at Stanford University, Crum has spent a lot of time studying the circuits of the human brain that create the unique perceptual realities that we all possess.
“Each of us hears and sees the world differently as a result of the interaction between our individual biological capacity and the environment around us,” she said. “The distributions of colors, contours and sounds we surround ourselves with in our urban or rural environments are vastly different. These shape our unique perceptions, and through neuroplasticity impact the way our brain allocates its resources. For example, if I have spent a lot of my life in a rural desert environment without a lot of difference in hue, fewer sharp edges, and more need for me to identify subtle shifts in shading and contour, my brain will allocate more resources to decoding the more limited set of hues and shifts in contours in order to be effective in that environment. In contrast, years spent in noisy cities will shape how we are each able to react and effectively attend to resolution across a wider color gamut, sharper edges, and the intensity and cacophony of sounds that surround us.”
Our relationship with technology and, by extension, the context we experience it in also shapes us. For example, someone who has just played their first forty hours of Call of Duty may be forever changed. They can be expected to have heightened visual acuity and faster and more effective probabilistic inference critical to strategic planning.
“Any time you build a new way of interacting with content or technology you are affecting human capability,” Crum said. “The point is that we can do this by design.”
This is already happening. Netflix, for example, uses data about its users to tailor film and TV recommendations to individual profiles. Its algorithms even adapt the color of artwork and font size. Audio playback systems can position the sound of an object in space in accord with the creative intent. Emerging technologies like object-based broadcasting allow the content of programs to change according to the requirements of each individual audience member. Silicon Valley and Hollywood studios make use of electroencephalograms (EEGs) to understand how our moods affect the content we watch.
But all of this merely scratches the surface of the possible.
At present, Crum contends, most of our technology is built as ‘one size fits all,’ geared towards pretty much one demographic – typically a white male. It is not personalizing the way content is perceived at the individual user level in a way that delivers the true intent of the artist or the technology.
“I’m not talking about avatars replicating emotions. I am saying that technology is not responsive to my internal state. Even the most intelligent thermostat on the market does not know whether I’m hot or cold, or what I’m trying to do at that moment. If it takes even a small amount of information learned through signatures of combined sensors from the environment or wearable technologies into account then suddenly it is remarkably more effective at facilitating the goal of translating the technology or creator’s intent and improving the user’s experience.”
This goal is tantalizingly in reach.
“We already record detail about the creative intent in metadata. You can imagine extending that to capture more information about the emotions and feelings intended by a creative scene or effort. In addition to which the ubiquity of sensors and the ability to amalgamate our personal and biometric signals offers a way of closing the loop.”
She has been able to show how changes in the density of CO2 in a space – such as movie theaters – can correspond with changes in emotion and stress of individuals in the room.
One demonstration involved a screening of National Geographic’s Oscar-winning rock climbing documentary Free Solo. From special tubes installed throughout the theater, scientists on Crum’s team were able to measure, in real time and with high precision, the continuous differential concentration of carbon dioxide. But what the trace presented to the HPA audience really showed, for Crum, was “the entire room and audience in the theater going on the creator’s journey.”
“It’s our collective suspense driving a change in CO2,” Crum explained. “You can see where Alex [Honnold] summits and where he abandons the climb, you can trace the character’s love story. The audience is broadcasting a chemical signature of their emotions. It is the end of the poker face.”
Combine this with input from other sensors, such as heart rate and thermal cameras, and paired with machine learning and AI assessment, and it’s possible to show that changes in the thermal signature correspond to shifts in an individual’s (or a group’s) engagement and attention.
In a recent talk for TED, Crum calls it the era of the empath.  “If we recognize the power of becoming technological empaths, we get this opportunity where technology can help us bridge the emotional and cognitive divide. When technology is empathetic, it modifies its state by the response of our internal experiences. And in that way, we get to change how we tell our stories.”
Crum presents exciting food for thought. Are we capturing the right signals to preserve and transmit the intent of the creator? How can knowledge that our spaces and technologies know what we are feeling feed into ways we create, deliver and consume content in order that we might better experience the intent of the artist?
“We get a chance to connect to the experience and sentiments that are fundamental to us as humans in our senses, emotionally and socially. But regardless of whether it’s art or human connection, today’s technologies will know and can know what we’re experiencing on the other side, and this means we can all be closer and more authentic.”

Monday, 8 June 2020

Editing “Da 5 Bloods:” How to Mix Materials and Deliver a Message That’s More Relevant Than Ever

Creative Planet
“My conscience won’t let me go shoot my brother, or some poor, hungry people in the mud, for big powerful America. They never called me nigger. They never lynched me. They didn’t put no dogs on me.” 
Muhammad Ali’s politically super-charged call to arms opens Da 5 Bloods and clearly sets out director Spike Lee’s intent to set the record straight on the white-washed reporting of the Vietnam War.
“Over the years Vietnam war films have exhausted a lot of the politics and stories but the black experience has not been one of them,” explains the film’s editor Adam Gough. “Spike Lee is always asking the audience questions and there are a lot of hard truths about war today revealed about the events half a century ago.”
“I got a call from my agent in December 2018 and he put Spike on the phone,” Gough says. “I had no warning, which was probably to my advantage since it gave me no time to mess things up in my head.”
With regular collaborator Barry Alexander Brown busy directing his own project, Lee was looking for a new partner in the cutting room. “He said he was a big fan of Roma [also edited by Gough]. We chatted and he sent me the script,” Gough relays.
“It was exactly what you’d expect from a Spike Lee joint. Politics and seriousness mixed with light-hearted humor, written with the same energy you find when you meet him in person.”
Da 5 Bloods is the story of four black veterans who go back to Vietnam to find the remains of their fallen squad leader (played by Chadwick Boseman), as well as a trunk full of gold that they buried in the jungle during the war.
Early discussions included references to Lawrence of Arabia. “Spike wanted to make an epic journey, not in the visual style of Lawrence of Arabia, but about the internal journey of Vets revisiting their experiences of decades before.”
“I prefer to avoid the set in order to view material objectively but just being in proximity to the director was important,” Gough says.  “Every day Spike would come in [to the edit room] and watch dailies and give me a heads-up about what was coming tomorrow. I’d have cut the dailies from the day before and we’d watch that together. This was great since by the end of the shoot we not only had an editor’s cut but close to a director’s cut.”
He continues, “As a filmmaker, Spike is very organic. He will change his mind on the day. If there’s something he likes he will reset the scene accordingly. The action scenes, for example, were storyboarded but the footage didn’t match any of it. The boards were an idea of the intent, which he freely developed on location.”
From Malcolm X to BlacKkKlansman, Lee has incorporated archive footage in his features and Da 5 Bloods is no exception. The opening three minutes of the film is a montage taking the viewer on a quick tour of the Vietnam War to explain the events that have shaped the characters’ lives.
The 18-hour documentary film series The Vietnam War directed by Ken Burns and Lynn Novick provided a body of material for Gough to “get up to speed” on the history: “Part of my research was watching [this documentary] and noting any visually striking imagery. The idea was to either use the same imagery or find similar in the archives.
“Our incredible archive researcher Judy Aley combed hours of footage to uncover just what we wanted. The problem we had—which is part of the reason for telling this story—is that most of the material showed white GIs. News footage of the time was very white oriented. When we needed to show the experience of black Vets coming home and being hugged by their families, we had to dig deeper.”
The filmmakers combine some of the era’s most iconic images – a Vietcong prisoner being shot in the head at point-blank range, a young girl running naked from a napalm attack – to contrast with lesser-seen but equally revealing images.
“We tried to avoid too many obvious images but using a few of them helps an audience know exactly where they are and what the mindset of the story is,” Gough says.
Lee goes as far as to nod to Apocalypse Now in one scene. Instead of The Doors, though, the soundtrack includes the psychedelic classic “Time Has Come Today” by the Chambers Brothers and the anti-war songs of Marvin Gaye: “Music is his thing,” Gough says. “Spike hand-picked all the tracks for the source cues.”
Cinematographer Newton Thomas Sigel shot on ARRI Alexa LF for the film’s present-day story and 16mm for flashback sequences, mixed with some 8mm presented in 4:3 aspect ratio, in keeping with real footage of the time. Where possible, they went back to the original archive source and scanned it at 4K.
“The most difficult element was the use of archive,” Gough reveals. “The narrative backbone was strong but finding a way of punctuating the story with the archive without breaking the flow was tricky.”
A pivotal scene set in 1969 depicts black soldiers learning of the assassination of Martin Luther King.
Lee tells Vanity Fair that the US Armed Forces came close to being torn apart by the event: “They also heard that their brothers and sisters were tearing shit up in over 100 cities across America. The tipping point came very close; the black soldiers were getting ready to set it off in Vietnam—and not against the Vietcong either.”
To create the scene, Gough found archive footage of protests and riots in LA and New York: buildings burning, police beating people. All of this would have been easy to show, but the filmmakers were trying to present the wider spread of unrest across the country.
“That meant researching local news stations and city archives and not being content to use the lowest-hanging fruit.”
The archive is superimposed on film of GIs back in Vietnam. “We’re trying not to lose the connection between the soldiers and events at home. Finding the images that worked together, and reworking them for composition while retaining the high emotion of the scene and the strong performances, was a dance that took a long time to fall into place.”
Gough was assisted by first assistant David Valdez and assistants Veronica Vozzolo (who supervised the archive tracking in the edit) and Pilar Gómez-Igbo with vfx editor Luftar Von Rama and trainee Panupan (Ong) Kanchanwat.
Relocating to Brooklyn after the 10-week shoot (which also included photography in Ho Chi Minh City), Gough attended weekly screenings of the film in progress with Lee. The film was released at 155 minutes, down from an initial cut of over three hours.
“We’d play the film all the way through and talk about it. He is very easy to edit for because he wears all his emotions on his sleeve. He’s also open to experiment. Even if I went down the wrong path and it turned out to be a bad idea, he understood the reasoning and encouraged trying something different.
“Many directors are like that, perhaps, but only Spike will jump up and celebrate like the Knicks have scored when it’s a great edit.”