Tuesday 30 May 2023

BTS: Fast X

IBC

article here

The look of the multi-billion dollar Fast franchise is as much a part of its DNA as star Vin Diesel, from the perspective of colourist Andre Rivas.

The Fast and Furious series has burned through a number of directors on its high-octane path to $7 billion in box office receipts. John Singleton, James Wan, F Gary Gray, Rob Cohen and Justin Lin have all come and gone, with Louis Leterrier now in the director’s chair for the latest instalment, Fast X.

A constant throughout the Universal Pictures franchise since The Fast and the Furious in 2001 has been its colour-popping look, which is as much a part of the brand as Vin Diesel and pimped-up cars.

It’s also fair to say that the key architect of the series’ look is Australian cinematographer Stephen F Windon ACS ASC, who has lensed all but three of the ten blockbusters, including the last six.

He would have been a huge help to Leterrier who, when offered the chance to direct in April 2022, had just two days before getting on a plane to London to take charge of the $340m production.

“Initially, the big challenge was that I came onto the project very late [but] that also gave me the opportunity to go on instinct, and not second-guess my decisions. That was quite refreshing in a job like this where you’re always told what to do and you’re getting notes from the studio all the time,” he told postPerspective.

“I love the entire Fast franchise,” Leterrier added, “but wanted to give this film my own creative stamp rather than just pay homage to the previous ones.”

Windon brought the all-important digital intermediate phase of the production back to Company 3, where he reteamed with colourist Andre Rivas who had graded F9 and served on the franchise as senior colourist Tom Reiser’s assistant colourist on instalments six through eight.

Behind the Scenes: Fast X – adding spin

“Louis wanted to preserve everything that makes a Fast film a success but also to put his own spin on the story,” Rivas told IBC365.

He cited one example: In a scene where Dom (Vin Diesel) is with Isabel (Daniela Melchior), Dom’s face is half lit in red and half in blue light. “It’s almost expressionistic,” Rivas recalls. “I said to Louis, it reminded me of [director Dario Argento’s 1977 psychedelic-looking feature Suspiria] which, it turns out, is one of his favourite films, too. The effect is not strictly realistic, nor does it need to be.”

The idea of colour separation also meant making sure no one hue overly dominated the frame. “The intent was for an overall warm look but without being washed in warmth,” said Rivas, “so that I could always isolate individual elements that gave off a cooler light and make sure they retained that cooler look.”

In a scene set in Rio with rivals Dom and Dante (Jason Momoa) facing off, Rivas retained the existing warmth while ensuring a number of different colourful elements also pop. “Dante has purple as his character colour,” Rivas elaborated. “He’s often dressed in purple, drives a purple car – we needed to ensure that the colour is not contaminated by any of the warmth of the scene overall. If there’s a dominant colour, we also wanted to make sure that an opposite colour is also clearly identifiable.”

Similarly, when Tess (Brie Larson) and Dom meet in a dark bar, the set is lit with warm and cool practicals both playing in the frame. Rivas’ job was to accentuate the colour contrast, so the scene maintains that interplay and is never washed into a single colour.

Like recent entries in the series, Fast X was principally shot on Alexa cameras, augmented with additional footage including from the RED Komodo, which is smaller and lighter for mounting inside vehicles and has a global shutter suited to capturing action scenes without blurring. Several aerial shots were filmed with first-person view (FPV) drones piloted by Johnny Schaer.

Company 3’s internal colour science department created a single VFX colour pipeline, which involved transcoding all the material shot on different cameras into linear EXR files prior to the start of Rivas’s grading. Linear EXR is a common format for VFX-heavy shows and allows the effects vendors and the final colourist to work with material captured on a variety of different cameras in different resolutions and formats, all mapped into a single container.
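
As a rough illustration of what that normalisation step involves (this is not Company 3’s actual colour science, and the decode constants below are invented placeholders), a short Python sketch shows footage from different camera log encodings being mapped into one scene-linear working space, ready to be written into an EXR container:

# Hypothetical sketch: normalise footage from different cameras into one
# scene-linear working space before grading/VFX. Real pipelines use
# OpenColorIO or vendor SDKs; the curve constants here are placeholders.
import numpy as np

def decode_log_to_linear(code_values: np.ndarray, slope: float, offset: float) -> np.ndarray:
    """Toy log-to-linear decode: linear = 10**((code - offset) / slope)."""
    return np.power(10.0, (code_values - offset) / slope)

# Placeholder per-camera decode parameters (NOT real vendor constants).
CAMERA_DECODES = {
    "alexa_log": dict(slope=0.25, offset=0.39),
    "komodo_log": dict(slope=0.22, offset=0.35),
}

def to_linear_plane(frame: np.ndarray, camera: str) -> np.ndarray:
    """Map a camera-encoded frame to float32 scene-linear, ready to be
    written into an EXR container (e.g. with OpenImageIO)."""
    params = CAMERA_DECODES[camera]
    return decode_log_to_linear(frame.astype(np.float32), **params)

if __name__ == "__main__":
    fake_frame = np.random.rand(4, 4).astype(np.float32)  # stand-in for a plate
    linear = to_linear_plane(fake_frame, "alexa_log")
    print(linear.dtype, linear.shape)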

“These shows are shot all over the world by multiple units and multiple cameras,” Rivas explained. “You’ve got one shot cutting to the next which could be taken weeks apart with different weather at the location. So, my first pass is really to balance it all and make sure it’s flowing. I try to get it roughed in as nicely as possible, then I sit down with Louis to get his first reaction.”

The DI and the conform (by Company 3 finishing editor Patrick Clancey) were both performed in Blackmagic DaVinci Resolve.

Rivas explained that his approach always involves starting with a fixed node structure. “I find it to be a good way to keep things neat and organised,” he said. “I’ll have 28 or so nodes set up for the entire project and I will have a specific purpose for each one. I don’t use them all shot to shot. I try to keep my grade as simple as possible and will only add on as needed per the creative discussion. But to know that node one is always going to involve the same kind of operation or node 12 or node 15, it makes it very easy to ripple changes across an entire scene without having to fear you’re going to obliterate some unrelated correction. I know exactly where that particular effect will be in every shot, and it makes it very easy for me to turn a specific change on or off.”
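
For readers who want to picture what a fixed node structure means in practice, here is a minimal conceptual sketch in Python; it models the idea of reserved, same-purpose slots that can be rippled across a scene, and is not DaVinci Resolve’s scripting API:

# Conceptual sketch of a "fixed node structure": every shot carries the same
# ordered node slots, each reserved for one kind of operation, so a change to
# e.g. slot 12 can be rippled across a scene without touching anything else.
# This models the idea only; it is not DaVinci Resolve's scripting API.
from dataclasses import dataclass, field

@dataclass
class Node:
    purpose: str          # e.g. "primary balance", "skin key", "sky window"
    enabled: bool = False
    settings: dict = field(default_factory=dict)

def make_fixed_graph() -> list[Node]:
    purposes = ["primary balance", "contrast", "skin key"] + [
        f"reserved slot {i}" for i in range(4, 29)
    ]
    return [Node(p) for p in purposes]  # 28 slots, same order on every shot

@dataclass
class Shot:
    name: str
    nodes: list[Node] = field(default_factory=make_fixed_graph)

def ripple(shots: list[Shot], slot: int, **settings) -> None:
    """Apply the same change to one known slot on every shot in a scene."""
    for shot in shots:
        node = shot.nodes[slot - 1]   # slots are 1-indexed, like "node 12"
        node.enabled = True
        node.settings.update(settings)

scene = [Shot(f"shot_{i:03d}") for i in range(1, 6)]
ripple(scene, slot=12, saturation=0.9)   # imagine slot 12 always handles saturation
print(scene[0].nodes[11])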

Action sequences on blockbusters like Fast X could have as many as 800 shots in a reel, roughly double the number in a less intensely action-packed film, but Rivas will have the same amount of time to finish it. “That’s one of the big challenges,” he said, “the sheer quantity of material.”

Rivas has found that some of the new AI tools in the most recent iterations of Resolve can help with the time crunch. On Fast X, he applied one of the colour corrector’s AI tools, Magic Mask, to quickly finesse a couple of shots that might otherwise have involved the time-consuming process of drawing multiple windows.

“I didn’t use Magic Mask extensively, nor should anyone have to,” he offered, “but it’s great to have tools available for unique situations. One such instance was when we were talking about how to adjust the grade on a couple of Vin Diesel shots, where we really wanted to be able to treat him and the background in different ways.

“So, I grabbed this Magic Mask tool,” he added. “You can select something like a person in the frame and it cuts out an outline and animates it using AI. It took only minutes to cut him out from the background and then grade the foreground and background separately. That was really useful and gave a bit of wow factor to the clients in the room. It’s something I’d recommend people try out.”
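
The underlying idea of grading foreground and background separately once a matte exists can be sketched in a few lines of Python; the matte here is a hand-made stand-in rather than Magic Mask output, and the grades are arbitrary:

# Illustrative sketch of the same idea outside Resolve: once an AI tool (or a
# roto pass) has produced a per-frame person matte, foreground and background
# can be graded independently and recombined. The grades here are arbitrary.
import numpy as np

def grade_with_matte(frame: np.ndarray, matte: np.ndarray,
                     fg_gain: float = 1.1, bg_gain: float = 0.8) -> np.ndarray:
    """frame: HxWx3 float image in [0,1]; matte: HxW float alpha in [0,1]."""
    alpha = matte[..., None]                       # broadcast over RGB
    foreground = np.clip(frame * fg_gain, 0.0, 1.0)
    background = np.clip(frame * bg_gain, 0.0, 1.0)
    return foreground * alpha + background * (1.0 - alpha)

frame = np.random.rand(270, 480, 3).astype(np.float32)
matte = np.zeros((270, 480), dtype=np.float32)
matte[60:220, 180:320] = 1.0                        # stand-in for an AI person matte
graded = grade_with_matte(frame, matte)
print(graded.shape, graded.min(), graded.max())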

On the encroachment of AI more generally on the colourists’ craft, Rivas said he isn’t fazed. “Any new tech has a bit of edge you can cut yourself on, but I think these AI functions are a tool in our hands and it’s up to us to do what we want with them.”

 


Traditional Media Companies Aren’t “Computable,” and That’s Where AI Actually Poses a Threat

NAB

The single biggest impact of generative AI for large content producers and distributors isn’t about disrupting the media-making process. It’s that it gives their fiercest competitors — content creators on YouTube and TikTok — more tools to eat further into the daily video consumption that the whole media industry is battling for.

article here

According to a fresh report by studio-funded thinktank ETC, “AI and Competitive Advantage in Media,” generative AI “potentially disrupts the already unfortunate economics of the media business: stable demand (never more than 24 hours in a day) and exploding supply.”

In the report, Yves Bergquist, ETC’s resident data scientist and AI expert, argues that what’s happening in the media industry is analogous to what already happened in manufacturing: automation of the craft of making a product (i.e., making the product computable).

By computable, the report means that content is produced in volume and is “machine readable”: every aspect of its creation, distribution and audience feedback is data, and therefore available for dissection.

Traditional media companies are currently not “computable” in the sense that they produce products linearly, one at a time. Their content is scarce, whole, long-form (not conducive to being sliced and diced by an online audience) and unstructured (its narrative DNA is not yet machine-readable).

This is going to have to change if studios and streamers want to be part of the bigger picture in a few years’ time.

ETC divides the creative process into three parts. Bergquist dubs the ideation part, where creatives “sense” what an audience wants to see, “zeitgeist intelligence.”

Then there’s the core of the creative process, where creatives define their voices and make strategic decisions about what product will be crafted.

Finally, the product is made.

AI’s immediate impact is on that final phase. But by automating production, “Generative AI not only puts more emphasis on Zeitgeist-sensing and creative decision-making, it gives creative decision-makers tools to quickly and cheaply tinker, experiment, and prototype.”

At the same time, traditional media companies “risk losing their monopoly on the craft of high-quality content.”

Generative AI empowers social creators to quickly and cheaply craft “studio quality” content, threatening the status of traditional media. They can do this because their knowledge of what the audience wants is crowdsourced by links, likes and recommendation algorithms. The content produced is computable in the sense that it can all be digitally mined. And the scale of content production means there’s enough supply to cater for every audience whim.

But ETC spots a weakness. Social media platforms and content creators reliant on those platforms lack any real understanding of their audience, claims ETC. It is just “basic content match-making”.

Instead, studios, and especially streamers, can strike back against pure AI content generators by using the data they have at their disposal more intelligently.

“Programmatic content distributors like TikTok match content with audiences without any semantic understanding of why this content resonates. It’s just a programmatic marketplace that computes the content de facto.”

With generative AI bringing high production value tools to social creators, we can expect a new category of “short-form linear content” to emerge on social platforms.

Studios, on the other hand, “have the longest experience and the largest dataset available to not only develop an intelligence of their audiences, but to draw them into a deep relationship with their franchises.”

Media organizations, “especially those with a streaming service,” have both the data and a unique capability to understand the cultural zeitgeist. They can use AI to better “know” what audiences want, Bergquist says.

ETC also suggests that it’s the large media organizations that have the financial backbone “to create highly integrated and replicable AI-driven virtual production workflows.”

It contends that traditional media players will need to differentiate through immersive, multi-platform, world-building franchises, a trend they are, of course, already pursuing.

This, says ETC, “is the greatest opportunity for large media organizations to leverage virtual production and generative AI together to quicken and cheapen the cost of producing these multi-format immersive pieces. This new form of computable content will run on game engines.”

In so doing, this “revolutionizes the way stories are told,” with integrated narratives spun across linear and immersive media products.

There are warnings, though.

“Media organizations don’t have a software culture, nor can they support large AI R&D assets. They could partner with (or acquire) key AI research organizations to leverage their data to create their own proprietary content and audience intelligence models, but this is a heavy lift.”

ETC also identifies a need for intuitive “human-ready” and “business-ready” interfaces for AI models, which continues to be the greatest bottleneck for AI in enterprise. Too often, says Bergquist, organizations can’t connect models and business needs.

“Whoever can redesign their organizations and workforce needs to best create a ‘culture’ of AI and data will move faster than its competitors.”

Education, insists ETC, is the largest opportunity in AI today.

While everyone seems to agree AI represents a big financial opportunity to automate some production and post-production workflows, it raises a question: Does taking knowledge of the craft out of creative work affect creative decisions and creative output overall? Or, put another way, does knowing the craft make a creative a better decision-maker? ETC has no answers for this, and perhaps we’ll only find out in time.

More globally, what the media industry needs right now is a distinct and actionable AI vision.


TikTok x Brands x Consumers: What Content Producers Should Know

NAB

TikTok has become the biggest user-generated content creation platform in the world and at the 2023 NAB Show, TikTok’s global head of creative lab Kinney Edwards and Krystle Watler, head of creative agency partnerships in North America for TikTok, took to the stage for a session titled “TikTok x Brands: How to Effectively Engage Consumers.” Their talk was designed to educate producers, brands, agencies, marketers, and anyone interested in really diving into the platform to get the best creative out of it.

article here

They sought to dispel some myths. For example, that the 18+ Gen Z audience is the predominant age group on the platform.

“TikTok is actually a multi-generational platform,” Edwards explained. “We’ve actually seen the strongest growth among 35+ users. Our most impactful rate of growth came during the pandemic and this is probably because we had a lot of cohabitation happening with Gen Zs returning home to their parents and people were looking for something to do to see the outside world.”

Contrary to some opinions, the TikTok algorithm does not favor content from well-known creators versus unknown creators. They cited a user who had just 70 followers until she posted a video of her skincare routine with one of her favorite products. The video went viral, racking up 43 million views, “not because she was popular, but because what she had to say in what she shared really resonated with people in an authentic way.”

The biggest myth, they claimed, however, is that TikTok is a social media platform.

“It is a next-generation entertainment platform,” they said. “This is all because TikTok operates on a content graph and not a social graph. On other social media platforms, it’s really about likes, and who you’re following in terms of delivery of content to you. The reason why TikTok has become the biggest social network in the world is because the algorithm is based on the interest graph, not on the social graph.”

Expanding on this, they said that on Twitter or Facebook you are served content from the people you follow as they change over time, rather than content matched to what you personally are interested in.
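
A toy Python comparison makes the distinction concrete; the scoring is invented purely to contrast follow-based ranking with interest-based ranking and does not reflect TikTok’s actual algorithm:

# Toy contrast between the two ranking ideas described above: a social-graph
# feed scores items by whether you follow the poster, while an interest-graph
# feed scores them by overlap with topics you have engaged with.
followed = {"alice", "bob"}
my_interests = {"skincare", "cooking"}

videos = [
    {"poster": "alice", "topics": {"travel"}},
    {"poster": "stranger", "topics": {"skincare", "makeup"}},
]

def social_graph_score(video) -> int:
    return 1 if video["poster"] in followed else 0

def interest_graph_score(video) -> int:
    return len(video["topics"] & my_interests)

for v in videos:
    print(v["poster"], social_graph_score(v), interest_graph_score(v))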

The myth-busting was followed by some “universal truths” for brands wanting to make the best use of audience engagement on the platform.

Harking back to the viral hit, the chief message is to create content that is authentic to TikTok — meaning ensuring that the content doesn’t just have an authentic voice, but is produced using TikTok tools and formats.

“The visual language of TikTok normally varies by different community [audience],” Watler shared. “But in summary, the most basic tips that we can give you in terms of how to go native is shoot on a mobile device, that’s all you need. You can change the settings so that you can shoot 4K to give you an even clearer camera capture and use native editing tools. Make sure that you’re thinking through how the content is going to interplay with the actual user interface.”

Editing techniques seem important to TikTok creators in order to entertain audiences. Edwards advised, “When you’re thinking about that story you want to tell and you’re thinking about the structure, we also want you to think about how you are transitioning to the different beats.”

And don’t forget the audio. “Every brand has been working on leveraging their sonic IDs. TikTok is a place where you can play around with those sonic IDs and run with it,” Edwards said, citing Microsoft’s Windows chime turned into a song. “There are many ways that you can play with your sonic ID and your brand with sound on the platform,” he added.

“What’s really important with production on TikTok is to let the content be the star. Don’t overthink it. Don’t overdo it. TikTok is about authenticity, it’s about that glance into the imperfect, and the real. So let that drive your production.

“It doesn’t mean that TikTok can’t be beautifully crafted. We see a lot of luxury brands, using high end production technologies to create both on and off platform experiences. It’s up to you in terms of like what you need to actually tell the story.”

Put all that together, and brands might see a 25% increase in attention (they say) with 64% of users on the platform saying that they would be interested in buying a product that they see on TikTok.

Thursday 25 May 2023

Live from Liverpool: disguise debuts digital spikemark to speed Eurovision Song Contest stage workflow

SVG Europe

TV viewers may not have seen them but Eurovision’s backstage team were the real heroes of the song contest that saw Sweden’s Loreen crowned winner (again) in a Grand Final watched by a record viewing audience. Technicians and stage crew were in charge of 23,700 light sources, 482 costumes, 150 microphones, 100 wigs, 3,000 makeup brushes – and scene changes between acts of just 50 seconds, enabled by a custom-designed digital spikemark feature from disguise which debuted at the Song Contest.

article here

Split-second timing and precision positioning were the key to the fluid and swift changes of instruments and props for multiple artists in each of the 26 country performances, plus interval acts, in the Grand Final. In the week prior to the final, there were another two live broadcasts of the semi-finals and a further three dress rehearsals before a capacity 14,000 audience at the M&S Bank Arena, Liverpool.

While Eurovision has yet to release official figures, it has tweeted that the 2023 final was the most watched ever. It’s probable that this edition will smash previous records; estimates suggest that 160 million people worldwide tune in for some part of the three live broadcasts and that 80-90 million watched the final in 2022.

disguise spent three months developing a digital spikemark solution for the event that illuminated the precise marking for every change of set, instruments and artist on the LED stage floor.

“With dozens of acts with multiple artists to be precisely positioned on stage, the floor would not only be a mess of tape, the potential for mistakes huge, and the fluidity of production would be challenged,” explains Peter Kirkup, Solutions Director, disguise. “Each act in Eurovision has big ambitious set pieces. Almost every single artist has some sort of set piece on the stage and yet still need to hit these incredibly tight turnaround times taking kit off and setting up the next performance.”

He continues, “In order to do that they have to hit very precise marks because sometimes they’ve got a camera shot that’s lined up to coincide with a lighting effect or some pyrotechnics. The performers need to stand on specific mark points.

“If you were to do that for a traditional show, you would use tape on the floor. But at the level of production demanded of Eurovision and because of its scale we wrote software that had to be non-destructive and need minimal interference from the main production crew.”

Kirkup explains that there have been solutions for this at previous Eurovisions using external systems. “What we’ve done is basically moved all of that capability into the disguise services so that they can actually drive it as part of the show and link it into the rest of the show in a much more integral way. Previously we’ve never had this natively in the server.”

All the graphical rendering happens from within the disguise server controlled via iPad. The iPad displays the location of all stage cues per artist with the ability to manipulate, scale, zoom and control them on-the-fly as creative and production needs dictate.

The graphical annotations, which were rendered within disguise hardware and controlled wirelessly via an iPad, were created in the venue by the stage management team based on plans submitted by the delegations. The software was also used by individual artist teams pre-show to stage manage their performances. In all, 1,250 stage marks were used throughout the show with the help of the digital annotation feature.

“Immediately stage crew doing the scene change can see, for example, a big yellow box illuminated on the LED floor and they’re carrying something that’s the same size as the big yellow box and they put it down in the big yellow box. It makes transferring the items onto the stage really, really, really fast because they don’t have to search for a tiny little tape mark on the stage. It’s just really obvious as they walk 20 metres across the stage, they know where they’re going.”
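
A hypothetical data model hints at how such marks might be organised per act so a whole changeover layout can be brought up at once; this sketch is an illustration only, not disguise’s software:

# Hypothetical data model for digital spikemarks: each mark is a shape drawn
# on the LED floor, grouped per act so a stage manager can bring up an entire
# changeover layout at once. Illustration only, not disguise's code.
from dataclasses import dataclass

@dataclass
class SpikeMark:
    act: str            # e.g. "Sweden"
    label: str          # e.g. "piano", "T-mark vocalist"
    x_m: float          # centre position on the floor, in metres
    y_m: float
    width_m: float
    height_m: float
    colour: str = "yellow"

MARKS = [
    SpikeMark("Sweden", "piano", 4.0, 2.5, 2.2, 1.4, "yellow"),
    SpikeMark("Sweden", "T-mark vocalist", 0.0, 1.0, 0.4, 0.4, "white"),
    SpikeMark("Finland", "riser", -3.0, 2.0, 3.0, 2.0, "green"),
]

def marks_for_act(act: str) -> list[SpikeMark]:
    """Everything the crew needs lit on the floor for one changeover."""
    return [m for m in MARKS if m.act == act]

for mark in marks_for_act("Sweden"):
    print(f"{mark.label}: {mark.width_m}x{mark.height_m} m at ({mark.x_m}, {mark.y_m}) in {mark.colour}")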

A team of five programmers, one stage projectionist and two system engineers were responsible for delivering the entire experience, all led by Head of Video, Chris Saunders at Ogle Hog.

“One of the early considerations and really important parts of this was that Eurovision didn’t want this to be something that the operators of the video of the show had to care about. Their job is making all the video work for the show, working with the creative companies. When you have something on the scale and ambition of Eurovision the workflow in terms of stage and performance needs constantly updating. You’ll get halfway through a rehearsal and somebody will decide that actually a piece of scenery needs to move a little bit to fit better with the storytelling of the song.

“If they were to work in a traditional workflow, all of that work would have been put back onto the disguise operators who are managing and editing the video for the event. Making all these scene changes in a conventional manner would not be acceptable because it would have been too disruptive to the main video part of their job.

“Instead, stage management can load up different marks on the stage display in a drag and drop way. They can scale them, move them around. Zoom in, zoom out, work on it just like working in Photoshop or After Effects all on iPad. All of this abstracted from what the video team are doing but it’s all running through the same system.”

Eurovision 2023 stage designer, Julio Himede said, “We used to mark the floor up with tape, so it got pretty messy. Now it’s literally just an operator with an iPad pushing a button. The artist even gets a little ‘T’ mark on stage to show them where to stand, and then we know the spotlight will hit them exactly.”

“The speed is the hardest part, because everybody’s requests were pretty far out there but we’re here to make their dreams come true,” added event Lighting Director, Tim Routledge.

This wasn’t the only piece of disguise technology involved in the show. The stage featured LED screens (supplied by Creative Technology) on the wall, floor, sides and ceiling, each capable of mapping different animations and video and all managed, controlled, synced and played back through six disguise vx 4 (director) and four gx 3 (editor) media servers (all supplied by QED Productions) with storage expansions to increase the drive sizes to handle the volume of content.

“The complexity of catering for more than 37 different acts and several interval acts all with bespoke video content arranged in a multitude of configurations, means the ESC is one of the biggest live events disguise is used for and among the most technically demanding. Each of the artists makes extensive use of the editing workflow within our workstations to design, layer and process their video performance.”

The visuals for the entire show are driven by timecode which is synced within the disguise servers with other production elements.

Kirkup reveals, “There is an extraordinary amount of coordination that goes into Eurovision because the time on the stage per performance is absolutely minimal compared to the amount of time you need to prep it. Imagine something like a mega spreadsheet where everything about Eurovision is stored which is slightly scary because each country will be turning over new versions of the content as artists finesse their performance. So making sure that the servers are up to date with the version that’s been delivered is one challenge. It’s a huge logistical challenge, getting it all pulled together.”

disguise has yet to decide whether the new annotation feature will be made publicly available, though Kirkup says there’s a possibility they might release it as an open source project. It could also be adapted for virtual production scenarios.

“The first application is Eurovision but everyone’s kind of gone into this knowing that this is quite a useful thing that could be applied to other markets as well. In virtual production, the common request is for light cards for lighting a chunk of the LED in a flat colour, just to use it as a light source for the scene outside of what the camera would see. You can do this today in disguise, but it’s all work for one operator, whereas actually in this workflow you want to enable a separate person on the lighting team to operate it because they are the ones needing to constantly adjust for reflections and scene illumination.

“It’s not just about plugging in some features and delivering it. There really is a need for that holistic understanding of how a show comes together.”

Speaking their language: Atlas anamorphics deliver for Phoenix Rise

copy written for VMI

TV dramas set in high schools targeting young teen audiences are often a risk-free zone, but the producers of Phoenix Rise weren’t going to dumb down their approach.

article here

Commissioned for iPlayer by BBC Studios, Phoenix Rise is set in a high school in Coventry and revolves around a diverse group of teenagers. Debuting in March on iPlayer and then BBC Three, the series is already a hit, with a second series of 8 x 30’ airing later in 2023 and a third and fourth series also in the works.

“There’s a stigma that some kids’ shows can have which is about tending to have the same quite high key look and not affording that many creative opportunities,” says Claudio Cadman, the show’s director of photography (Cunk On Earth). “Right from the start, executive producer Ali Bryer Carron and director Claire Tailyour were taking a very interesting approach where they wanted Phoenix Rise to feel different to any other kids’ show.” 

Cadman explains that the brief was to make it feel like an indie British feature film, more Mike Leigh-style kitchen sink drama than M.I. High.

Their references included Sarah Gavron’s 2020 crowd-pleasing and gritty Rocks, about teenagers in London, and the American indie features Juno and Me and Earl and the Dying Girl.

Given those aesthetic references, Tailyour and Cadman felt strongly that the show should be shot anamorphic, but they had to make the case to BBC executives.

“Everyone was supportive of everything we did down to having long conversations about anamorphic lenses,” Cadman reports. “I wrote papers discussing what anamorphics were and why they were good for this show and why they wouldn’t slow us down. We were making something that looked completely different to what CBBC was used to.” 

There was also the practicality of working with inexperienced young actors to consider. “You don’t want to tell them that this particular lens only works if you’re not exactly positioned 6ft away. We wanted the flexibility of setting up shots quickly.”  

Cadman turned to VMI to set up a series of lens tests, which in turn dictated the choice of camera. 

“Principally the lens tests were to show the execs the difference between spherical and anamorphic. It’s not something you can write down in a paper. We cropped clips shot on spherical and anamorphic to 16×9 and 2.35 aspect ratios to show them what the actual feel of it was.”

Cadman selected a series of Orion Anamorphic Primes from Atlas and paired them with a Sony Venice, a camera he was familiar with from operating on family drama Dodger.   

“The Orions are affordable, you can shoot at any exposure on them. Stop-wise they don’t fall apart, as happens with some vintage anamorphics if you shoot wide open, and they afford good close focus. All in all they are just pretty bomb-proof regardless of the situation.”

He shot both series one and two back to back over eight months in 2022. “We wanted a handheld feel of constant movement to follow the kids moving from class to class along corridors and into and out of buildings. In those long school hallways I had to do broad stroke lighting and put fixtures in where we could but it was really important not to over light. It had to feel naturalistic and real.” 

The show was post-produced in a 4K finish at Dock10 with Jamie Parry in charge of the grade. Cadman says, “Jamie helped design the LUT for the Venice and we had complete freedom in the grade to go where we wanted to go with it.”

Cadman has been a regular client at VMI, from prepping equipment as a focus puller and then returning when he progressed to shooting shorts.  

“They have always supported me and given me discounts on kit. It’s a lot more reassuring going to a place like VMI so you’re not searching for things. They will always have spares of what you need in stock or be able to suggest a great solution. With some other rental houses you have to find alternatives because they don’t have such a good selection. When you’re working on long jobs like this it is inevitable that at some point you’ll have kit needing repair so knowing you can rely on getting a quick replacement just takes all that weight off your mind.” 

Phoenix Rise’s third and fourth series are planned to shoot over six months starting this summer, with Cadman back as DoP.

 

Commercialising automated mobility

TechInformed

Cabless HGVs, driverless buses and autonomous taxis are among several UK trials claiming to form the most advanced set of commercial self-driving operations in the world. 

article here 

While there have been many proof-of-concepts around self-driving vehicles, there’s been less reported on how these systems might be used by industry as part of a commercial operation, to speed up production processes and/or reduce carbon emissions.

To explore this aspect more, last year the UK Government, through its Centre for Connected and Autonomous Vehicles (CCAV) body, launched a Commercialising Connected and Automated Mobility (CAM) competition.

Match funding from industry raised the public grant to around £81 million and participating projects are expected to demonstrate a sustainable commercial service by 2025.

An additional £600K has also been allocated for feasibility studies to look at potential routes where automated vehicles might operate exclusively.

Through the fund, CCAV – which was formed in 2015 to work with industry and academia – hopes to stimulate growth and has predicted that connected and automated vehicle (CAV) technology could create nearly 40,000 skilled jobs by 2035.

This is predicated on forecasts of 40% of new UK car sales having self-driving capabilities, with a total market for connected and automated mobility worth £41.7bn to the UK.

The government claims that self-driving zero-emission electric or hydrogen powered vehicles could also revolutionise cargo transport by reducing the cost of drivers, eliminating driver shortage and making routes predictable and consistent for ‘just-in-time’ processes (the logistics lingo for when goods are received from suppliers only as they are needed).

Here, TechInformed looks at three beneficiaries of the fund, which typically comprise partnerships between academia and several commercial companies with common interests and complementary tech specialisms.

Some of these firms are quite far down the line with earlier self-driving projects and the leaders of each project were clear about their business goals as well as the challenges they faced.

Hub2Hub

As well as delivering efficiencies in the logistics sector, it is hoped that autonomous HGVs could speed up the decarbonisation of heavy vehicles and reduce the substantial carbon footprint of road haulage.

The Hub2Hub Consortium Project aims to do just this. Led by retailer Asda, Glasgow’s Hydrogen Vehicle Systems (HVS) and automated driving systems provider Fusion Processing, the project was the recipient of over £13m in matched funding.

“We are engineering the world’s first autonomous hydrogen-electric powered HGV to demonstrate hub-to-hub logistics, to elevate public perception, showcasing the potential autonomy we can deliver thanks to increased safety and fuel savings, and develop new business models,” says HVS CEO Jawad Khursheed.

The autonomous software for the trucks will be provided by Bristol’s Fusion Processing, whose CAVStar system combines vision systems, AI and route planning.

The platform allows full autonomy to be swapped out with onboard or teleoperated driver control at predetermined points along a route, with the HGV self-driving between hubs and human drivers taking over when the vehicle nears its destination.

“Our market analysis indicates that the commercial vehicle segments such as haulage are where we will see autonomous vehicle technology first used in large scale deployments,” adds Fusion Processing CEO Jim Hutchinson.

Asda senior fleet manager Sean Clifton adds that the introduction of these low-carbon vehicles would have a big impact on the retail giant’s carbon footprint.

“We will continue to work with like-minded partners on projects such as this to reduce our impact on the environment,” he says.

V-CAL

V-CAL, led by the North East Automotive Alliance (NEAA), is part of an existing project to maintain business competitiveness in the region and, in particular, for car maker Nissan which employs 70,000 people directly and indirectly in the local area.

The project will run zero-emission, battery-powered autonomous HGVs on private roads owned by Nissan Sunderland.

“The project builds on an initial connected and automated logistics (CAL) proof of concept and is part of a long-term logistics plan to retain competitiveness for Nissan,” explains NEAA chief exec Paul Butler.

“For the North East more widely, we are looking to establish an innovation centre for CAL to attract more companies into the region,” he adds.

The work, in partnership with Nissan, Newcastle University, logistics firm Vantec, autonomy provider StreetDrone, network provider Nokia, smart city network protector ANGOKA, and law firm Womble Bond Dickinson, has been awarded £4 million by government, matched by industry to a total £8m.

On one route, between the car plant and its main warehouse operated by Vantec, 40-tonne ‘freight tractors’ will operate without any personnel on board but will be monitored by a remote safety driver as backup – which, in terms of driving automation, is classified as SAE Level 4.

According to Butler there are already more than 300 indoor and outdoor automated guided vehicles (AGVs) at the plant.

He says, “Critical success factors are to match the current provision which Vantec has in terms of flexibility, to integrate with Nissan’s existing AGVs, to have one remote driver supervise three autonomous HGVs and for a commercially viable system that gives Vantec an alternative solution for logistics.”

A potential challenge for Vantec, Butler adds, is a shortage of drivers: “With a ‘just-in-time’ operation this becomes hugely problematic if parts aren’t delivered in time,” he says.

The private road between warehouse and plant is 3km long and contains few hurdles, so autonomous vehicles travelling at 10mph (with a top speed of 25mph) should easily be able to navigate it.

Having trialled one vehicle for the PoC, the next step is to scale up to two by end of 2023 then three HGVs.

Newcastle University is leading the remote driver supervision including assessing attention, response and eye gaze in a teleoperation rig.

Drivers employed at Vantec are being retrained for the role and are far from redundant in this driverless project. “The drivers are a key part of the process and are providing a lot of good insight into how teleoperation should be set up and the types of feedback they require,” Butler maintains.

The scheme will use a private 5G network, and StreetDrone’s head of product and partnerships Ross James explains that avoiding downtime is the key issue.

“So, the vehicles have redundant modem kits and other failsafe mechanisms which means that if one part of the network goes down, we have that assurance that we can remote operate that vehicle,” he says.

A fourth self-driving HGV is planned to transport finished vehicles from Nissan to its pre-sale depot over a route which is private but more complex – involving security gates, roundabouts, bridges and more site traffic.

According to Butler, a fully autonomous system will be deployed in this scenario where delivery is not production critical.

“The next natural step is to automate delivery from near side suppliers of which there are about 12 within two miles of Nissan and five of those have private roads. That is the scale of the opportunity for us, not just Vantec or the finished vehicles, but to automate on further private road settings,” he says.

Achieving this will be a major step towards deploying the technology on public roads, and this is where Womble Bond Dickinson will play a key role in advising V-CAL on legislation in this area.

Another local project to receive CCAV funding in the area is the Sunderland Advanced Mobility Shuttle which will increase connectivity on public roads for passengers between rail and bus depots to the city’s Royal Hospital and University Campus.

MACAM

Another CCAV recipient of £15.2m in funding is the Multi-Area Connected Automated Mobility (MACAM) project, which is developing driverless shuttles in the West Midlands.

MACAM will establish passenger and logistics routes between Coventry railway station and the city’s university campus and another between Birmingham International Station, Birmingham Business Park and the NEC.

Both will be run by a single operations centre developed by a regional consortium and led by driverless vehicle company Conigital.

“These will be fixed routes operating a dynamic schedule depending on demand,” explains Tom Robinson, CTO, Conigital.

“This will be part of an integrated transport solution to reduce traffic and emissions and reduce dependency on professional drivers.

“Our system also provides information about other transport services. The intent is you could do an end-to-end journey and our system will tell you what you need to do for different transport loads and to book aspects of those journeys in an integrated manner.

“It’s a very distinct shift towards mobility-as-a-service that leverages self-driving vehicles.”

According to Robinson, there’s a 10% driver shortfall in the region which is impacting existing private fleet operators “significantly” in terms of operational performance. Operators also pay around £2,500 every time they recruit a new driver.

The scheme also looks forward to the arrival of the HS2 train line from London (due between 2035-2040) when an integrated self-driving solution will alleviate the anticipated 76% growth in private car use. That’s on top of the existing 14,300 daily trips made around Birmingham Business Park and the NEC.

MACAM will utilise 13 vehicles of two types: a 5/7-seater and a larger 15/16-seater fitted with Conigital’s automated driving software.

The Remote Monitoring and Tele-Operation (RMTO) service, run by Transport for West Midlands, will monitor the vehicles using 5G technology and take control if necessary.

“The RMTO provides the pathway to Level 4 self-driving capability [self-driving in defined operating design domains without human supervision being required] and will eventually remove the need for an onboard supervisor entirely,” Robinson explains.

This will be tested in three phases, beginning next year, culminating in an extended trial across both locations, transitioning from having engineer supervisors on board to eventually operating without a driver in the vehicle.

“We are strongly targeting a 5G sliced public network which means we have the assurance and performance of the 5G. You don’t want the uncertainty of high latency. With a less than 100ms round trip you can safely monitor and control remote driving,” says Robinson.
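
To illustrate why that latency budget matters, here is a toy Python watchdog that only permits remote driving while the measured round trip stays under 100ms; the thresholds and stand-in link are illustrative, not the project’s actual implementation:

# Toy watchdog for the latency budget quoted above: remote control is only
# allowed while the measured round-trip time stays under ~100 ms, otherwise
# the vehicle falls back to a local safe state. Illustrative only.
import time

LATENCY_BUDGET_S = 0.100   # 100 ms round trip

def measure_round_trip(send_ping, receive_pong) -> float:
    """Time a ping/pong over whatever link the teleoperation uses."""
    start = time.monotonic()
    send_ping()
    receive_pong()
    return time.monotonic() - start

def supervise(send_ping, receive_pong, trigger_safe_stop) -> bool:
    """Return True if remote driving may continue this cycle."""
    rtt = measure_round_trip(send_ping, receive_pong)
    if rtt > LATENCY_BUDGET_S:
        trigger_safe_stop()
        return False
    return True

# Stand-in link with roughly 20 ms each way, for demonstration only.
ok = supervise(lambda: time.sleep(0.02), lambda: time.sleep(0.02),
               lambda: print("latency exceeded budget: commanding safe stop"))
print("remote driving permitted:", ok)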

Other partners on the project include Direct Line Group, the University of Warwick, Coventry University, dRisk and IPG Automotive.

“We know there’s a commercial business case for operators,” says Robinson. “The extra cost of putting technology in the vehicles is offset by reducing human operator cost. We will have a 5-1 monitoring relationship which we know we can achieve, based on work we’ve already done.

“Following this trial, we hope to expand from these two locations and continue services beyond the project on a fully commercial basis,” he adds.

Coventry City Council has said the technology would be transferable to its battery powered ‘very light’ railway scheme in development.

Conigital, autonomous driving safety firm dRISK and engineering firm IPG Automotive are also involved with Stagecoach in a scheme to pilot 13 on-demand self-driving taxis within parts of Cambridge University’s campus, including the Biomedical facility.

“The aim is to provide additional passenger carrying capacity to connect two Park and Rides with the Biomed campus,” explains Robinson.

“The Cambridge Connector and MACAM might seem superficially similar, but the operational design domain is different. Although the Cambridge Connector operates within defined areas, you need to plan for all sorts of public interactions ranging from students going to school, to passengers at bus stops, ambulances, wheelchair and walking frame users.”

In its drive to advance autonomous driving systems in industry and smart cities, CCAV has announced a second funding round which launches this week (25 May). UK-registered organisations can apply for a share of up to £900,000 for feasibility studies into the use of connected and automated mobility as a mass transit solution.

Other CCAV funded projects include the UK’s first self-driving bus service from Ferrytoll Park & Ride in Fife to Edinburgh Park interchange, which we reported on last week.

Digital Worldbuilding: The Next Data-Driven Experiences

NAB

Virtual and augmented reality will eventually merge the metaverse with our everyday surroundings, but we can get a glimpse of how that might look in emergent location-based experiences.

article here 

A number of them are popping up, notably in Las Vegas, where the mother of all venues, the MSG Sphere, is gearing up for an autumn launch.

It’s fair to say that mixing the 3D internet populated by avatars with tactile, IRL-populated experiences is both experiential and experimental.

We are in the foothills of what is possible and getting the mix right involves considerable trial and error.

“Probably one of the largest drivers of the technologies that we’re working with was COVID,” MetaCities co-founder Chris Crescitelli explained at the 2023 NAB Show. “The pandemic definitely accelerated the timeline.”

The NAB Show session, “The Future of Data-Driven Hybrid Experiences: Bridging Digital and Physical Realms,” explored how digital and physical experiences can be combined to create engaging and immersive storytelling experiences using mixed reality technologies.

MetaCities is a startup focusing on recreating real-world locations in the metaverse, and then providing experiences in those virtual spaces that include musical performances, avatars and holograms.

It’s producing more than 50 virtual events a year, selling tickets and advertising/sponsorship alike on behalf of its clients, which include Las Vegas-based StarBase.

StarBase is an 8,000-square-foot live and virtual entertainment event space that is among the first to use holoportation technology to operate as a hybrid real-world and metaverse event venue. StarBase has a “digital twin” built by MetaCities using Microsoft’s AltspaceVR to replicate the physical venue as an interactive one online.

“MetaCities teamed up with StarBase to create what we think is the country’s first digital twin hybrid live venue installation,” Crescitelli said.

He described the core building blocks of the shows as using “proto-holograms in the real world.

“If you haven’t seen or heard of proto-hologram, it’s a seven by four-foot display in which people stand and their video is transmitted and displayed as a proto hologram anywhere else. Traditionally people beam from one proto hologram into the other. We’re using it for the live avatar projection as well,” he said.

“The second element is a robust metaverse platform, and the third element is the live tech in the building from camera installations and projection to sensors in the right places for the live audience to see each other. Technicians glue all that together.”

The success of these entertainments relies not just on technology but on utilizing the personal data of guests. Clearly there are issues of privacy, but if those can be overcome then it is possible to curate experiences which are shared and personalized at the same time.

Melissa Desrameaux, venue director at StarBase, explained, “The way our space is laid out there are a lot of different rooms that guests can freely flow into. We encourage them to create micro experiences throughout the event. Not everyone has to have the same experience at the same time. A lot of times they’re asking for just really experiential ways to bring food and beverage to life and we’ll have fun creating different stations, building props for them.”

The ability to track guest eye movement would benefit the experience, but most people are not yet comfortable giving permission for this.

“Everybody in those upper executive offices would love to have all that eye tracking information. And it’s so logical to do that. But it definitely borders on privacy issues that some people don’t want to cross. But the tech is all there for sure. In a lot of virtual worlds there are headsets that are enabled with eye tracking and those will be more prevalent in the newer models.”

It is early days of course, but there is potential for content owners to license their IP to appear in these virtual worlds and as avatars in the real world. Disney might seem the logical first mover, though the panel thought owners with more flexible, arguably less well-known IP — such as Netflix’s Stranger Things — might be more suitable for exploitation.

“In the same way that you have SoundExchange for music (a collective rights management organization that collects and distributes digital performance royalties for sound recording) eventually we’ll have a similar exchange for images. So you’re using Mickey’s pants in this and Disney’s gonna take a piece of your sales…”

Tuesday 23 May 2023

Creators Prepare to Fight for and Flee TikTok as Bans Take Effect

Streaming Media

Although governments around the world have restricted or outlawed the use of TikTok, the state of Montana last week became one of the first jurisdictions to extend the ban to consumer users as well. Other states may well follow, and while there are challenges to the proposed law, creators reliant on the platform would do well to seek out other options should the noose tighten in the run-up to the 2024 presidential election.

article here

“Naturally, content creators across the country who make their living on TikTok are watching to see how this ban plays out,” says Ben Trevathan, President & CFO of cloud-based video platform FanHero. “Many creators are exploring alternative streaming platforms to hedge their risk. Obviously, they want to be able to continue to monetize and maintain a steady income. We encourage them to explore all of their options to understand which platforms truly have the capabilities and monetization systems in place to support them.” 

With north of one billion monthly users and annual revenue in excess of $10bn, plus a reach which touches virtually every aspect of business and culture, TikTok is arguably the most influential social video app on the planet. 

The bill targets app stores such as those run by Google and Apple, prohibiting them from hosting TikTok.  Penalties include fines of $10,000 per violation per day, where a single violation is defined as “each time that a user accesses TikTok, is offered the ability to access TikTok, or is offered the ability to download TikTok.” 

The Montana state-level ban is the latest in a series of restrictions to the app applied by administrations concerned about back door data breaches to Beijing. 

Montana’s governor, Greg Gianforte, said the move was “to protect Montanans’ personal and private data from being harvested by the Chinese Communist party”. 

These concerns are not without foundation. Owner ByteDance sacked four workers and apologised for accessing personal data of journalists who worked for the Financial Times and BuzzFeed.

It has raised the question of what would happen should TikTok be forced out of business in the West.

Guillermo Dvorak, Head of Digital at media and behavioural planning agency Total Media, thinks the biggest winner from a potential TikTok demise would be YouTube, reasoning that it would be able to gain back some of its lost market share. The biggest losers, on the other hand, will be creators.

“If TikTok were to either haemorrhage users or be outright banned, the impact on the creator community would be devastating. For many, TikTok was an opportunity to find their voice for the first time. If TikTok were to disappear, or if it were forced to change its format so drastically that it became unrecognizable from its current incarnation, it would be a huge loss for creators. They may have to seek new platforms to distribute their content on if they don’t want to lose out on all that sweet, sweet ad revenue.” 

We are not at this stage yet. The Montana ban comes into effect on January 1 2024 and there are already a number of lawsuits challenging its legality.

A group of Montana-based creators are suing the state arguing that “Montana can no more ban its residents from viewing or posting to TikTok than it could ban the Wall Street Journal because of who owns it or the ideas it publishes.” 

The American Civil Liberties Union accused the state of having “trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment.” 

NetChoice, an industry trade group that counts TikTok as a member, said the bill “ignores the U.S. Constitution.” 

“The government may not block our ability to access constitutionally protected speech – whether it is in a newspaper, on a website or via an app,” said Carl Szabo, NetChoice’s general counsel. 

Olexandr Kyrychenko, a partner at the IMD Corporate law firm, told The Guardian that no TikTok users would be fined because the bill targets companies rather than individuals. Yet he thinks the “eyewatering” $10k a day fine would encourage companies to comply. 

“It would certainly be a costly gamble to keep download options available once the bill comes into force and app stores would be well advised to comply,” he said. 

Then there are the logistics of enforcing a ban on app stores, which operate nationally, not state by state.

TechNet, a trade organization that counts those companies as members, told Montana lawmakers at a hearing in March that “app stores do not have the ability to geofence on a state-by-state basis. It would thus be impossible for our members to prevent the app from being downloaded specifically in the state of Montana.” 

Change is inevitable, especially in tech, and perhaps TikTok has had its Icarus moment. Perhaps creators reliant on TikTok should be looking to diversify to other platforms.

“The key factors to future growth will come from platforms that are distribution channel agnostic, are economically attractive and deliver data-backed insights to creators,” reckons Trevathan. “Disruption in an established platform is painful, but it presents the opportunity to review the full range of available alternatives.”

His advice to creators is to research the options to understand how much money is being captured by the platform versus going into the creator’s pocket.

Naturally he points to FanHero which “offers the exact same streaming capabilities of platforms like YouTube but with better monetization and customization to support smaller creators or niche businesses." 

While the drive for innovation may remain, the absence of TikTok would create a void that other platforms would vie to fill. 

“A ban on TikTok could bring about the birth of a new generation of short-form video platforms, ones that learn from TikTok’s success and bring something fresh to the table,” Gilbert told IBC. “Regardless, a ban on TikTok in the US would be a game-changer for the future of short-form videos and could have ripple effects throughout the tech industry.” 

 

Monday 22 May 2023

Henry Braham BSC / Guardians of the Galaxy Vol. 3

British Cinematographer

Crazy space opera

Cinematographer Henry Braham BSC dissects Guardians of the Galaxy Vol. 3, including the film’s mocap magic and keeping the characters mobile.

article here 

Back for a third, and possibly final act, Marvel’s beloved motley crew assemble for another candy-coloured galactic romp that carries a darker storyline than fans of the previous chapters may expect. In Guardians of the Galaxy Vol. 3, the group unite to prevent their genetically enhanced friend Rocket from being exterminated by a Dr Frankenstein-type villain out to accelerate evolution with animal experiments.

With a cast including Chris Pratt, Zoe Saldana, Dave Bautista and Karen Gillan, principal photography began in November 2021 at Trilith Studios (formerly Pinewood) in Atlanta, Georgia, under the working title Hot Christmas. Director James Gunn recalled Henry Braham BSC to resume the work they began on Vol. 2.

“Guardians is a crazy space opera but it is still fundamentally about the human condition,” the cinematographer says. “The space element is the spectacle, the wit and the fun but the core of the film is about family and especially about dysfunction.”

The combination of action-adventure with a deeply personal human story was the starting point for discussions with Gunn about visuals. For Vol. 3 they selected references from Southeast Asian cinema, such as Park Chan-wook’s Oldboy.

“Because we wanted to make this a human story, we have to ensure we tell each character’s story very clearly, and particularly in the action sequences.”

This principle informed Braham’s intent to make the camera present in every scene. “There are different ways of doing this,” he explains. “You can shoot on a long lens but by definition you are observing the scene. Or you can put the camera amongst the scene as we did here. If the camera is very present in the scene it makes a massive difference to the way performances are photographed and to the intensity of performances.”

While the argumentative and teasing interplay between the core characters is enriched by knowledge of events in previous chapters, the emotional heart of Vol. 3 is a backstory involving Rocket’s friendship with a trio of CG animals, an otter, a rabbit and a walrus.

Braham explains that these scenes began life with actors performing on a mocap stage. The performance of Rocket is by Sean Gunn with Bradley Cooper’s vocal performance added later.

“You’d shot list the scene but until you work with the actors in mocap you don’t understand exactly where the camera should go. Sean’s performance is remarkable and is baked into the animation. He assumes the physicality of Rocket which is very helpful to me in understanding how the camera needs to move in relationship to what Rocket is going to do.”

The tone and pace of the scenes involving the CG animals are taken from the mocap performances and from the way the physical set has been lit in the background plates.

“With Lylla the otter we know what size she is and James will have a view about where she will be in relation to everything and we’ll line that up with a proxy.”

Mobile mentality

As in previous collaborations with Gunn, Braham is keen to support his director’s desire to keep the characters mobile. That requires a mobile camera and, given the scale of the sets for Guardians, it was not something they felt would work inside an LED volume.

“We looked at virtual production (VP) but for us the principal thing is the size of the stage is very small in comparison to shooting on giant sets. The Guardians’ spaceship is four storeys high, for example, and there’s a huge set build for the Knowhere town. Since James is interested in having mobile characters you need a lot of space for that movement. In addition, at the point when we were planning the movie, the VP technology didn’t handle fast pans so well. There would be a lag between camera and virtual background.”

Ninety percent of Guardians Vol. 3 is shot stabilised handheld with the main storytelling camera operated by Braham. He feels this is the only way to achieve the accuracy and positioning of a camera that is close to the actors.

“Normal ways of mounting the camera are very intrusive,” he says. “However brilliant the people are operating [a rig, dolly or crane] tracking actors who have freedom to move within a scene will always be delayed by the physics of momentum. Therefore, you become very aware of the camera and it’s why we traditionally tend to back off a bit on longer lenses. When you go up front and personal and the camera is connected to the actors and their eyelines you have to be much more precise about how the camera moves.”

Notably, Guardians of the Galaxy Vol. 2 was the first feature to be shot at 8K resolution, on the RED DRAGON VV sensor inside the WEAPON camera. The filmmakers retain that spec, shooting 8K VV, but this time switch to RED V-RAPTORs paired with the Leitz M 0.8 line of lenses (21mm to 90mm). Braham puts the decision in context by explaining why he shot Vol. 2 with RED.

“At the time, Marvel were keen to shoot more of their movies on digital large-format cameras like Alexa 65 which does produce stunning pictures, but for me the camera was too big and that was contradictory to what James wanted to do. Jarred Land (RED president) showed me a prototype of what was to become the 8K VistaVision camera the size of a Hasselblad. RED was great at welcoming our feedback to develop it to shoot Vol. 2.

“When it came to Vol. 3 there had been an evolution in technology. James always wants the tools to achieve his vision but also to improve on it, so RAPTOR was the natural choice.”

In the interim, Braham had lensed Gunn’s The Suicide Squad (2021) using an array of RANGER MONSTRO 8K VV and WEAPON 8K VV cameras, and shot The Flash, also part of the DC Extended Universe, for director Andy Muschietti on MONSTRO.

“The V-RAPTOR has subtle differences over MONSTRO,” Braham says, “with one of the main ones being the capability to shoot a medium format negative at 120 frames. James likes to shoot high speed to have the option of speed changes or adjustments, or as an aid to later manipulate the image, so V-RAPTOR was a significant step forward in that regard.”

He continues, “All digital cine cameras have spectacular picture quality. Quite frankly so does your iPhone. It doesn’t matter to me if it’s 4K, 10K or 20K. What is interesting is the physicality of the camera and the geometry of the lens in relation to the image size. That matters if you are interested in putting the camera close and wide to somebody’s face. It’s why the geometry of the lenses to the VistaVision negative area is very favourable. It means the focus drops off very beautifully in the background, whereas if you were shooting on a medium or wide lens on 35mm the background would still be sharp.

“The team at RED understood the importance of that. In building the RED VV camera they took existing lenses that cover the medium format negative and asked ‘what is the optimum area of the lens we can use?’ and then they designed the camera to that. That’s a very different way of thinking and why it has a very different photographic effect. Most of Guardians Vol. 3 was shot on wide lenses very close to people’s faces, which in more traditional formats you just can’t do.”
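To put the format geometry Braham describes in rough numbers, here is a back-of-the-envelope sketch using the standard near-field depth-of-field approximation; the crop factor of about 1.6 between Super 35 and a VistaVision-sized negative is a nominal assumed figure, not one quoted in the interview.

\[
\mathrm{DoF} \approx \frac{2\,N\,c\,u^{2}}{f^{2}}
\]

where \(N\) is the f-number, \(c\) the acceptable circle of confusion, \(u\) the subject distance and \(f\) the focal length. Matching the same field of view on a format that is \(k\) times larger linearly means \(f \to k f\) and \(c \to k c\), so

\[
\mathrm{DoF}_{\text{large}} \approx \frac{2\,N\,(k c)\,u^{2}}{(k f)^{2}} = \frac{\mathrm{DoF}_{\text{small}}}{k}.
\]

With \(k \approx 1.6\), the same wide framing at the same stop gives roughly 1.6 times shallower depth of field, which is the background fall-off on close, wide lenses that Braham is describing.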

Adams’ influence

Braham shoots digital the same as he would with film – using Ansel Adams’ zone system and a light meter to determine the exposure, with no on-set monitoring.

“Photography boils down to a few essentials,” he says. “You need to know how to see things. You need to know how to develop a visual idea and then how to realise that. Photographically, the best way to realise the visual idea is to be very precise about exposure.

“Ansel Adams figured it out in 1938. It’s so simple. If you understand the exposure values of everything around you, you can make creative choices. Digital cameras have phenomenal exposure range – cinema projection is limited in comparison – so you have to think about how your film is projected and make informed choices about exposure.”
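For readers unfamiliar with the zone system, here is a simplified worked example of what reading exposure values means in practice; the meter figures below are illustrative assumptions, not settings from the production. The exposure value of a camera setting (conventionally quoted at ISO 100) is

\[
\mathrm{EV} = \log_{2}\!\frac{N^{2}}{t},
\]

so a reflected reading of f/5.6 at 1/48 s (a 180° shutter at 24 fps) gives \(\log_{2}(5.6^{2}\times 48) \approx 10.6\). The meter places whatever it reads on Zone V, middle grey, and each zone is one stop apart: to place a face on Zone VI, one stop brighter, you open up to f/4 at the same shutter, and every other tone in the frame shifts by that same stop, which is what lets you predict where highlights and shadows will fall before exposing.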

Guardians Vol. 2 was framed for a 2.35:1 release, with additional scenes also shot 1.90:1 for IMAX theatres, but Vol. 3 goes further in having versions delivered in various aspect ratios to maximise screen real estate. Some versions have 45 minutes of the two-and-a-half-hour film opened up to a flat 1.85 aspect ratio, while the rest of the movie is letterboxed to 2.39 to match the mood and impact of the scene.

“There was a time when if you were shooting IMAX you had to use a massive IMAX camera, but because we now have IMAX-certified cameras like the V-RAPTOR, which are the size of a Hasselblad, the options really open up for you. In the end we decided to go the 2.40 route to retain consistency with the previous two movies as part of a trilogy, but the truth is you need to be framing for both 2.40 and 1.85 all the way through.”
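As a rough illustration of what framing for both extractions implies, assuming the commonly quoted 8192 × 4320 photosite raster of the V-RAPTOR 8K VV sensor (the production’s actual delivery crops are not specified here):

\[
\text{2.39:1 at full width: } \frac{8192}{2.39} \approx 3428 \text{ rows (about 21\% of the height cropped)}
\]
\[
\text{1.85:1 at full height: } 4320 \times 1.85 = 7992 \text{ columns (about 2\% of the width cropped)}
\]

The operative constraint is therefore vertical: a composition has to survive losing roughly a fifth of its height in the scope version, while the expanded 1.85 presentation reveals nearly the whole sensor, which is why the framing has to work for both ratios throughout.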

Braham’s schedule is busy, and he is booked repeatedly by directors such as Gunn and Doug Liman, with whom he just shot Road House and The Instigators.

“There’s a misconception that cinematographers do the same thing on each picture,” he says, “when what I think is the most interesting process is working out what ideas are specific to each movie. Not only is every director and every movie different but each film with the same director is different. One of the hardest things to do is to divorce myself from the last show and apply the intellectual rigour to start again with a clean sheet of paper.”

That was the case when Braham found himself with less than a month’s turnaround between The Flash and jumping onto Guardians. “My process is to respond to the material and to drill into the director’s mind and join those two things together. Some directors are very specific and can articulate this and others are less so but have a very clear idea in their heads. The function of a cinematographer is to translate a director’s vision and personality onto screen.”

To ensure that their shared vision is cemented in the final picture, Braham advises having a clear and consistent system of communication that establishes the visual intent from the start.

“This is baked into dailies so that if someone is working on the image in New Zealand, London or LA, everybody knows what the parameters are and what the intent is. You can’t micromanage everything but you can set the ground truth. It’s why, when we came to the colour grading with Stefan Sonnenfeld for Guardians Vol. 3, the process was very smooth. Everybody has followed the same guidelines all along, so all the pieces of the jigsaw came together. That is part of the conception of any film and to be honest the same approach is very helpful to any size of movie.”