Monday, 10 June 2024

Robert Tercek and Peter Csathy: When It Comes to Media and AI, Copyright Law Is Not an Open and Shut Case

NAB

If there’s one thing that media and Big Tech can agree on when it comes to AI, it’s that existing copyright laws are outdated and in need of an upgrade.

article here

There’s a gray area being fought over in the courts by artists like Sarah Silverman versus companies like OpenAI over the definition of “fair use.”

Lawyers for Big Tech might be able to pull this line of attack apart by showing that their large language models are not copying copyrighted works at all.

Others, like media and legal expert Peter Csathy, argue a trickier-to-define case: that it is ethically right to reward artists, and that doing so is in the interests of the public good.

“It’s not about anti-tech, or pro-big media, it’s about taking the value that’s been created by new GenAI models and recognizing the contribution made by creators in the first place,” Csathy said during a debate with tech entrepreneur Robert Tercek on The Futurists podcast. “It’s about sharing in the pot of the opportunities in some kind of equitable fashion.”

A central issue is the unauthorized use of copyrighted content by AI companies to train their models. Major media entities like The New York Times have filed lawsuits against AI firms such as OpenAI and Microsoft, arguing that their copyrighted content has been used without permission, undermining the creators’ rights and economic interests.

Csathy argues that AI’s reliance on copyrighted content necessitates fair compensation for creators. He said the use of AI to generate content raises concerns about the loss of control and the potential devaluation of creative works.

His argument is that Big Tech, notably OpenAI, has gotten rich — billions of dollars worth of market capital rich — because its models have been trained on millions and millions of pieces of copyrighted works.

It’s only fair, then, for a licensing fee to be paid to creators by developers of LLMs for use of their copyrighted works.

“The reality is that no generative AI would have any value whatsoever unless it was ingested with copyrighted creative works. That’s what gave it value,” Csathy says.

He believes the copyright lawsuits brought by the likes of Sarah Silverman and The New York Times against OpenAI and others will result in some kind of opt-in or opt-out system “where a creator can decide whether they want their work to be available to be licensed.”

With 500,000 open-source models available today, such a forensic tracking and payment system would need policing at a global level, but Csathy doesn’t consider that problem insurmountable.

“The goal is to create a system that is more equitable, that can be so much better for the end user that they will gladly pay something to use it rather than something that’s just free.”

Even if a new compensation scheme were agreed between Big Tech and creators going forward, Csathy questions the “billions and possibly trillions of dollars of value” in generative AI that have already been accrued without compensating artists.

“It’s about money. It’s about control and it’s about respect for the artists. And part of being an artist is that you have control over your creation.”

Playing devil’s advocate, Tercek laid out some of the defense that Big Tech will use to justify training on copyrighted work as “fair use.”

It’s an interesting argument, and goes like this:

“The AI reads all the books, or looks at all the images, or listens to all the music, and then that model begins to build parameters. A big question about this is: is it fair use? Is it okay to look or read or listen? There is no law that prohibits reading a book. There’s no law that says, You can’t learn by looking at a picture.”

Tercek explains that the tech companies will insist they do not retain any copies of the books they are reading, the images they are looking at, or the music they are listening to. What remains are mathematical representations only.

“Once the LLM is trained, it is just a mathematical representation of what they call ‘parameters.’ These are different values that are set based on looking at a lot of Van Gogh pictures, for example. They start to arrive at a kind of extrapolation. There’s a pretty strong argument to say that all they’re doing is measuring facts. Facts like what kind of values and colors and hues Van Gogh uses.

It’s not replicating any of Van Gogh’s paintings, but we have millions of incredibly precise measurements about his paintings. Those are facts, and facts can’t be copyrighted. What we can recreate in an LLM is a factual representation of all those different values that go into creating that work.”

Further, he says the cases brought by The New York Times and other artists against Big Tech rest on antiquated law, because copyright needs updating to account for the age of AI.

Tercek argues that original copyrighted works like films or books or paintings are “fixed works delivered in fixed media.”

“What LLMs are doing is transforming those fixed works into something that is participatory, that billions of people can interact with to build new creative things,” he says.

“So should the artists who created the original work get paid a royalty off of those things? These are huge open questions right now. But this notion of a fixed work that is copyrighted for a century, because that’s what we have right now, is an absurdly distended copyright term today.

“Maybe that notion is going to change and maybe what media has to turn into is something that is more open, more participatory, something that enables audiences to actually play with and mess with and participate in meaningfully.”

On this point Csathy agrees. He thinks it’s a massive opportunity for artists — so long as the artists themselves want to be a part of it. “They should have control of whether they want to enable their fans to build on top of their work. If they do, there’s a whole new opportunity.”

If they want to take advantage of new AI tools that will make it possible for anyone to find novel ways to express themselves, with new kinds of mashups, then creators need to get to grips with the intricacies of the technology and of new monetization pathways.

“You really need to play with these AI tools, understand them, learn about them, or get people on your team who do understand them,” Csathy advises. “There’s not $100 out there in one bucket — there are little pieces here and there and everywhere. You’re delighting your fans more and maybe getting a chance to be closer to them.”

How ‘Slow TV’ watching moose migrate is accelerating live cloud workflow at SVT

SVG Europe

article here

Slow, meandering herds of moose trekking across wintry tundra don’t have much in common with the turbocharged noise and control of rally cars, but right now in Sweden they are virtually joined at the hip.

Swedish broadcaster SVT’s month-long round-the-clock live broadcast of The Great Moose Migration (‘Den stora älgvandringen’) is the latest test of an organisation-wide digital transformation project that will soon encompass the next leg of the WRC Rally.

“It’s a slightly insane project,” says Dennis Buhr, Head of Production Development at SVT, of the remote production initiated for this year’s Great Moose Migration. “It’s not that common that your show is 500 hours long.”

About the Moose Migration

Every spring, for thousands of years, Sweden's population of moose (or European Elk) in Västernorrland in the north of Sweden migrates across the same tracks in the forest to get to greener pastures. Since 2019, SVT has live streamed the ‘action’ over the course of several weeks.

This ‘great moose migration’ has proved hugely popular with the public, with viewers rising from a million in 2019 to about 9 million this year on SVT Play, alongside more than 300,000 chat interactions - an increase of 30% on 2023.

The show has also been broadcast live on Twitch reaching nearly 15 million views around the world. Men aged 16-25 seemed to enjoy the respite from hardcore gaming.

The livestream is now a national springtime phenomenon and a classic example of ‘Slow TV’ which this year is also being aired on Finland’s YLE and RTL in Germany.

“It’s always been a huge technical struggle to get this to work in a forest with a wild river and at a time of year that turns from winter into spring with huge ice melts,” says Buhr.

SVT lays thousands of metres of cable across the forest area to connect 30 camouflaged cameras and 28 mics. In the past these feeds were all sent to a remote production gallery built on site.

This year the Great Moose Migration has gone truly remote.

New cloud based workflow

“First and foremost we decided to keep all our production staff inside our TV building in Stockholm,” says Buhr. “Immediately that cuts down the costs of having staff on site for a month.”

In this new setup the cameras (a mix of Sony PTZ during the day and Axion surveillance cameras at night) and mics are converted from SDI to IP by VideoXLink and transported as H.264 streams over the internet to SVT’s northern hub at Umeå, a three-hour, 250km drive away.

Two cameras in this year’s production were positioned next to a wild bear spot. Unable to run either power or fibre to that location, the team used solar power and a Starlink satellite internet connection.

At Umeå the rest of the IP streams are captured into Agile Live, a software-based system jointly developed over a number of years by SVT and Agile Content. Image mixing and graphic overlay is performed by operators in Umeå with rendering done on-prem in Stockholm 650km away.

“The technical cloud workflow of this is not particularly difficult,” says Buhr. “The actual work is the shift in mindset we have gone through at SVT. We have had to rethink production.”

He elaborates, “You can’t stream high-res images everywhere as you are used to. You have to think who needs to see what, when, and in what quality. You need to understand and communicate the advantages of it rather than comparing this workflow to how it used to be done.”

This year’s production has brought together SVT broadcast engineering with its IT teams. Many, it seems, had never met before, yet here they are working to deliver the production together.

“All of a sudden you have meetings and you see people inside the organisation you do not regularly see. They might be responsible for computer services in whole other areas of the business. They are experts in networks, computer hosting and IT security. Everyone gathered to make a TV show. That was a new thing for me – and for everyone.

“I’ve been doing broadcast for twenty two years and this is the first time I’ve seen computer guys getting a credit on the show. It made me extremely happy.”

The culture clash between the IT and broadcast worlds that is often mentioned as a management nightmare didn’t materialise. Or, if it did, everyone treated each other professionally and understood the overall mission.

“We have had our discussions, shall we say, but it has not been a hassle - more of a learning curve,” says Buhr. “Early on we asked for a feature from the network department and gave them four days’ notice. They said, ‘You can’t have this short a time span.’ But when you work in live TV with the red light about to flash, four minutes is a long time, let alone four hours. That was one cultural difference, but people have genuinely been getting on very well and understand it is a joint venture.”

Having previously trialled the arrangement on a local sports production, the 2024 Great Moose Migration is by far SVT’s biggest proof of concept to date of its transformation project Next Generation Online (NEO).

NEO is an organisation-wide, glass-to-glass approach to bringing production and distribution under the same roof. It aims to be 100% software, based on COTS servers and standard IT using the internet.

“It’s a huge scope but we started early, in late 2018, and now we are live with a working product.”

The same set up will be used by SVT to produce the ERC Bauhaus Royal Rally, part of the WRC calendar, on June 13-15 at several different locations in Värmland, near Karlstad. 

“We are also adding in a new product which is our re-invention of the intercom 4-wire system as an entirely software-based system using web technology and standard IT hardware.”

The software-based 4-wire intercom is a collaboration with fellow Scandinavian broadcasters NRK, YLE and TV2.

“We still produce TV today the same way we did in the ‘60s but in a software environment if you want to be more efficient with money and cut the carbon footprint you must rethink who does what and why and when,” Buhr stresses.

“If you want to transform TV production from cables to software and you don’t think about how your staff will work in a facility, you will not only have the cost of investing in new technology but the same staff costs as before if you adapt it to an old-school way of making TV.”


Friday, 7 June 2024

Playing with light: 3 Body Problem

Definition 

A trio of DOPs discuss their roles in transporting viewers between real and virtual worlds in David Benioff, DB Weiss & Alexander Woo’s sci-fi hit 3 Body Problem

article here

Before Game of Thrones became a global phenomenon, there was a feeling fantasy was something only a niche, if loyal, audience would lap up.

Now the makers of HBO’s dragon and sorcery hit aim to do the same for sci-fi naysayers.

As their source material, showrunners David Benioff and DB Weiss – along with Alexander Woo – chose The Three-Body Problem by Chinese novelist Liu Cixin and turned it into an eight-hour series for Netflix.

“We’ve all seen a thousand alien invasion stories,” Benioff says, “but this one’s different because it focuses on the human response to finding out we’re not alone in the universe — and the others out there are not necessarily friendly.”

The ambitious story spans different decades and planes of existence, from China’s Cultural Revolution in the sixties to contemporary Britain, New York City and a vast, epic virtual reality world.

Naturally, the showrunners turned to the experience of a DOP who had shot 17 episodes of Game of Thrones to help establish the show’s visual language.

“There are many elements to the first season and to what they hope would become subsequent seasons,” according to Jonathan Freeman, ASC, a Canadian DOP who won an Emmy for Boardwalk Empire and has also worked with directors Russell Mulcahy, Richard Loncraine and Robert Lepage. 

His first decision was determining the aspect ratio.

Game of Thrones was presented in familiar 1.78:1, but for a story at least partly set in outer space, they discussed using several framing formats.

“I was keen on the notion of a wider aspect ratio because eventually this story will expand into space,” Freeman says. “A wider-screen format of 2.39:1 or 2.35:1 is used in many successful space odysseys to represent the vastness of space and the distance between elements, whether between two planets or an astronaut floating over a planet. A widescreen format also made sense for the VR game, where we need to convey the scale of individual figures in a landscape, Lawrence of Arabia style.”

A storyline in episode 5 about a giant freighter ship also suggested a wider-screen frame.

They decided to stick with 2.35:1, shooting on ARRI ALEXA LF, but to divide earthbound scenes of relative normality from the hyper-real sequences of the VR world by using different glass.

“Given that the first couple of episodes jump from scenes set during the Chinese Cultural Revolution to contemporary scenes in London, and to lean into the cinematic language of films set in space, my preference was for anamorphic. I felt we needed glass with a vintage look but also a way of making it feel modern at the same time.”

That balance was found in the ARRI ALFA, a lens set purposely detuned for Greig Fraser, ASC, who was at the time shooting them for The Batman.

“It had almost exactly what I was looking for,” Freeman says. “A vintage feel surrounding the edges of the lens, but sharp in the centre so we could cut between any period from late-sixties China to modern-day London smoothly and effortlessly.”

In contrast, the VR scenes appear extraordinarily sharp – a result of being filmed with ARRI’s spherical DNA range. “Technically, they are a bit purer, which was an advantage to the VFX team who needed to produce some very complex environments in the VR world.”

The VR world, which was engineered by extraterrestrials in the story, takes the player from the Shang dynasty in China to Tudor England to post-apocalyptic deserts. These were filmed against a large 180° wall consisting of ARRI SkyPanel LEDs filtered through and hidden behind a Rosco scrim.

“We spent a long time developing ideas for how to shoot these imaginary worlds,” explains Richard Donnelly, ISC, who also shot episodes 1 and 2, joining the project a little later after shooting The Nevers.

“Our board operators could control any kind of colour we wanted. It was fantastic as it enabled us to light the actors as we wanted to, for instance with the sun rising, instead of it being led by VFX. We augmented the set with many other lights, but that wall was us lighting the actors. It’s almost the reverse of volumetric capture in which you use plates filmed on location to light the live action. Here, it was the other way around.”

For example, a scene set in a VR desert in episode 2 introduces the AI child character Follower and the concept of dehydration. The characters hide under a rock to escape the rising sun. Donnelly lit the scene with the wall and a ceiling rig of Vortex lights. “All the long shadows are real. It’s almost like shooting back in the forties where you’re creating all these shadows in camera on-set and it’s not a heavy FX world.”

Swedish cinematographer Martin Ahlgren, ASC (The Plot Against America) lensed the three-episode block following Freeman and Donnelly.

This included the startling scene in episode 5 in which a container ship and everyone onboard is silently ripped to shreds by ‘nanofibers’.

“It needed a lot of figuring out from a storytelling perspective; how to build up the mystery of what was being done, then also revealing it happening as well as finding the right level of detail.”

A storyboard artist designed ‘some gruesome ways to be sliced’, but the showrunners and director Minkie Spiro dialled that down. “We’re setting it up for a shockingly violent way to die, but letting the imagination do the rest,” he says.

This scene, in keeping with the rest of the series, stretches the boundaries of our known physical world, rooting the fantastical elements in some level of scientific understanding.

Ahlgren plotted camera moves for the scene using LED pixel tape before moving to the backlot where production design had built a large, to-scale section of the tanker – complete with a helicopter pad.

“The idea is that the nanofiber technology is cutting at a molecular level, so unless gravity is doing something to the object or person, we don’t show its effect. We show cutting paper and when the ship hits the bank of the canal it topples like a stack of plates, but the technology itself is not revealed.

“We had to figure out at what speed the nanofiber would move, in relation to the ship’s movement and that of the camera. We decided that it moves slowly enough for someone to run away from it if they can, and that becomes a big part of the drama.”

The series was shot at Shepperton Studios over nine months, ending in August 2022.

Most was filmed in England using locations in and around London, as well as in Portsmouth, Kent, Oxford, Sussex and Bedfordshire.

Other locations included a mountain ridge near Cáceres in Spain, site of the Chinese radar station, and Cape Canaveral in Florida.

Director Derek Tsang (Better Days), a Chinese native, had drawn on his own experiences of hearing stories about people who lived through the Cultural Revolution in order to picture the series’ opening scenes.

“His own memory of that time period is from images that are shot on Ektachrome,” says Freeman. “We opted not to go full Ektachrome in our look since that would give us bright primary colours and everything else would be muted. We go in between to yield that period feeling, pulling back on the primaries, but without becoming a distraction for the rest of the story that follows.”

Europe drives new interest in anime, as Netflix, Crunchyroll boost awareness

Stream TV Insider

article here

Europe is set to drive the next content boom for Anime, the animated content genre predominantly produced in Japan.

Anime titles available outside Japan on SVOD services have doubled from 3,000 to 6,000 around the world since 2019, according to new figures from Ampere Analysis.

International streamers, and in particular Netflix, have made a strategic effort to grow the market outside of APAC and are now reaping the reward: Japanese Anime now ranks as the second most popular content on subscription streaming services globally, behind US content.

This is a ranking unique to Ampere and based on key metrics such as volume of interest, web traffic and box office income from major services.  

“While the number of core Anime fans is small, casual Anime viewers are common across the world,” says Orina Zhao, senior analyst at the research house. “In terms of growth, European markets have seen the fastest rise in those enjoying Anime over the last four years.

“Besides traditional linear licensing, global streamers and Hollywood studios have all tried to seize this opportunity by ramping up their Anime catalogues. This is particularly because Japanese Anime has a long-lasting lifecycle of popularity.”

She also highlights how cost effective licensing or producing Anime can be in terms of the content’s ability to attract and retain subscribers within a platform.

Long-running TV series and Ghibli movies account for the majority of the most-popular Anime titles.

For example, Wit Studio’s Attack on Titan was the most popular title on SVODs in 2023 despite being first released a decade ago. Studio Ghibli movies Spirited Away (2001), Howl’s Moving Castle (2004) and Princess Mononoke (1997) are also in the top ten.

Globally, Anime fans are typically young, skewing 18-35 years old with relatively lower incomes. They are heavy SVOD users, and spend more time on smartphones and smart TVs.

Ampere says they are likely to be young adults or new parents and the gender split is about 50-50 outside of Japan. That differs from demos in Japan where the genre attracts a mostly male audience (the split is 58% male / 42% female).

According to Ampere’s latest survey of 30 countries, Asia Pacific markets such as the Philippines, Indonesia and South Korea still show the highest interest in Anime. However, seven out of the top 10 markets with the largest growth of interest in watching Anime are in Europe. They include Germany, Finland, Italy, the UK, France, Poland, and Spain which have seen a 3% to 9% increase in Anime enjoyment in the past five years.

The number of Anime titles has been increasing too in these seven European markets, from 1,945 titles in 2019 to 2,755 titles in 2023, a 42% increase in the past five years. This has been driven primarily by Netflix, Amazon Prime Video, and dedicated anime platform Crunchyroll, which launched into Europe when parent company WarnerMedia acquired Viz Media in 2019.

Sony has owned the platform since 2021 and earlier this year merged it with Funimation (closing the Funimation brand) to offer a combined 1653 titles to 15.6 million subscribers (who pay from $7.99 a month in the US, with Mega Fan and Ultimate Fan tiers increasing by $2 a month).

“Since April’s merger with Funimation, Crunchyroll has become the single most powerful Anime-focused platform in the West in terms of both its Anime catalogue size and subscriber base,” says Zhao.

Between 294 and 481 new titles (titles produced in the past three years) were made available in the seven European markets in 2023, but this is around one-third the rate of new titles released in Taiwan, leaving ample capacity for importing new content into Europe, Ampere says.

“Crunchyroll’s merger and Netflix ramping up its distribution will undoubtedly increase the visibility and popularity of Anime globally,” says Zhao. “We find European audiences underserved at present by a good supply of Anime and think there is a substantive opportunity for growth.”

She advises, “European local and regional services should leverage the building appetite for Anime and the wide availability of content yet to be exploited in the region to gain a competitive edge and achieve long-term growth.”

Netflix strategic focus

Japanese content overall has become the second largest content type on Netflix and the streamer is now the most important platform globally for licensing and producing Anime. It has ramped up its licensed original titles from 602 in 2019 to nearly 900 in 2023, including an increase in exclusive licensed titles from 45 to 86 and originals from 21 to 76.

It has broadened its Anime genres too. While Sci-Fi & Fantasy and Action & Adventure still make up 70% of the titles, it has expanded to include comedy, children’s, horror, romance and drama.

“Crucially, Netflix has signed production line deals with a number of Japanese studios,” says Zhao.

This began in 2018 when Netflix first signed production deals with Production I.G. (including Wit Studio) and Bones. A year later it made co-production deals with three more leading Japanese studios, Anima, Sublimation and David Production. The three have so far co-produced titles with Netflix such as Altered Carbon, Dragon’s Dogma, and Spriggan.

Netflix further expanded its partnerships with Anime studios by signing co-prod pacts with Naz, Science Saru and Mappa. It also signed a similar deal with Studio Mir in South Korea, which produced The Witcher: Nightmare of the Wolf for Netflix.

Two years ago, Netflix signed a film co-production deal with Studio Colorido to expand from TV series to movies. Some co-produced films have also premiered in theatres and on Netflix on the same day.

Moreover, Netflix has quadrupled the amount of adult animated content it has produced from outside of Japan from 11 in 2019 to 44.

Ampere also believes there is further scope for Netflix to produce live action adaptations of popular anime titles, as it did with One Piece which launched last year.

The exploding interest in Anime will be explored at the Annecy animation conference and festival in France next week. Crunchyroll’s SVP of Global Commerce Mitchel Berger will discuss anime’s impact on pop culture and Japanese studio Kasagi Labo will announce a financing platform for original anime.

Netflix is in force at Annecy, where it premieres Tokyo-set animated superhero feature Ultraman: Rising, produced by Netflix, Tsuburaya Productions, and ILM. Rising Impact, the first anime adaptation of Nakaba Suzuki’s manga of the same name, is a Netflix exclusive that premieres on June 22.

 

Monday, 3 June 2024

Broadcast Workflows, But in the Real World: Cellular Transport, Cloud Vs. On-Prem, Actual AI Contribution

NAB

Cloud-based workflows are becoming ubiquitous but on-premise solutions continue to play a central role, according to a new industry trends report from Haivision.

article here

The company’s “Broadcast Transformation Report” also found increased adoption of cellular networks, with 60% of broadcasters using these networks for transport and 80% using the internet, showcasing the industry’s reliance on diverse network technologies.

The findings also underscored the increasing significance of 5G, with 74% of broadcasters either currently using or planning to use 5G for broadcast contribution in the next two years.

When it comes to cloud, the report found 84% of broadcasters use at least some cloud-based technology, but only 22% use it for more than half of their current workflow elements.

While more than half (58%) of broadcast professionals have implemented IP and cloud-based broadcast infrastructure, a substantial amount (86%) of those surveyed continue to use SDI.

Respondents primarily use cloud-based solutions for encoding/transcoding (44%), stream routing (43%), and remote collaboration (39%).

Fifty-nine percent of broadcasters rely on cloud for less than 25% of their workflows, indicating that on-premise technology continues to remain critical to broadcast workflows. Haivision said this finding suggests that broadcasters expect to leverage both technologies for the foreseeable future.

As organizations consider workflows that include cloud and IP technology, those surveyed indicated network reliability (46%), budget limitations (46%), and bandwidth availability (40%) as the primary challenges in their shift to IP or the cloud.

Besides cloud-based workflows, respondents indicated the use of other IP technologies, mainly NDI (40%) and SMPTE ST 2110 (36%).

Cellular Transport is Now Mainstream

Sixty percent of respondents said their organization currently uses 3G, 4G, LTE, or 5G for live video contribution, making cellular the most popular network for transport after the internet (80%).

Nearly three-quarters of broadcasters already use or plan to use 5G for broadcast contribution. Forty-six percent anticipate using 5G with private networks. Additionally, 15% are using 5G only, which is an increase of 5% over last year’s survey. Satellite usage remained steady from last year’s report, and fiber increased by 4%.

More specifically, 29% of respondents are already using 5G for broadcast contribution, a 9% increase from last year’s survey. Greater bandwidth (51%) and lower latency (48%) are touted as the top benefits 5G can offer within live production workflows.


AI Contribution

Unsurprisingly, most broadcasters predict AI will have the most impact on the industry of any technology over the next five years, with 49% planning to or already using AI in their workflows. 5G remained a strong runner-up at 57%; however, the number of respondents citing HDR dropped by 5% and ATSC 3.0 dropped by 10%.

The most commonly cited benefits of AI on live production workflows were efficiency gains through automation and automated translation and closed captioning.

The Visual Elements for “Sugar” Make a Different Kind of Hollywood Mystery

NAB

Apple TV+’s detective drama Sugar starring Colin Farrell has an unconventional visual style that includes footage shot on iPhones, multiple camera setups and classic film noir clips inserted into scenes.

article here

Scripted by showrunner Mark Protosevich with Farrell as executive producer, Sugar is directed by Brazilian filmmaker Fernando Meirelles, co-director of City of God and director of The Constant Gardener. The series is shot by cinematographers César Charlone and Richard Rutkowski, ASC, and edited by Fernando Stutz.

“We really work as one,” Meirelles told IndieWire’s Chris O’Falt. “If I can’t take César and Fernando, I won’t be able to [do the project], that was my only condition.”

As O’Falt explains, Meirelles’ approach to directing is to play a scene from beginning to end rather than dividing it into camera setups, while Charlone documents it with two to three and sometimes even up to seven cameras.

“The first thing we learn in film school is never jump the eyeline,” said Meirelles. “I tried on purpose to jump the eyeline just to test [it out], and it really worked. It gives a dynamic to the scene.”

“We have developed what I would call a documentary style,” Charlone said. “We leave the actors very free to move around, my crew does not put [down] any marks, the actors just move around and we follow them with the camera.”

After each take, Charlone changes the camera’s angle and movement. Meirelles said that after two hours of shooting he will often have 15 to 16 different angles, each a master of the whole scene. This approach often leaves them an hour or two ahead of schedule, which only encourages more experimentation.

This gives Stutz a lot of material to compose with. “I start by looking through César’s lenses because it’s the way he moves through the scene,” the editor told O’Falt.

“When he operates the camera I’m always interested where he’s going, what he’s looking at, and then from there, start to build the sequence.”

One of the most striking aspects of Charlone’s work on Sugar is his use of iPhones not just to capture reference stills but to shoot actual footage for the show.

“I’ve been using iPhones for some time now,” Charlone explained in an episode of the Kingdom of Dreams podcast. “They are incredibly versatile and practical, especially in confined spaces. For Sugar, we did extensive testing to ensure the footage could blend with that from the Sony VENICE cameras.”

The iPhones proved especially useful for filming car scenes, where their compact size and flexibility allowed for unique angles and dynamic shots.

“For all the car scenes, with Colin driving, the iPhone is very practical because I can put it behind the steering wheel, move it around easily, and I use VR goggles to see the image I’m capturing.”

This approach not only saved time but also kept the energy on set, allowing actors to perform naturally without the interruptions typical of traditional setups.

“I try my best not to interfere with the relationship of the actors on set,” Charlone says. “We avoid marks and traditional setups, letting them move freely and following them. This way, they don’t have to follow the camera, and it helps them perform better.”

Cinematographer Richard Rutkowski, ASC, picked up the reins from Charlone for his block, working with director Adam Arkin on Episodes 3, 4 and 7 and likewise shooting with multiple cameras.

“We’d plant cameras strategically around the set, sometimes hiding them to capture different perspectives,” he explained to Tara Jenkins at American Cinematographer. “This method was particularly effective for scenes involving security camera footage or social media posts.”

He also aimed to evoke classic film noir while incorporating the vibrant, sun-drenched atmosphere of Los Angeles.

“We talked about classic noirs, private detective stories, and films like The Long Goodbye. I also brought up The Constant Gardener because of its selective saturation and beautiful use of color,” Rutkowski recalls.

“We knew we’d be traveling with Sugar in a car through broad daylight in LA, and we wanted that LA blue sky without it looking too candy-colored.

“Digital can be unforgiving with overexposure, so planning the route and managing the iris was crucial. We used a low loader for some shots and relied on natural lighting for others, capturing the authentic look of LA.”

Rutkowski also talked about the “meta” nature of the show, which inserts portions of scenes from classic neo-noir films (The Long Goodbye, The Grifters, L.A. Confidential) into the on-screen story.

“It’s sort of a semiologist’s dream that you have a lead character whose own identity is entwined with his search for others,” he told David Philips at Awards Daily.

He said he didn’t know when he signed up that this was going to happen. “It wasn’t known that there were going to be such explicit cuts to classic noirs. We knew we were sourcing visually and tone-wise from those films.

“I’m not sure when it became apparent that they were going to actually cut in scenes from Mike Hammer, although it made sense because we were treating LA very much as a character in the way that it becomes a character in those films.

“And all I can say is, while that would have given me hesitation if they told me we’re literally going to cut in these classic films to your work, it wasn’t there for me to worry so much at the time.”

The show is one of several recent series that have used black-and-white photography to tell their story. Others include the Italian-set noir murder mystery Ripley and the third episode of FX’s Feud: Capote vs. the Swans.

Sugar begins with a black-and-white opening sequence before color seeps in. The B&W footage was shot in color and converted in post.

“Fernando wanted to start that opening in Japan in black and white as kind of a throwback to [Akira] Kurosawa films, and for it to feel a little otherworldly as well,” executive producer Audrey Chon told Hunter Ingram at Variety. “It just added a whole other dimension to the show and to the character of Sugar.”

As Ingram observes, taking color out of the picture literally opens a whole new world of awareness for a viewer. In these extreme darks and blinding whites, more can also be concealed.

“With black and white,” adds Feud cinematographer Jason McCormick, “you can get away with murder in ways you couldn’t when you are shooting in color.”
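The black-and-white material on Sugar was captured in color and converted in post. The article doesn’t say how the conversion was done; a standard approach is a luma-weighted mix of the RGB channels using Rec. 709 coefficients rather than a flat channel average. A minimal NumPy sketch:

```python
import numpy as np

# Rec. 709 luma weights: green dominates because the eye is most
# sensitive to green; a plain channel average tends to look flat.
REC709_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def to_black_and_white(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 float image (0..1) to a single-channel luma image."""
    return rgb @ REC709_WEIGHTS

# Example: a pure-green pixel stays bright, a pure-blue pixel goes much darker
frame = np.array([[[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
bw = to_black_and_white(frame)
```

This is only the base conversion; a colorist would then shape contrast and density on top of it to reach a look like the Kurosawa-inflected opening.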

 


Friday, 31 May 2024

Color grading trends: John Daro looks at AI, the cloud & remote workflows

interview and copy written for Sohonet 

article here

and in Post Magazine

John Daro, lead digital intermediate colorist with Warner Post Production Creative Services, has helped create and polish the look of live-action and animated features ranging from Space Jam: A New Legacy, Behind the Candelabra and Contagion to The Boxtrolls and The Sea Beast. He began his career at FotoKem in 2001 and worked his way up to senior colorist in 2005. In addition to a keen eye for color, John is a skilled technician who has invented technology to create the perfect look for each project, including a technique that uses machine vision to auto-segment images into mattes, and MatchGrader, an AI tool that artistically color grades based on a given reference image. He is also behind SamurAI, which adds the right amount of detail back to an image based on the quality of the input.
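To illustrate the general idea behind auto-segmenting an image into mattes, here is a deliberately simple, hypothetical luma keyer in NumPy. Daro’s actual tool uses machine vision models, not a fixed threshold; this sketch only shows what a soft 0-to-1 matte is.

```python
import numpy as np

def luma_key_matte(rgb: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Soft matte from luminance: 0 below lo, 1 above hi, a linear ramp between.

    A toy stand-in for ML-based segmentation -- real auto-segmentation
    would use a trained model, not a brightness threshold.
    """
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    return np.clip((luma - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

# A dark pixel falls outside the matte; a bright pixel is fully included
pixels = np.array([[[0.05, 0.05, 0.05], [0.9, 0.9, 0.9]]])
matte = luma_key_matte(pixels, lo=0.2, hi=0.7)
```

A matte like this lets a grade be applied to only part of the frame, which is why automating matte generation removes so much manual rotoscoping work.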


He recently shared insight into the trends affecting the color grading business, including the cloud and AI.

How close are we to achieving MovieLabs’ goal of moving post production, including the grade, into the cloud by 2030?

“It’s the direction the industry is going, and I definitely think it will be achieved. There’s nothing really stopping my Baselight system from being a cloud instance, sending video compressed as a JPEG XS stream to a client’s properly calibrated monitors for remote approvals. It’s a pretty slick workflow, and it gets us away from needing the big iron to live on-prem.

“This transformation will happen organically as cloud economics works itself out. Right now, it's cost prohibitive to be working in the cloud with that much data. But the more people do it, the volume will go up and the price will come down. Then, all of a sudden, it will make sense for productions. The gains we will get from working in the cloud will eventually outweigh that cost.”

What are the principal gains of working entirely in the cloud?

“It’s really all about geography. Often, when I'm working on a project, the DP is already shooting another show. So having the ability to be anywhere in the world and be able to collaborate on the same project will free everyone’s time. And that includes my time too. I can be working or collaborating with colleagues at our Burbank location or in New York or Leavesden Studios in the UK. To be in any one of those places and handle media as if I was here in my bay is a natural progression. It also opens up the talent pool to the entire world. Clients will be able to get the best artists regardless of their location and that’s an exciting prospect.

“Finishing, however, is a different story and you're always going to want to be in the proper environment. For example, if you're working on a Dolby HDR version, there's no gain by doing a cloud Dolby version because you need to be in a Dolby-certified theater. But when you're talking about dailies and being able to make sure that color is maintained from camera all the way through to finish, then the cloud conversation starts to make a lot more sense.”


You are talented both creatively and technically. You program code as well as edit with color. Does that combination make for the most successful filmmaking? 

“I think there's really no difference between artistry and technology when crafting beautiful images. They have gone hand in hand since the inception of film. In the early 1900s, it was all kind of a science project. Chemistry was involved in film processing. So, from the beginning, it's always been a collaboration between the science and the art of trying to bend light.

“There's no higher technical position than the director of photography, but at the end of the day, the output of their work is to tell a story — visually and artistically. We’re hopefully creating a picture that makes you feel something. 

“I see technology and coding as tools in service of making better pictures. My whole goal, my mission statement if you like, would be, ‘Let me show you something you've never seen before.’ That's what gets me out of bed every day. 

“On that note, AI could probably show us things we’ve never seen before. But what’s the endgame with AI? Where does its functionality stop and human creativity take over? With each new technology there are concerns that existing processes will be replaced, but it never quite happens that way. Technology is always a tool to be more efficient, more productive, to create projects at greater scale.

“I like to think of it like this: a high-end animation shot 20 years ago took two weeks to render. In 2024, it still takes two weeks to render. It just looks a lot more polished and a lot more photoreal because of the amount of data that it is being created with.”

Can AI replace the colorist? 

“Absolutely not. Clearly AI tools will evolve, but they will assist our job by removing the minutiae of the process and freeing up time for more creative work. 

“I break down color into two different areas: color correction and color grading. Color correction, for example, is matching the light for continuity of scenes that could have been shot over a whole day on location, but are supposed to take place in five minutes in the story.  This type of work is necessary, but not stimulating. Color grading on the other hand is always in service of a story. It’s very similar to the editorial process, where we cut things that don't serve the story and enhance the things that are promoting the story.

“AI tools can save us time with color correction, and that time can be passed along to the client. The introduction of AI should mean we’re not watching paint dry in the theater anymore, and we can get to the more collaborative enhancements more quickly. It will give us more options and, importantly, more time to craft a strong, powerful story. Speeding that process up and making it more interactive will fuel creativity. In addition, it will make the process so much more pleasurable, not just for me but for the director and DP, to hit their vision faster.”
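The shot-to-shot matching Daro files under “color correction” can be approximated with a classic per-channel statistics transfer, shifting each channel’s mean and spread toward an anchor shot. This is a toy illustration of the concept only, not Daro’s workflow or any real grading product:

```python
import numpy as np

def match_correction(shot: np.ndarray, anchor: np.ndarray) -> np.ndarray:
    """Shift each channel of `shot` so its mean and std match `anchor`.

    Both inputs are H x W x 3 float arrays in 0..1. This Reinhard-style
    statistics transfer is a crude model of continuity matching.
    """
    out = np.empty_like(shot)
    for c in range(3):
        s, a = shot[..., c], anchor[..., c]
        gain = a.std() / max(s.std(), 1e-6)  # guard against flat channels
        out[..., c] = (s - s.mean()) * gain + a.mean()
    return np.clip(out, 0.0, 1.0)

# Example: pull a darker shot toward the look of a brighter anchor shot
rng = np.random.default_rng(0)
shot = rng.uniform(0.2, 0.5, (8, 8, 3))
anchor = rng.uniform(0.4, 0.7, (8, 8, 3))
graded = match_correction(shot, anchor)
```

Automating a first pass like this across every shot in a scene is the kind of “watching paint dry” work that frees the colorist for the creative grade.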

To what extent do you and your clients like to run sessions remotely?

“Four years ago during COVID, the industry was thrust into the best experiment ever — being forced to work remotely. A lot of tools were created to bridge the gap out of necessity. I primarily used ClearView Flex. It was really the only way that folks could have some interaction with the work at that time.  

“Flash forward, and a lot of remote work has stayed with us because people are now comfortable with the tools. For many project notes, it is not critical to be in a correctly tuned display environment. We all know that once you release a project to the world, filmmakers have little control over how it will be viewed. For quick approvals, ClearView is great because all we need on the client side is a calibrated iPad Pro. But for the final finish, the theatrical, Dolby and HDR versions, you ultimately have to come back into a properly calibrated environment, for no other reason than to ensure you are hitting your target and are all in agreement.”

What recent film or show inspires you from a color perspective?

“I think infrared (IR) is having a moment. Ad Astra was one of the first in recent times where Hoyte Van Hoytema, ASC, used a stereo beam-splitter rig to produce an IR version and an RGB version of the same shot. The whole moon sequence in that film blew me away. The Zone of Interest [DP Łukasz Żal, PSC] used IR in a really interesting way to present another aspect of the story. It’s very striking and super effective while being sensitive to the story. Dune: Part Two [Greig Fraser, ASC, ACS] features a stunning IR sequence that captures the essence of the colorless planet and the stark fascistic rule of the Harkonnens. They hit on a visually immediate way to show that world without having to go into great detail describing it. You got that vibe really, really fast. It’s very cool.”