Wednesday, 6 March 2024

How Sun is the Color-Killer in “Dune: Part Two”

NAB

article here

In a film awash with frames of retina-burning golden intensity, the striking monochromatic gladiator-fight scene that introduces the psychopathic Feyd-Rautha (Austin Butler) stands out.

Dune: Part Two director Denis Villeneuve wanted the aesthetic of the evil Harkonnens to signify the polar opposite of the sunlit faith of the desert-dwelling Fremen.

Dune author Frank Herbert had never established much information about the Harkonnen homeworld, called Giedi Prime, other than that it had been industrialized into an almost complete wasteland.

“I love how Frank Herbert shows how the psyche of the tribes of the people are influenced by the landscape,” Villeneuve told Susana Polo at Polygon. “If you want to learn about the Fremen, you just have to learn more about the desert and it will give you insight about their way of thinking, their way of seeing their world, about their culture, about their beliefs, about their religion.”

But with far fewer Harkonnen details to work with, Villeneuve was forced to improvise, and like any filmmaker, he settled on using light to tell the story — specifically the light from Giedi Prime’s sun.

“I wanted to find something that had the same evocative power and the same cinematic power for the Harkonnens,” he said. “I wanted to be generous with their world and make sure that it will be singular, and it will inform us about where their political system is coming from; where their sensitivity, their aesthetic, their relationship with nature is coming from.”

In an interview with Hoai-Tran Bui for Inverse, he added, “The idea that the sunlight, instead of revealing colors, will kill colors; that their own world will be seen in a daylight as a bleak black-and-white world, will tell us a lot about their psychology.”

He took the idea to Australian cinematographer Greig Fraser, who won an Oscar for his work on Part One, and Fraser suggested filming the scenes using infrared photography.

The DP had used the technique on 2012’s Zero Dark Thirty and 2016’s Rogue One: A Star Wars Story. “It’s the same light the security camera uses, and you don’t see it. So, my fascination with infrared started because our eyes can’t see it, but the camera can,” Fraser told Jazz Tangcay at Variety.

Fraser shot the Giedi Prime scenes on an Alexa LF, modified so it could only see infrared and not any visible light. Since the sun emits infrared (the same light that sustains life on this planet), it felt like a fitting creative solution for depicting the life-sucking, environment-destroying Harkonnens.

The result is an eerie, translucent quality to human skin, aided by the fact that the planet’s population is bald. But by creating the in-universe rule that the sun washes out the colors, the filmmakers created other challenges. One was: what happens when characters step from the shadows into the sun?

“We needed to come up with rules for what the sun does,” Fraser told Inverse. “Our rules were effectively everything that the sun hits is washed out. So it’s direct sun and it’s bounced sun.”

When inside or in the shade, characters are lit by artificial light, Fraser explained. To achieve transitions, such as when Léa Seydoux’s Lady Margot Fenring emerges from the shade into the sun during the gladiator fight, Fraser had to shoot on a 3D stereo rig. One camera filmed as normal to a full-color sensor; the other, aligned on the rig, shot infrared imagery.

“We made sure we had lights that put out infrared for the infrared camera, and we had lights that the infrared camera couldn’t see, which were LEDs that put out visible light but don’t have infrared light. We had to have two different types of light sources on set that each camera could see separately and see differently.”

Another challenge emerged when they started to shoot: some of the costume fabrics that were black in daylight appeared white under infrared.

Fraser says he didn’t know why certain fabrics worked, telling Variety, “I’m sure there’s a rhyme and reason, from a material standpoint. I just know we had to do a lot of camera testing to make sure everyone was dressed in black.”

The scene, which is a birthday celebration-cum-Nazi rally, also features strange ink-blot fireworks. Villeneuve told Fraser, “They’re like anti-fireworks. They suck the light out as opposed to putting the light in.” Speaking to Inverse, Fraser added, “We worked pretty hard at trying to achieve that goal, this kind of anti-explosion type of light.”

Fraser elaborated on the decision to shoot infrared in an interview for the ARRI website. “We’d been on this planet for night interiors in part one, but we’d never been outside, so we were discussing what it would look like. I did a test for Denis where the inhabitants have very pale white skin, based on the notion that there’s no visible light from the sun on Giedi Prime, only infrared light. When the characters go from inside to outside, they effectively go from normal light to infrared light,” he detailed.

“On Rogue One, ARRI Rental modified some ALEXA 65s to do exactly the same thing, and we used them as VFX cameras, lighting parts of the set with IR light that didn’t affect the main image,” he added. “We just took that a step further and used them as our main cameras for Giedi Prime. They literally only record the infrared that bounces off skin or clothes, so colors are rendered as different tones and something that looks black to the eye might look white to the camera. It meant that we had to have exterior and interior versions of the same costume for some characters.”

It’s worth noting that infrared shooting techniques are in vogue just now. Hoyte van Hoytema used the 3D rig technique to capture eerie sequences for Jordan Peele’s Nope, and this inspired Florian Hoffmeister to go further and shoot extensive night exteriors for the unsettling Alaska-set murder mystery True Detective: Night Country (also on paired Alexas, with one camera modified without a color filter and with infrared lights).

Most notably, Łukasz Żal shot infrared sequences for The Zone of Interest, although there the rest of the picture is so bleak that these scenes represent hope amid the darkness.

“Feud: Capote vs. The Swans” Goes “Behind the Scenes” of the Black and White Ball With an Imagined Documentary

NAB

The third episode of Season 2 of FX anthology series Feud: Capote vs. The Swans travels back to 1966 for Truman Capote’s “best party ever.”

article here

“Masquerade 1966” relives the legendary Black and White Ball hosted by the infamous writer at New York City’s Plaza Hotel — a lavish event boasting a guest list that included everyone from Frank Sinatra and Andy Warhol to Lauren Bacall, Ben Bradlee, the Kennedys, the Agnellis, the Vanderbilts, and the Astors. “As spectacular a group as has ever been assembled for a private party in New York,” according to The New York Times.

Director Gus Van Sant and showrunner Ryan Murphy present the hour-long episode as a black-and-white documentary of the party and Capote’s (Tom Hollander) weeks of preparations for his big night.

At its heart, it’s a flashback episode, with the Swans — Babe Paley (Naomi Watts), Slim Keith (Diane Lane), and Lee Radziwill (Calista Flockhart) — seen in various states of anxious planning. Creating even more drama, two of the high-society Swans are each under the impression that they will be the event’s “guest of honor.”

The documentarians catching all this, though rarely glimpsed, are depicted as real-life filmmakers Albert and David Maysles. But no such Maysles documentary was ever shot, let alone released. “It was an invention of Ryan [Murphy’s] to pretend like Truman hired somebody to shoot the ball, and then decided not to go through with it at the end,” Van Sant tells The Hollywood Reporter. “So that was our concept, and our footage that we shot was supposedly their unused footage.”

As THR’s Mikey O’Connell points out, there is a seed of truth here. The Maysles did spend time with Capote in 1966, filming the documentary short With Love From Truman. It just had nothing to do with the ball.

“That was an invention,” Van Sant confirms to Joy Press at Vanity Fair. He did watch footage from the short film the Maysles shot of Capote when he was younger, but creating a faux-documentary gave Van Sant the freedom to run around with a handheld camera, and allowed viewers to see the Swans’ many layers of masks.

But though this peek behind the scenes is imagined, “it feels oddly real—like watching never-before-seen footage unearthed from an archive,” according to Coleman Spilde of The Daily Beast. “The episode is a fine example of how to meld past and present, fiction and reality, for something unique.”

Van Sant explains the aesthetic he deployed, saying that in the ‘60s, cinematographers were freeing themselves of the tripod. “It’s been emulated to the point that now it’s our standard movie style, which is handheld. And handheld today means, like, jerk it around on your shoulder and move it. The people in the ‘60s were trying to hold it really still. They were also trying to get the action, so that was one little aspect of emulating their style. They weren’t trying to make it bumpy, they just…didn’t have a tripod!”

Matt Zoller Seitz at Vulture calls it “the stylistic peak of the series” and talks to Van Sant about creating it with DP Jason McCormick.

“I’ve watched the work of a lot of documentarians, particularly ones who were part of the same movement as Albert and David Maysles,” Van Sant relays. “There was also D.A. Pennebaker, and Frederick Wiseman and Richard Leacock. The films they made were always fascinating to me. They were informing the French New Wave, partly, and by the 1980s, their work influenced MTV videos, as well as films like Oliver Stone’s JFK, which utilized MTV-style camerawork that was emulating the work of documentary filmmakers from that period.”

The director adds that if you construct reality properly, it really doesn’t matter where you put the camera. “If it’s a reality that makes sense, you could shoot it from the corner of the room with your phone. That’s what those documentarians were doing: They went to a location and put themselves someplace, and it was usually the wrong place in relation to where the action was going to be, so they’d have to zoom in to get to the shot they needed. Or they’d try to run over there. A lot of times they got a bad shot. But it was the action you were looking at anyway. You can kind of force yourself into their situation.”

Van Sant does, in fact, sneak in a few shots from the actual event: newscast footage of the arrival of some of the guests. And there was no shortage of film of Truman Capote to help recreate his character.

The director is no stranger to experimenting with form, often in stories that meld reality and drama, whether giving William S. Burroughs a supporting role in Drugstore Cowboy, interpreting the life of Harvey Milk (Milk), or shooting Elephant and Last Days, which are reactions to Columbine and the death of Kurt Cobain but not conventional docudramas. His most formal exercise was remaking Hitchcock’s Psycho shot for shot.

“I always try to make a story conform to the reality as I know it,” he told The Daily Beast. “When I first started out with Drugstore Cowboy, I was putting so much emphasis on blocking. I ended up doing it in the way I understood Stanley Kubrick did his filmmaking: He would work on a scene first and would figure out the shots afterwards. After that, I started working in that manner,” he said. “As I got more familiar with my cinema, the blocking started to become more and more complicated, because I realized that anything that happens in reality defies the logic of how you would block it in visual fiction. Even with something that happens in a simple, given space, like a convenience store, the way people move and what they do is very surprising. If you were to shoot a basic interaction between two people in a convenience store with your phone and then watch it a couple of times, you’d realize the blocking of reality is quite unexpected. People might enter and exit before they even do anything! Odd things happen all the time. If you can capture those moments and use them in your fiction, you can represent reality in an almost spooky way.”

He adds in the same interview, “Emulating different forms to show different things has always been something to work on, like having a recipe to make. We were doing the same kind of thing on the third episode of Feud but with the films of the Maysles and D.A. Pennebaker. We were trying to approximate a documentary of the Black and White Ball so we could see what it would have been like to capture the black-and-white ball, as opposed to explaining it cinematically. It was an experiment. We were emulating films that existed. Their chaos was inspirational.”


Cricket remains the prize in India streaming battle

StreamTV Insider

The Indian Premier League (IPL) is only played for two months a year (generally April through May) but ranks among the most valuable sports leagues in the world, rivalling the NFL and soccer’s English Premier League in terms of rights value per match.

article here

So valuable is the franchise that it is tempting to view the proposed $8.5 billion merger of Disney’s Star India with Reliance Industries’ media wing Viacom18 solely in that light.

Viacom18 locked in streaming rights for the IPL through 2027, paying $3.05 billion, while Disney, through Star India, owns the tournament’s TV contract, worth $3.02 billion.

This values each of the 74 matches per season at about $7.36 million per game broadcast on TV and $6.4 million per match for streaming.

Moreover, Disney brings with it streaming rights for International Cricket Council events in the domestic market.

It is hard to overstate the value of cricket in India (and in neighboring Pakistan, Bangladesh and Sri Lanka) where the sport is often described as akin to religion.

Media agency GroupM estimates sports industry spending in India totaled $1.7 billion in 2022, up 49% from the previous year. Cricket accounted for 85% of the spending on sponsorship, endorsement and media.

It’s why Ken Leon, research director at CFRA Research, told CNBC, "Cricket is everything in India ... I think [Disney CEO] Bob Iger made the right decisions here.”

But the move is already being scrutinized for its potential to swamp competition. Although the entity will also have digital and broadcast rights to other key sports properties like the EPL, Wimbledon and the FIFA World Cup 2026 in the USA, Canada and Mexico, it is the cricket rights which have lawyers exercised.

K.K Sharma, a senior partner at Indian law firm Singhania & Co, told Reuters that the merger would leave “hardly anything of cricket left. The regulator [Competition Commission of India] gets concerned even when there is a possibility of dominance. Here, it is not merely dominance but almost an absolute control over cricket.”

A multi-year stranglehold on the most popular sports property will be of concern to advertisers worried that the lack of competition will push prices up.

"The regulator's concern as far as cricket is concerned will be on the advantage the Disney-Reliance entity will have on raising prices for advertisers," noted Karan Chandhiok, head of competition law at India's Chandhiok & Mahajan to Reuters.

India is the most populated country on the planet by some accounts, with 1.4 billion people. The merger is expected to immediately tap more than 750 million of them.

The Indian streaming market is growing rapidly and is why streamers such as Netflix and Disney have targeted it. At issue is that revenue per subscriber there is substantially less than in the U.S. or Western Europe, reflective of lower average incomes. A basic Netflix plan in India for instance costs about $1.80 per month.

Having expanded into the market by acquiring streamer Hotstar (via Twentieth Century Fox) in 2019, Disney launched a mobile service over which it offered exclusive live viewing of the IPL for around $4 a month.

It then lost IPL streaming rights to Reliance’s Viacom18 in 2022. Reliance undercut Disney by offering the tournament for free on its own streaming platform, Jio Cinema, and Disney+ Hotstar shed 4.6 million customers in the first three months of 2023 as a result, followed by 12.5 million in the quarter ending July 1.

“The fact that digital rights value was higher than television showcases the scale and future potential of streaming in India,” Mihir Shah, VP of Media Partners Asia, told the BBC when the IPL rights deal was struck in 2022.

 


Tuesday, 5 March 2024

Tag, Search, Serve: What You Need to Know About Analytical AI

NAB

article here

Generative AI can be used to create audio, stills, and video, but something often overlooked is how useful Analytical AI can be. In the context of video analysis, this involves facial or location recognition, logo detection, sentiment analysis, and speech-to-text, to name just a few. Analytical tools are the focus of a Michael Kammes podcast, “AI Tools For Post Production You Haven’t Heard Of.”

“Welcome to the forefront of post-production evolution,” he says.

Kammes invites post-production chiefs to take a look at a number of analytical tools. These include StoryToolkitAI, an editing tool that uses AI to transcribe, understand content and search for anything in your footage, integrated with ChatGPT and other AI models. It began as a GitHub project by developer Octimot, runs on OpenAI’s Whisper and Python, and can be used on Blackmagic Design’s DaVinci Resolve among other professional editing systems.

“StoryToolKitAI transforms how you interact with your own local media. Sure, it handles the tasks we’ve come to expect from AI tools that work with media like speech-to-text transcription. But it can understand and execute tasks that it was never explicitly trained for,” he says.

He describes it as a “conversational partner. You can use it to ask detailed questions about your indexed content, just like you would talk with ChatGPT.”

Kammes likes that StoryToolkitAI runs locally, so users retain privacy, while the application itself is open source. He believes the app’s architecture is a blueprint for how things should be done in the future.

“That is, media processing should be done by an AI model of your choosing and can process media independently of your creative software. Or better yet, tie this into a video editing software’s plug-in structure, and then you have a complete media analysis tool that’s local, and using the AI model that you choose.”
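StoryToolkitAI’s own code lives in Octimot’s GitHub repo, but as a rough, hypothetical sketch of the kind of local transcribe-then-search step it builds on Whisper for, the following Python fragment shows the idea (the file name and search phrase are placeholders, not anything from the app):

```python
# Illustrative sketch only: a local transcribe-then-search step in the spirit
# of StoryToolkitAI, which builds on OpenAI's Whisper. "interview.mov" and
# the search phrase are placeholders; this is not the app's actual code.
import whisper

model = whisper.load_model("base")          # downloads once, then runs locally
result = model.transcribe("interview.mov")  # returns full text plus timed segments

query = "opening scene"
for seg in result["segments"]:
    if query.lower() in seg["text"].lower():
        print(f"{seg['start']:.1f}s-{seg['end']:.1f}s {seg['text'].strip()}")
```

Everything in the sketch runs on the local machine, which is the privacy point Kammes makes.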

While many analytical AI indexing solutions search your content based on literal keywords, others perform a semantic search by using a search engine that understands words from the searcher’s intent and their search context. This type of search is intended to improve the quality of search results.

This is what Twelve Labs seems to have cracked. Its tech can be used for tasks like ad insertion or even content moderation, says Kammes. “Like figuring out which videos feature running water or depict natural scenes like rivers and waterfalls, or manmade objects like faucets and showers,” he explains.

“In order to do this, you would need to be able to understand video the way a human understands video and what we mean by that is understanding the relationship between those audio and video components and how it evolves over time because context matters the most.”
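To make the keyword-versus-semantic distinction concrete, here is a generic, hedged illustration using open-source sentence embeddings; it is not Twelve Labs’ technology, and the model name and toy clip descriptions are assumptions made for the sketch:

```python
# Generic illustration of semantic (embedding-based) search versus literal
# keyword matching. This is NOT Twelve Labs' API; the model name and the toy
# clip descriptions are assumptions made for the sketch.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

clips = [
    "a waterfall tumbling over mossy rocks",
    "close-up of a kitchen faucet left running",
    "city traffic at night in heavy rain",
]
query = "videos featuring running water"

clip_vecs = model.encode(clips, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, clip_vecs)[0]
ranked = sorted(zip(clips, scores.tolist()), key=lambda x: x[1], reverse=True)
for clip, score in ranked:
    print(f"{score:.2f}  {clip}")
```

A literal keyword match for “running water” would miss the waterfall clip; the embedding comparison ranks it highly because the meaning, not the wording, is close.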

Cloud storage developer Wasabi Technologies recently acquired Curio AI, a technology developed by GrayMeta that uses AI and ML to automatically generate a searchable index of unstructured data. GrayMeta President and CEO Aaron Edell and his AI team are also joining Wasabi.

According to Kammes, speaking ahead of the acquisition announcement, “Curio isn’t just a tagging tool. It’s a pioneering approach to using AI for indexing and tagging your content using their localized models. Traditionally, analytical AI generated metadata can drown you in data and options and choices, overloading and overwhelming you. GrayMeta simplifies the search process right in your web browser.”

Wasabi is planning to give its users exclusive access to Curio. It will allow them to easily search their huge archives of unstructured data, something that was not possible before, the company said.

“Imagine walking into Widener Library at Harvard with 11 million volumes, and there’s no card catalog,” David Friend, CEO of Wasabi, told Joseph Kovar at CRN. “That’s what we have right now with unstructured data in the cloud. Our acquisition of this machine learning technology is really going to be the most important development since the introduction of object storage itself.”

He added, “Today unstructured data is still in the dark ages. I believe that what we’re doing here with Curio AI to automatically create an index of every face, every logo, every object, every sound, every word, will really revolutionize the utility of object storage for the storage of unstructured data.”

Wasabi plans to fully integrate Curio into its cloud storage, and not offer it as a standalone technology for other storage clouds.

“It’s going to be one integrated product, and it’s going to be sold by the terabyte just like our regular storage, but at a slightly higher price. And for that, you will get unlimited use of the AI,” Friend detailed.

Curio will automatically scan anything that’s put into Wasabi’s storage and produce an index which can then be accessed using the Curio user interface and one of several media asset management systems including Iconik, Strawberry and Avid. The company expects to go to market with the product later this year “with channel partners who sell into the media and entertainment industry.”

Wasabi even thinks its combination of object storage and Curio is a step ahead of Amazon, Google and Microsoft in terms of functionality.

“The hyperscalers can’t do what we’re doing with Curio. I mean, they have a toolkit, and you can assemble something like this if you have the time and money. But there’s nothing equivalent to this that anybody else is offering as far as I know.”

Next, Kammes addresses Code Project AI Server, which handles both analytical and generative AI. He describes it as “Batman’s utility belt,” where each gadget and tool on the belt represents a different analytical or generative AI function designed for specific tasks.

“And just like Batman has a tool for just about any challenge, Code Project AI Server offers a variety of AI tools that can be selectively deployed and integrated into your systems, all without the hassle of cloud dependencies.”

This includes object and face detection, scene recognition, text and license plate reading, and even the transformation of faces into anime-style cartoons. Additionally, it can generate text summaries and perform automatic background removal from images.

The Server offers a straightforward HTTP REST API for integration into a facility or workflow. “For instance, integrating scene detection in your app is as simple as making a JavaScript call to the server’s API. This makes it a bit more universal than a proprietary standalone AI framework,” says Kammes.
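As a hedged sketch of that kind of integration (in Python rather than the JavaScript Kammes mentions), a request against a self-hosted server might look like the following; the host, port and route follow Code Project AI Server’s documented defaults, but treat them as assumptions and verify against your own installation:

```python
# Hedged sketch of calling a self-hosted analytical-AI HTTP endpoint, in the
# style of Code Project AI Server's scene-recognition module. The host, port
# (32168) and route below are assumptions based on the server's documented
# defaults; check your own install before relying on them.
import requests

with open("frame.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:32168/v1/vision/scene",  # assumed default endpoint
        files={"image": f},
        timeout=30,
    )

resp.raise_for_status()
data = resp.json()
print(data.get("label"), data.get("confidence"))
```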

It also allows for extensive customization and the addition of new modules to suit specific needs.

Finally, Kammes highlights Pinokio, “a playground for you to experiment with the latest and greatest in generative AI.”

Pinokio is a self-contained browser that allows you to install and run various analytical and generative AI applications and models without knowing how to code. It does this by taking GitHub code repositories (called repos) and automating the complex setup of terminals, clones and environment settings. “With Pinokio, it’s all about easy one-click installation and deployment, all within its web browser,” Kammes insists. “It enables you to experiment with various AI services before they go mainstream.”

It is already chock-full of diverse AI applications to play with, from image manipulation with Stable Diffusion to voice cloning and AI-generated video tools. “Pinokio helps to democratize access to AI tools by combining ease of use with a growing list of modules. As AI continues to grow in various sectors, platforms like this are vital in empowering users to explore and leverage AI’s full potential. The cool part is that these models are constantly being developed and refined by the community,” Kammes says.

“Plus, since it runs local and it’s free, you can learn and experiment without being charged per revision. Every week there are more analytical and generative AI tools being developed and pushed to market.”

 


Monday, 4 March 2024

The Creative Possibilities for AI Script Generators

NAB

In marketing and creator content, a growing trend involves using AI for text generation. Whether you’re looking to brainstorm ideas for your new promo video or want a detailed script for your demo video, you can jumpstart the process with an automated scripting tool.

article here

AI writing tools are only as good as your prompt. For this reason, many AI writing tools offer templates and the ability to enhance your prompts. The best practice is to iterate with the tool, asking it to refine and change the results until you achieve your desired outcome.

“The first attempt probably won’t be good,” advises reviewer Conner Carey at SproutVideo. “But if you instruct the AI to rewrite the script with specific additional parameters, the output improves exponentially, producing a decent script to expand manually.”
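As a hedged illustration of that iterate-and-refine loop, here is what it might look like when scripted directly against the OpenAI API, which several of the tools below wrap behind their own templates; the model name, prompts and product are placeholders, not SproutVideo’s or any vendor’s actual workflow:

```python
# Hedged sketch of the iterate-and-refine prompting loop described above,
# written against the OpenAI Python client (v1+). The model name, prompts
# and product are placeholders, not any vendor's actual workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

history = [{
    "role": "user",
    "content": "Write a 60-second promo video script for a smart water bottle.",
}]
draft = ask(history)

# Feed the draft back with specific additional parameters, as Carey suggests.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "Rewrite the script in a playful tone with two speakers, "
                "under 150 words, ending on a clear call to action."},
]
print(ask(history))
```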

There’s more from SproutVideo about how to easily write a video script.

Some AI writing tools are built for marketing, storytelling, or both. When choosing an AI tool, review which AI model the tool employs. Some tools use older versions of OpenAI’s ChatGPT or a proprietary AI model.

For businesses and creators needing to produce a constant flow of engaging content, there are a number of AI-powered tools available to speed and polish text. Here are some of them.

ChatGPT is the most famous of the bunch. The chatbot helped generative AI enter the mainstream with its launch in November 2022 and has continued to grow in popularity.

Creators work with ChatGPT to support script writing or to get past writer’s block, supplementing the work of scriptwriters.

For example, ask it to give you five ideas for an explainer-style video, copy and paste an existing script outline or even a blog article, and ask it to generate a polished script.

Social-media marketing strategist Laura Bitoiu recommends ChatGPT as the number one tool for beginners in AI because of its ease. “There are basically endless options,” she told Business Insider. “I’ve been using it to brainstorm content ideas, write sales pages and copy, and create new digital products.”

Bard is the Google-developed chatbot. It can generate text, translate languages, and write content, among other uses. When a user enters a prompt into Bard, the chatbot forms a response based on information in its database or another source, like Google’s other services.

As with ChatGPT, creators and industry insiders have found various use cases for Bard, from support with idea generation to light editing to writing short text.

Social media consultant Matt Navarra told Business Insider he found Bard useful in generating alternative text and descriptions for images he uses in his newsletter and other types of content.

Unlike ChatGPT, the Vidyard AI script generator is specifically designed for sales videos. Marketers might then use a tool like Vidyard Video Messages to record a script.

Jasper is “a significant step above the competition,” rates Carey, and not (only) because of its AI. “Jasper allows you to save information about your company, brand, products, campaigns, and more. When generating copy, Jasper includes specific details based on the context of the prompt.”

It has a prompt enhancer that does the work of generating a more detailed prompt based on your input. Jasper also includes a template specifically for script writing.

Jasper also utilizes a number of large language models, including those from OpenAI, Anthropic, and Google. “This allows the platform to generate the most accurate and dynamic content across subjects,” Carey explains.

WriteSonic is considered by Carey to be a less expensive alternative to Jasper. It offers multiple script-writing AI generation tools, including TikTok scripts and video outlines. But it lacks Jasper’s branded customization and doesn’t employ AI models beyond ChatGPT, he says.

AIContentfy uses natural language processing (NLP) algorithms and ML models to generate content that is customized to a business’s needs. The tool can create various types of content, including articles and blog posts. Users can specify their desired tone, style, and target audience, and the tool will generate content that matches these criteria. AIContentfy says it can help businesses improve their search engine optimization (SEO) efforts and generate content at scale.

Grammarly is an AI-powered writing assistant that checks for more than 400 types of errors in your writing, including grammar, spelling, punctuation, and sentence structure. Available as a browser extension or desktop app, Grammarly integrates with popular word processors like Google Docs and Outlook. It can also review your social media posts, emails, messages, and comments in real-time to ensure that your writing is “flawless and professional.”

Yoast SEO is a WordPress plugin that helps content creators ensure that their content is optimized for search engines. One of its most powerful features is the ability to create and analyze your XML sitemap automatically. This helps search engines like Google to better understand the structure of your website, making it easier for them to crawl and rank your content accordingly.

Wordsmith uses natural language generation (NLG) to produce “human-like writing.” One of its most significant benefits is its ability to produce localized content. This allows companies to create consistent, targeted communications with global customers in multiple languages.

Acrolinx provides guidance and recommendations to authors and content creators to ensure that their writings are in tune with company guidelines and writing style. For example, it highlights faulty grammar, repetitive text, or overused corporate jargon.

The verdict: Not (Yet) the Finished Article

While AI tools have come a long way in producing coherent content, they may still lack the creativity and finesse of human writers.

“One of the primary concerns when using AI tools for content creation is the quality of the output,” says the team at AIContentfy. “While some AI tools can produce well-written and coherent content, others may fall short in generating human-like language.”

They advise: “adding your personal touch and editing for clarity and coherence will ensure the content meets your quality standards.” When using AI content generation tools, crafting clear and precise prompts is key. Specific prompts yield more accurate and relevant results.

But don’t be afraid to experiment with various prompts and settings within the AI tool. “Test different tones, writing styles, or word limits to see what works best for your content goals.”

As it stands, reliance on AI for the entire text process isn’t feasible. It is wiser to use these tools as complementary to support your creative process.

“Use it to generate ideas, brainstorm topics, or draft initial content, and then add your unique perspective and insights to make the content more engaging and authentic.”

 


Weird is Wonderful: The Adventure of Editing “Poor Things”

NAB

Editor Yorgos Mavropsaridis has collaborated with director Yorgos Lanthimos for more than 20 years and knew from the first moment they met that he had to ditch all the rules he had learned.

article here

“The first question is ‘what is reality?’” he told Hayden Hillier-Smith in an extensive interview on The Editing Podcast about the making of awards-season favorite Poor Things.

“From the first collaboration I discerned that this is a guy who wants to say things in a different way, not the usual way we approach themes or character. For Poor Things I discovered many themes that, existentially if you like, are about how easy it is to be in a society which puts some rules on you.”

For Lanthimos, storytelling is not a didactic experience. “I want you to feel… no, it’s more loose, it’s more open to interpretations and feelings,” says Yorgos Mavropsaridis, who is Oscar-nominated again following his work on the previous Lanthimos drama The Favourite.

“All Lanthimos’ films desire a new kind of reality, which has certain rules how an individual can behave and questions whether this behavior is dictated by the character’s needs or by some external force. And of course, it’s the same with Bella Baxter.”

The lead character is played by Emma Stone in what has already been a BAFTA and SAG Award-winning performance.

Mavropsaridis says he still has to go against his instinctive approach to editing. “And I have to surprise myself as well, to create something new and not to repeat the same situations all the time.”

In all their previous films they had mostly used classical music, but for Poor Things the director commissioned Jerskin Fendrix to compose the music months before shooting started. It was not the exact music as it appears in the final film, but the general themes, so they could have them in editorial after the first cut.

Lanthimos also used a lot of this music on set, having done this previously on The Killing of a Sacred Deer (2017). “Different music was played back for [Stone] to somehow get inspired by the music — to have this surprise of — for the first time — seeing something. There was also music to set the inner rhythm or their external movements because Yorgos likes the choreography of the actors — not only the facial expressions — and this way, the movement, internal or external, is influenced.”

Almost every scene uses an extremely wide-angle fisheye lens. Mavropsaridis explains there was no discussion with the director about when to use them.

“The usual pattern was a fisheye lens, or the 4mm lens with the iris mask, then a long take with movement combination, zoom in or out with tracking shot. Usually, my editing brain needs a reason to use them.

“For example, the first time we used this 4mm lens was when Godwin Baxter went down the stairs, heard the piano playing, and then we cut to him. He looks at her and smiles. At that moment, I thought, ‘Okay, that 4mm lens would be a nice point of view from this strange man.’ Then the next time was when Max comes in, Bella runs and embraces Godwin Baxter like a baby. I thought it was funny: a grown-up woman being like a baby, maybe seeing it through Max’s eyes for the first time — this strange situation. There are always small reasons. Subliminally they might say something to a viewer.”

Another example is when they are on the cruise ship and Bella Baxter says to Duncan Wedderburn, “You’re in my sun!” so Mavropsaridis cuts to the 4mm lens when he throws the books away, “just to punctuate the situation. Different reasons all the time.”

It was the director’s idea from the beginning to have the first part of the film be a kind of homage to the old Gothic films shot in black and white. They then break that by introducing color imagery early on.

“It was broken in an interesting way when Godwin Baxter recites the story of Victoria Blessington: how he found her, being pregnant with the baby, was shot in color,” Mavropsaridis says.

“There was a good juxtaposition between black and white in the office narration and the color of her suicide and the discovery of her body, which also breaks interestingly the time continuum between the two situations that are kept continuous with his narrating tale. Then the rest of the film, after her leaving London, was in extreme color and also in different hues of color. For example, the first part in Lisbon was shot with color negative.”

The scene where Bella dances without a care in the world was edited “incorrectly” by Mavropsaridis initially. He felt the choreography should remain intact when in fact it had to be awkward. The creative idea was that the dance was “a microcosm of the big world of the film.”

“Of course, it was very nice to see her in a situation with other dancers, and I thought it was nice to keep this situation with the other people dancing around her that was so funny. But this was not what it was supposed to be,” he says.

“Bella is about 16 years old at that time. She sees people dancing for the first time, and the particular music excites her and she wants to dance, but she hasn’t danced before, her movements are rough and awkward, but she doesn’t care about what other people would think. And we didn’t have to care if her movements were choreographed or ‘correct.’ It had to be spontaneous,” he continues.

“Everybody wants to control her, so the main part of the choreography we had to keep were these movements: When Duncan puts his arm around her, trying to manipulate it, and she reacts, trying to free herself. This dance scene is a microcosm of the whole life situation.”

Once they had reached the point where everything was in place, the cut was three-and-a-half hours. Then they had to deconstruct the whole thing.

“We have constructed it. Now let’s take it apart and see what we can do to try this or that. He’s very precise in what he wants, but usually, the edit has to improvise on how to achieve it,” Mavropsaridis says.

“He doesn’t say much, but since we’ve edited together for almost 25 years now, I know what he means, and I know which way I have to tackle it. I have a lot of freedom from him to try things, even if they were not discussed. If I have an inspiration in the middle of the night, I will do it,” he continues.

“Maybe it works, maybe it doesn’t. After many trials and errors, many hours, and many films together, we have reached a very understanding way of working. I believe that Poor Things was an easier film to edit.”

A discussion at the dinner table about marrying Bella includes flash forwards and flashbacks. This was composed in the edit to cut length and keep the story moving, Mavropsaridis told Steve Hullfish on the Art of the Cut podcast.

“It is a method that we developed on the film we did together, Dogtooth, because Yorgos likes to shoot his films in continuity. He doesn’t edit during the shoot, so in editorial we felt that this big scene with a lot of discussion going on needed to be compressed.”

Typically, editor and director will have a few issues that can only be resolved in the edit, but there is now a telepathic connection between the pair that is only the result of like minds working together for so long.

“There was a problem about a scene on the cruise ship,” he told CinemaEditor magazine. “While Yorgos was emailing me I sent over my solution and he said, ‘That is exactly what I have in mind.’ I have reached a point of being able to understand his thoughts without talking to him. After so many years I know what the small things are that bother him and what he tries to achieve. At the same time, he has helped me to overcome my laziness of the mind, so it is now easy to me to throw a scene out and do it a different way.

“I always have in my mind Lanthimos’ own phrase — ‘Is that all we can do?’ So I have to prove each time we can do more and better.”

 


Sunday, 3 March 2024

Generative “Eno” Documentary Reshapes the Film for Every Viewing

NAB

A randomized documentary about the career of legendary electronic music pioneer Brian Eno, in which every screening is potentially and infinitely different, is the latest project to be served up by generative AI.

Eno is billed as the world’s first generative cinematic documentary. “Like a musical performance that’s different every night, the film creates a unique viewing experience for each audience that takes it in,” explains Matt Grobar at Deadline.

article here

The 75-year-old British music producer and visual artist, who has worked with David Bowie, U2, Grace Jones and Talking Heads, and who birthed the ambient music genre and frequently mixes technology with art, is ripe for a video retrospective.

“I usually can’t stand docu-bios of artists because they are so hagiographic,” Eno told Variety’s Todd Gilchrist.

So, rather than charting a chronological path through Eno’s career, documentarian Gary Hustwit proposed using a generative system to create a film that would literally be different for every audience that screened it.

“The use of randomness to pattern the layout of the film seemed likely to override any hagiographic impulses,” Eno said.

If that was enough to pique Eno’s interest in the project, for Hustwit the approach was about provoking new ways of creating and experiencing a film.

“I like movies where you learn different things about the subject, but you, as the viewer, make the connections… I always think that’s a lot more rewarding, as a viewer. It’s a different kind of filmmaking, but it’s also a different kind of film watching.”

It helps that the first and last scenes of the 85-minute doc are always the same. Plus, there are certain scenes pinned to the same timeslot in each version, including a scene where Eno discusses generative art.

“We thought that was probably a good scene that everybody should see,” Hustwit told Lauren Forristal at TechCrunch.

Everything else, however, can be different, depending on the material the generative program decides to insert.

“It’s kind of a modular approach,” Hustwit explained to Forbes’ David Bloom. “You can learn different facts about that person at different times in the film. In the end, you make the connections as a viewer.”
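Anamorph’s software is proprietary and patent-pending, so the following is purely a toy illustration of the sequencing rules the filmmakers describe: fixed opening and closing scenes, a few scenes pinned to set slots, and the rest drawn at random from a much larger pool. None of the names or numbers come from the film.

```python
# Toy illustration of pinned-plus-randomized sequencing as described in the
# article: the first and last scenes are always the same, a few scenes are
# pinned to fixed slots, and the remaining slots are filled at random from a
# larger pool. This is NOT Anamorph's Brian One system; every name and number
# here is invented for the sketch.
import random

def build_version(pool, opening, closing, pinned, total_slots):
    """pinned maps a slot index to a scene; other slots are drawn from pool."""
    playlist = [None] * total_slots
    playlist[0], playlist[-1] = opening, closing
    for slot, scene in pinned.items():
        playlist[slot] = scene
    used = {opening, closing, *pinned.values()}
    fillers = random.sample([s for s in pool if s not in used],
                            playlist.count(None))
    fill = iter(fillers)
    return [scene if scene is not None else next(fill) for scene in playlist]

pool = [f"scene_{i:03d}" for i in range(500)]
version = build_version(pool, "opening", "closing",
                        pinned={10: "generative_art_interview"},
                        total_slots=40)
print(version[:3], "...", version[-1])
```

Run twice, the sketch produces two different orderings that still share their first scene, last scene and pinned slot, which is the behavior the filmmakers describe.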

Like one of media artist Refik Anadol’s AI creations, Eno is going to be different each time it is screened. That poses a problem for film critics, Bloom points out.

To Deadline, Hustwit explained, “There are billions of different combinations that could possibly exist of this movie, and every time you watch it, you’ll never see that version again. So, it’s an interesting experiment. We can change the way that the form of film works, [so] let’s talk about the possibilities.”

Hustwit had another reason for making the film this way, too. It’s a showcase for the generative tool (cutely dubbed Brian One) that he built with digital artist Brendan Dawes through their startup company, Anamorph.

The tech was trained to select scenes from over 500 hours of archival footage and new interviews with Eno, as well as animated visuals and music, to produce the unique iterations of the doc.

Anamorph spent five years building the software, combining patent-pending techniques with the team’s own knowledge of storytelling. The company says it’s not trained on anyone else’s data, IP or other films.

“The main challenge was creating a system that could process potentially hundreds of 4K video files, each with its own 5.1 audio tracks, in real time,” Dawes tells TechCrunch. “The platform selects and sequences edited scene files, but it also builds its own pure generative scenes and transitions, creating video and original 5.1 audio elements dynamically. The platform also needed to be robust in a live situation, it wasn’t an option to have it crash. So, we did a crazy amount of testing. We can create a unique version of a film live in a theater, or we can render out a ProRes file with its own 5.1 audio mix and make a DCP from that.”

He also stresses, “This is a generative system, not generative AI. I just need to make that clear, because pretty much everything that’s been said about Eno uses the word AI.”

Advertising agencies have apparently expressed their interest, Hustwit reveals to TechCrunch, with one company wanting to make 10,000 versions of a one-minute commercial.

Rather than make its tools publicly accessible, the company wants to collaborate on projects so it can “consider the source material and the overall story goals,” says Hustwit.

“Our main goal is to get the idea out about this new kind of cinema and hook up with great collaborators to help explore this idea.”

Hustwit ponders what an experimental form-pushing director like Jonathan Glazer (The Zone of Interest) could do with something like this.

“You could make a movie that’s always on, always evolving, always changing,” Hustwit told Forbes. “I feel like Eno, it’s really kind of an opening conversation. What’s next? What can we do with this?”

A streaming service such as Netflix — which has played with interactive forms of video — could easily generate a different version of the documentary every day, Hustwit added.

However, to TechCrunch he poured cold water on the idea, saying that streaming networks aren’t equipped to dynamically generate unique video files and stream them to thousands of viewers so that each viewer is getting their own version of a movie.

“When we premiered Eno at Sundance, all the big streaming companies loved it, but they also admitted that their systems can’t handle the tech involved… These streamers need to differentiate, and I think enabling the films and shows they’re releasing with generative technology is a way to do that,” says Hustwit.

It’ll likely take years before streaming services adapt to the technology. Until that happens, Anamorph is sticking to live events and theatrical releases.

“Something that the theater industry badly needs right now is a reason to get people to come in, and if there is a uniqueness about the live cinema experience, that’s one way that can be achieved,” he adds.

Six versions of the film will be shown at Sundance, with additional screenings across 50 cities to be presented later this year.