Thursday, 3 August 2023

Building the “Barbie” Dream Edit

NAB

The art of the edit lies as much in choosing what to leave out as what to keep, and so it proved with the year’s runaway hit, Barbie.


In charge of the cutting room was Nick Houy, ACE, making Barbie his third film, after Lady Bird and Little Women, with actor-writer-director Greta Gerwig.

The normal challenge of rhythm and pacing becomes even more acute with a story that is as intentionally arch and anarchic as the one written by Gerwig and Noah Baumbach. Since every joke had to count and had to work while the film is moving at speed, it was important for Houy to stress-test them over and over again.

“Barbie was so much more a comedy than Lady Bird and Little Women,” Houy told IndieWire’s Sarah Shachat. “So we were just, like, ‘Let’s put it in front of people and see how they react.’ Everyone’s different and every screening’s different and we’ve definitely learned, over the years, that you really have to let things have their fair chance and then act accordingly. Once you know it’s dead, you have got to get it out of there.”

Houy also spoke with Matt Feury at The Rough Cut, where he again picked up the idea. “The whole fun of this job is trying crazy ideas. It might be terrible and you’ll do six things and one of them will be great,” he said.

The editor relates one experiment in which Kate McKinnon, who plays Weird Barbie, looks down at Barbie, who is lying on the ground.

“She’s like, ‘Hey, how’s it going, Barbie?’ And then we flash to, like, a Weird Barbie with makeup all over her face and this, like, horror music sting. You know, it’s such a weird idea. But it was so great. And that ended up in the movie.”

The beach scene near the beginning of the movie where the dialogue is basically lots of “Hi Barbie” apparently went through more than 50 iterations.

“Some of them are completely abstract works of art that were worked on by multiple people with multiple different ideas, literally things that could be in the Tate Modern or they could be in a 1970s avant-garde screening. We went there with everything. And so that’s why some of it survived and some of it didn’t. But it was all kind of amazing.”

Houy agreed that the main challenge of editing Barbie was providing clarity over the course of a number of turns across the film — some of which hinge on expressive internal realizations of Barbie confronting the reality of women in the real world.

“There’s always some person that has an issue with these structures. Getting it down to that one person instead of half the audience was a big challenge,” Houy told IndieWire. “But it’s worth it. We get excited by that. We’re always talking about Charlie Kaufman movies and trying to do [things like] that in a way that feels like our own voice.”

There are roughly 1,500 VFX shots in the film, which added wrinkles to the post workflow, VFX editor Matt Garner explained to Feury.

“We had to basically turn over everything once early on so that the executives could see it without blue screen in it. And then we had to redo all the work again. So tracking and managing that with all the various vendors we had was quite an undertaking, the most I’ve ever had to deal with.”

Gerwig screened several reference movies for key crew, including The Wizard of Oz, Singin’ in the Rain, Saturday Night Fever, Close Encounters of the Third Kind, Women on the Verge of a Nervous Breakdown, The Red Shoes, Oklahoma!, Wings of Desire, and The Philadelphia Story — even Rear Window. Obvious homages in Barbie include the entire 2001: A Space Odyssey pre-title sequence and The Godfather.

“All of those were done at a movie theater in London while they were shooting,” says Houy. “So it was like every Sunday, they would go do that. Our whole crew was in New York, but we watched them all. And those are all things that we talked about early on. I would often just sit and watch a scene of The Godfather, and be like, ‘They’re not cutting at all… we really should do that.’

“The tone of things like Singin’ in the Rain was very helpful to understand this crazy dream dance sequence.”

The non-stop jokes and surrealism of much of the movie give way in a couple of places to contemplative pauses that are in many ways the film’s emotional core.

The final montage, for example, began life as a script note along the lines of “a Terrence Malick-esque sequence occurs” and went through various iterations in the edit before the filmmakers agreed to try selects from home movies of the people who worked on the film.

Houy told Feury, “We just tried a bunch of stuff. We tried stock footage and never did [find anything] that ever quite worked. And so we started using old Super 8 footage and our own footage. It was a constant evolution. In that sense it was like a film school where we’re all just putting together little pieces of footage and trying things out.

“And where we landed was ultimately the right place where it’s just women. It’s telling the story of becoming human and becoming a woman. And that was what we needed to tell at that moment.”

“Even though we don’t have a sign up that says, ‘This is footage [of] the people who made the film,’” Gerwig adds to IndieWire, “I think in some unconscious way, it’s a reminder that films are only ever made by people. And these were the people that made this one.”

 


Wednesday, 2 August 2023

The Studio Sustainability Standard Aims to Go Global

NAB

No one should need to have the importance of sustainability explained to them, so film and TV studios should welcome the chance to benchmark their business with the Studio Sustainability Standard.


Organized by the UK’s BAFTA through its albert scheme, this is a voluntary, global standard for studio facilities. With its first year, involving 12 studios, now complete, the program is keen to enlist more studios and suggests that not doing so will be detrimental to a studio’s brand reputation.

“This commitment to sustainability builds a positive brand image, attracting environmentally conscious partners, investors who want to support sustainable productions and meeting audience demands for more sustainability on and off screen,” says albert Project Lead Steve Smith.

Research published in 2020 found that the average tentpole film production — a film with a budget of more than $70 million — generates 2,840 tons of carbon dioxide emissions.

In response, albert worked with a range of film industry stakeholders, with Arup as technical partner, to develop the Standard.

It has three primary aims: to guide studios in the practical steps they can make to become more sustainable; to act as an incentive for studios to clean up their acts, encouraging studios to collect data on their own sustainability progress; and to bring that data together to create a picture of the progress of the industry as a whole.

The dozen studios that participated in the first year were all from the UK with the exception of Sony Pictures Studios in Culver City, California. Others included 3 Mills Studios, BBC Studioworks (which came out on top), Elstree Studios, Maidstone Studios, Warner Bros. Studios Leavesden, and Wolf Studios Wales.

Results from the Studio Sustainability Standard 2022/23 Report. Cr: albert/Arup

While commending the 12 for getting involved, Smith somewhat icily notes that dozens of others have so far failed to engage.

“In a world grappling with the urgency of climate change, 12 studios have stepped up to the plate, embracing their responsibility and driving a transformative shift towards sustainable practices in a commitment to align with net-zero pathways. This report celebrates the studio trailblazers who are helping to redefine what it means to be an environmentally conscious industry.”

 

In this first year, five studios were awarded the rating of “Very Good” — not the highest grade possible, but that is to be expected, particularly when many older studios have legacy buildings with poor insulation and inefficient energy use.

More significant than the rating, according to albert, “is the fact that 12 leading studios committed to measuring and reducing their environmental impacts.”

As sustainability rises up the agenda, these studios will be able to use their albert certifications to gain an edge over competitors when booking business. Quite how much the sustainability of a studio facility plays into the thinking of executives when budgeting and planning where to locate shows is another matter. Anecdotally, reports suggest that this varies from production to production, with some film and TV clients keen to show corporate and social leadership (Netflix is often cited) and others paying mere lip service.

Carys Taylor, director of BAFTA albert, says, “The Studio Sustainability Standard ratings badges allow studios to show off the progress they’ve already made and benchmark the progress yet to come. And productions will know where to go to get support for their own sustainability missions.”

Highlights from the Studio Sustainability Standard 2022/23 Report. Cr: albert/Arup

This idea of competitive edge is being used as the carrot to entice more studios to join up for the second round of the report.

The performance rating of the next round is valid for 12 months from the date it is issued (April 1, 2024).

Studios failing to reach the minimum standard will be awarded a “participated” badge for that year.

Submissions are processed by an experienced team of data analysts at albert and Arup. The standard itself will be updated every two years following input from a steering group of experienced studio facility operators, trade bodies and producers.

The cost for a studio to be involved is dependent on the square footage of its sound stages.

Among factors taken into account are electricity consumption, reliance on diesel generators, waste disposal, water efficiency and transport emissions.

The scorecard prioritizes more impactful measures including the use of LED lighting across the studio, incentivizing productions to utilize LED lighting providers and ensuring the studio has achieved 100% renewable electricity sourcing.

Credits can also be gained for on-site renewable energy generation, such as solar panels or wind turbines, and for zero plastics use.

“For studios that participated in 2022, the response was overwhelmingly positive in terms of the quality of submissions and the performance that it revealed,” Smith said.

“We hope that studios will reflect on their feedback and seek out ways to be even better in 2023. In many cases it was clear from submissions that studios are already on an improving trajectory with additional measures in the pipeline. We hope that studios will take confidence from initiatives that others have in place to make further improvements.”

Studios wanting to get involved in the next round of the report need to submit before the end of November 2023.

 


Tuesday, 1 August 2023

Futurists Agree “AI Won’t Be the Hollywood Version”

NAB

The speed at which AI is advancing has shocked most experts in the field, but some think our fear is misplaced and that there is actually a lot to be optimistic about.


One of them is futurist Sinead Bovell, who contends that the current foothills of AI are like the internet in the early days of email.

“Since we don’t really know how things are going to transpire, how things are going to evolve, we’re tuning in a lot to Hollywood’s version of the future. Of course, some dystopian futures are possible, but I don’t think that’s where we necessarily have to end up,” she said. “There are a lot of amazing people working on things like AI safety and alignment. So I think we have a good shot, if we can get our act together.”

Bovell was speaking on the “Futurists” episode of Bloomberg’s AI IRL podcast, where she predicted that nearly a quarter of the workforce will be disrupted by artificial intelligence over the next five years. But that doesn’t necessarily mean their jobs will be eliminated by automation — more like augmented by AI.

“For sure, certain tasks will get automated, but that’s different than an entire job,” she says. “It doesn’t matter what job you’re in, you have to figure out how to start using AI tools. Over the next 15 years, most of the jobs [impacted by AI] probably haven’t been invented yet — like a social media manager didn’t exist 15 years ago. And now, if a company doesn’t have one it’s toast.”

Proclaiming himself to be “very excited” and “incredibly optimistic” about the future of AI, Kevin Kelly — Senior Maverick at Wired — likens the transformative power of AI to electricity, the printing press, and even language.

 

“I’m optimistic because so far the benefits certainly outweigh whatever negatives and problems there are,” he told Bloomberg in the same episode. “I think that the problems are smaller, and fewer than we think [and] I think our capacity to solve the problems are greater than we think. So just as AI’s problems are new and powerful, our ability and will to solve them is also increasing.”

Nor does he think that the impact of AI on society will happen as fast as some fear.

“This a centuries-long journey that we’re on. We’re gonna be having this conversation for the next century. So we have time to adjust and we’re already rapidly adjusting to these things as [new versions] come out within months. The versions are incorporating the objections that people have — whether it’s copyright or bias — and that’s one of the reasons it gives me optimism about our ability to control this as we go forward.”

Kelly points out that AI is not a monolithic entity.

“There are going to be many AIs, many varieties, many species [of AI]. We’re seeing that happening already. The kinds of AIs that might drive your car can be different from the ones that are doing the translation from one language to another in real time, which might be different than the ones that you’re using to make an image. We certainly can generalize some aspects of them but I think it’s very important to make sure that we talk in plurals.”

Some of these AIs are going to be conscious, he predicts, but any consciousness will be added deliberately by humans for specific use cases.

“Some of them may have a little bit of consciousness [but] it’s not binary, it’s kind of a gradation with many varieties. Consciousness is not necessarily something we’re going to put into most AIs, because it’s a liability in most cases.”

 


Pixar’s Elemental Lays Foundation for AI-Powered Workflow

IBC

New Pixar animation Elemental is the Walt Disney Company’s most technically complex feature film to date and required a new data storage pipeline that lays the foundation for the use of AI.


“We are not actively using AI yet, but we have laid the foundation,” began Eric Bermender, Head of Data Center and IT Infrastructure at Pixar Animation Studios.

“One thing we have done is taken our entire library of finished shots and takes for every single feature and short - everything we’ve ever done, even before 1995’s Toy Story - and put it all online and available, all sitting on the VAST cluster.”

He continued, “As you can imagine, all that data could in the future be used as training data. We’re talking not just final images but all the setup files used to generate those images as well. The library is valuable as training data, but the actual applications themselves don’t exist at the moment.”

The data-intensive animation technology used to make Elemental would not have been possible without deploying a data/storage platform from VAST Data.

AI-Powered Volumetric Animation

“Traditional animation uses geometry and texture maps, with the geometry deformed by a skeletal system,” Bermender explained to IBC365. “For example, if you saw a shot of Buzz Lightyear walking, the geometry and texture maps will be the same from frame to frame, albeit deformed in some particular way.

“Those assets might be large but they don’t change by frame so we can cache them. However, volumetric characters don’t have that. Every single frame is a new simulation. We lost the ability to cache because everything is unique per frame and the IOPS (Input/output operations per second) went up significantly.”
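The caching difference Bermender describes can be sketched in a few lines. The snippet below is purely illustrative and not Pixar's pipeline (the character names, read counts, and use of `lru_cache` as a stand-in cache are assumptions): a geometry-based asset is read from storage once and then served from cache, while a volumetric character triggers a fresh simulation read on every frame, which is why the IOPS climb.

```python
from functools import lru_cache

# Count of storage reads the render farm would issue in this toy model.
io_reads = 0

@lru_cache(maxsize=None)
def load_geometry_asset(name: str) -> str:
    """Static asset: read from storage once, then served from cache."""
    global io_reads
    io_reads += 1
    return f"mesh+textures:{name}"

def load_volumetric_frame(name: str, frame: int) -> str:
    """Volumetric character: unique simulation data for every frame,
    so nothing can be cached across frames."""
    global io_reads
    io_reads += 1
    return f"simulation:{name}@{frame}"

def render_shot(frames: int) -> None:
    for frame in range(frames):
        load_geometry_asset("buzz")            # same files each frame: 1 read total
        load_volumetric_frame("ember", frame)  # new data each frame: 1 read per frame

render_shot(frames=24)
print(io_reads)  # 25: one cached geometry read plus 24 per-frame simulation reads
```

In this model, storage traffic for the cacheable character stays constant while the volumetric character's traffic grows linearly with frame count, mirroring the loss of caching the quote describes.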

In Elemental, directed by Peter Sohn, characters representing the four elements (air, fire, water and earth) live in proximity (though, of course, elements can’t mix…) in and around a society known as Element City. These characters don’t have traditional geometry and texture maps but are volumetric, or simulated.

“This means that every time the animation team iterates the frame it creates a new simulation and that meant that our compute and store capacity needed started to accelerate quickly. Instead of one geometry file and one set of character maps, now every single frame is a unique simulation of that character.”

Pixar’s first experiment with volumetric animation was in creating the ethereal, ghost-like characters in the ‘great before’ of Soul (2020). That was also the first project on which Pixar worked with VAST.

“With Elemental the characters are much more animated [than in Soul] and every single character is a volumetric character. Even some of the background set pieces and buildings are volumetric animations. Soul was our practice run; Elemental is the full deal.”

Faster Storage for an AI Future

VAST uses all-flash storage as a replacement for the 20-to-30-year-old storage paradigm based on hard disk drives (HDDs), tape, and data tiering. Its architecture allows Pixar to hold information in computer memory, available for rapid access.

For context, Toy Story (1995) utilised just under 300 computers, Monsters, Inc (2001) nearly 700, and Finding Nemo (2003) about 1,000.

With Elemental, the core render farm on Pixar’s California campus boasts more than 150,000 computers to render nearly 150,000 volumetric frames and 10,000 points of articulation for each of the main characters, Wade and Ember. By contrast, typical Pixar character models only have about 4,000 points.

Elemental generated six times the data footprint and computational demand of Soul. By moving 7.3 petabytes of data to a single datastore cluster, VAST provides real-time access to keep Pixar’s renderfarm constantly busy.

“In the past, we would have to segment separate [Pixar film projects] onto separate network interfaces,” explained Bermender. “We did that because a show that’s in active production has historically generated the most IOPS and capacity growth as we render out.”

However, the new IT system now allows for shows that are in development to be able to trial new methods of animation with an efficiency not previously possible.

“Maybe we are working on a new environment or new character that’s never done before and we hit go for render and it overwhelms the cluster with IOPs. Now, with VAST, we can segment different projects with different paths to the storage and data resource and it doesn’t slow the whole pipeline down.”

He reveals that during production somebody accidentally set the whole system to regenerate every single character and shot overnight.

“We didn’t notice until the next morning that the system had written out as much data as all of Toy Story 3 in a 12-hour period. The system itself was performant and able to do it. It was pretty amazing to me that we literally rendered out the entire footprint of a movie we only made in 2010 in just a few hours.”

Given this boost in rendering speed you would think that the typically lengthy multi-year process of creating an animated feature could be reduced.

Bermender disagreed, saying that even as compute and storage tech advances, the animators will take advantage of that capacity to create more complex images.

“As we create the ability to iterate faster it frees the creative process for artists to create more complex scenes resulting in the same amount of time needed to render an image. Animators will work on a scene during the day and send a job to render overnight. That job has to be done by the next morning so by the time the animators come in they can begin work on dailies.”

He added, “Artificial Intelligence has the potential of enabling more creative and complex images than perhaps we see now but I don’t think it will actually reduce the time taken to render them.”

Greater Processing Capacity adds Flexibility

The ability to deliver large volumes of data at render time will help Pixar as it prepares to leverage AI for future films.

For instance, RenderMan, Pixar’s own rendering software that paints the final images, recently released the machine learning algorithm ‘Denoiser’ to the market.

“We’ve been using Denoiser for a long time. We take old shots and curate them, and RenderMan uses these curated copies of the images as training data so it knows how to smooth out noise during path tracing. To do that successfully the denoiser has to be ‘aware’ of what is in the scene.”

He says the type of AI image manipulation that solves a practical problem is more useful than the more generic type of image generator.

“It’s one thing to generate an image using something like Midjourney, quite another to do it for animation storytelling where you need to have control.”

 

How a Black Hole of Content is Crushing TV Into Oblivion

NAB

If COVID accelerated latent trends in the entertainment business, the WGA and SAG strikes are tipping the scales into a wholesale restructuring of its economics.


Netflix, Amazon or Apple won’t suffer while production is grounded, but legacy media with arms in both digital and linear may be irrevocably lost in the shake-up.

As Drew Harwell and Taylor Lorenz at The Washington Post report, “Hollywood’s business model has rarely looked so precarious, with box office sales, streamer subscriptions and advertising revenue all trending down.”

For broadcast networks the strikes represent “apocalypse now,” according to Josef Adalian at Vulture, which has canvassed largely off-the-record opinions of industry players.


“Network TV was already in a bad place, and this is really going to kick it in the nuts,” one broadcast exec told the publication. The biggest danger is that the audiences who have stayed loyal to the broadcast ecosystem may finally give up and give in to the streaming dark side. The broadcast exec said they expected ratings to plunge by 30-40% as a result of the de facto cancellation of the fall season.

Pandemic lockdowns accelerated a headlong rush to streaming as studios tried to compete with Netflix and made subscriber growth their top priority. The result was content saturation and a seemingly futile quest to find a sustainable business model. Disney+, for example, lost four million subscribers in the first three months of the year and made a loss of $659 million.

By letting the strike drag on, legacy companies such as Disney, NBCUniversal, and Paramount Global are risking real damage to both their linear and digital businesses.

Shockingly, they may even be complicit in making this happen. “It’s like Stockholm syndrome,” Law & Order: SVU showrunner Warren Leight told Vulture.


Privately, some corporate execs told Vulture that they don’t disagree that a protracted strike could be devastating to the network model. But they also argue that striking workers — particularly those in the WGA — should be just as worried. Even if the guilds achieve most of their goals, if the result is a dramatically weakened network TV ecosystem, that will mean far fewer of the good-paying, residual-producing, back-end-yielding broadcast jobs.


“The moment the strike was announced, ABC announced an all-reality schedule,” another “network insider” confesses to Vulture. “There’s no going back to a majority of the schedule being scripted. It’s not going to happen. I think the writers have a lot of legit grievances. But some of the best jobs they’ve ever had are going to be gone after this… This feels to me like we’re going to come out of this strike and everybody’s going to lose.”


Disney boss Bob Iger has even put the for-sale sign up over legacy TV assets like ABC. While Iger’s own multi-million-dollar pay packet is a lightning rod for striker ire, it is a sideshow, according to one business report: cut Iger’s pay by 75% and you do not fill the $1.8 billion hole in the Disney balance sheet from just the past two quarters of Disney+ streaming losses, or do much to alleviate its bloated debt.


The rock-bottom pricing strategy, which Iger put in place along with piling on debt, won’t work, but there’s only so much Disney can hike rates in a crowded market, even with what Iger confidently calls “pricing leverage.”


Netflix is not as exposed as the networks: it has a catalogue of content stockpiled for over a year, including international content, it has embraced ads, and it has cracked down on password outlaws.

Indeed, one of the ways that Netflix is managing to still make a profit in spite of all the instability and change in the industry is that it derives a healthy share of profit outside of the US.

“One of Netflix’s innovations is that it has connected the world from a programming perspective,” Lucas Shaw, MD for media and entertainment at Bloomberg, told NPR. “And so it has invested a lot of money in South Korea and in popularizing Korean dramas not just across Asia but around the world.”

 

Every day the strikes continue, the networks get weaker and Netflix gets stronger.

“I’ve gotta believe that Netflix is very happy to just sit back and let the networks burn,” another unnamed insider told Vulture. “Whether that’s by design or happy accident, I don’t know. But even if they don’t see the broadcasters as their main competition, everyone is competition in the video space. Now you’re gonna knock three or four of your competitors off who represent 20 percent of viewing.”


David Smith in The Guardian poses a bigger question: what exactly is the value of content? And specifically, what is it worth in an age of content saturation and impending AI script domination?

Screenwriter and voice actor Jared Butler tells the paper: “It starts with giving creative people a way to create well. A lot of the great content that people celebrate now, whether it’s from the 70s or 80s or 90s, those things that people go back to, those things that built libraries on streaming, that spawned all the sequels, was created when people could earn a living doing it.”

He adds, “There was a financial incentive to do great work and, if you take all that away, I don’t know what’s going to happen.”

Phil Alden Robinson, a writer and director whose credits include Field of Dreams, blames the tech companies for not understanding the industry. “They pride themselves on entering new industries and disrupting. They’re doing it here to the point where young writers who are not from well-to-do families can’t afford a career. Writers are no longer on the set learning how to become a showrunner. It’s unsustainable.”

 

Could the network’s pain be the creator economy’s gain? Already in demand, top influencers are now being courted by producers and studios hungry for content to fill depleted pipelines, writes Paula Parisi at ETC. Meanwhile, striking actors and writers are taking their ideas to YouTube, Instagram, TikTok and Twitch, where they can forge a direct relationship with viewers — albeit not one that will result in direct-deposit paychecks.

 

The online creator market will “likely double in size over the next five years,” from $250 billion today to half a trillion dollars by 2027, according to Goldman Sachs Research. With YouTube outperforming all services for June viewership, “‘there’s less incentive for people to stay on to see old libraries of content,’ and the industry ‘may start to realize that the creators are the only ones left to do business with,’” suggests The Washington Post.

ChatGPT Tells All: “Predicting” the Future of Media

NAB

From holographic broadcasts to neural storytelling and even interplanetary communications, the media landscape of 2050 will be an immersive, algorithmically customized, and boundary-pushing experience.

At least according to AI.


Fahri Karakas, associate professor of Business & Leadership at the University of East Anglia in the UK, had the (excellent) idea of prompting ChatGPT 4 to make predictions about the future of media, and the ideas the machine came up with are mind-blowing inasmuch as they do not seem at all like science fiction.

Responding to a prompt by Karakas for “Media trends of 2050,” the AI asks us to imagine watching TV or a live sports event with holographic images projected right into our living room. Thanks to advancements in neural networks and AI, video content will be generated in real time by analyzing the preferences and emotions of individual viewers.

That’s not far-fetched, and neither is the idea of interplanetary media, given the launch into orbit of several commercial space initiatives and the planned missions to the Moon and Mars. In 25 years, “interplanetary communication networks will enable real-time news, entertainment, and cultural exchanges between different colonies and settlements across our solar system,” the AI predicts.

In 2050, synthetic media stars will take center stage, says ChatGPT 4. AI-generated characters with unique personalities and appearances will become cultural icons, captivating audiences in movies, music, and even influencing fashion trends.

Media platforms will implement advanced AI algorithms that understand our preferences, values, and emotions. These algorithms will curate content across different mediums (articles, videos, podcasts) specifically tailored to our tastes, saving hours of scrolling and searching. Genetic tests will reveal our predispositions towards certain genres, styles, or creators, resulting in highly curated content recommendations and personalized media experiences for each individual.

Our clothing will incorporate media capabilities, “allowing users to display digital content, share messages, and interact with others through their garments.”

ChatGPT 4 invites us to imagine being able to change the design of our clothes with a few taps on your wrist and conveying emotions through animated patterns.

Individuals will have the option to “micro-dose media,” consuming bite-sized content experiences designed to boost mood, enhance focus, or provide relaxation. These personalized micro-experiences will be carefully crafted, offering a tailored media diet that suits individual needs and desires.

And of course, in the future, social media experiences will extend beyond the screen. Users will be able to physically immerse themselves in virtual reality environments, attending parties, concerts, and interacting with friends from around the world, blurring the lines between physical and digital reality.

Karakas also asked the AI to imagine what the media ecosystem looks like in 2050. To no surprise, the machine reckons that the traditional media industry has undergone a profound transformation.

“Traditional television networks and print publications have largely become relics of the past. With the ubiquitous adoption of AR/VR technologies, media consumption has transitioned into an immersive and personalized experience. Users can now create their own tailored media environments, blurring the lines between reality and fiction, and leaving behind the one-size-fits-all approach that defined earlier iterations of media consumption.”

Rather than relying on traditional screens, individuals now access media through smart contact lenses or eyewear that overlays digital content onto their physical environment.

In 2050, there has been a seismic shift from passive consumption to active participation in media creation. User-generated content (UGC) has become the “lifeblood” of the media ecosystem, with individuals sharing their stories, opinions, and experiences.

Social media platforms have evolved into immersive multi-sensory spaces, allowing users to curate their media channels and generate content through neural interfaces that directly translate thoughts into digital form.

If you believe the AI, this “democratization of media production” has transformed the dynamics between creators and consumers, fostering a new era of collaboration and shared narratives. Users will become active participants, exploring dynamic environments, and shaping the outcome of the story through their choices and actions.

Much like in Minority Report, augmented reality advertising will blend with our surroundings. AR glasses or contact lenses will overlay digital content onto our physical world, providing personalized ads tailored to our preferences and location as we go about our daily lives.

Journalism and news media will also see a transformative shift, including ubiquitous AI news anchors and nanobots “capable of infiltrating high-risk situations, capturing visual data, and transmitting information in real-time.”

According to ChatGPT 4, this technology will provide unparalleled reporting from conflict zones, natural disasters, and other dangerous environments.

By 2050, news will not only be delivered through traditional written articles or broadcast segments, but also through immersive virtual reality experiences. People will be able to “step into the news,” witness events firsthand, and interact with virtual objects.

Journalists, and everyone else, won’t need keyboards anymore, either. Instead of typing or even speaking, expect people to communicate directly through thoughts. That’s because brain-computer interfaces will become the norm, “allowing us to transmit ideas, emotions, and even memories to others. This technology will revolutionize storytelling, as authors can share their stories directly from their minds to the readers.”

 

How Steven Soderbergh Brings It All Together for “Full Circle”

NAB

Full Circle, a six-part melodramatic crime drama that just completed its run on Max, weaves interconnected storylines and hidden secrets, taking viewers through unexpected twists and turns.

Director Steven Soderbergh collaborated with writer Ed Solomon, and together they discussed the project during an hour-long roundtable with a handful of trade outlets.

article here

On Shooting Long Takes

One of the hallmarks of the show’s visual style is a tendency toward long takes that present the action at a distance without punching in excessively for close-ups. According to Soderbergh, as Jim Hemphill reported in IndieWire, those long, intricately choreographed takes have a practical component as well as a desirable emotional effect: They allow him to work faster.

“The thing that takes time when you have a lot of work to do in a day is unnecessary coverage,” the director said. “If you can rehearse and block and stage something and know where the cuts are coming before you’ve shot it and you don’t capture any redundant material and you’re not doing 20 or 30 takes of stuff, you can move pretty quickly.”

On Virtual Production

In the endeavor to shoot efficiently, much of the show’s interiors were shot on a volume stage. Soderbergh initially hoped to shoot these scenes on location in an apartment near New York’s Washington Square Park, but various factors led the production to opt for a sound stage instead, using the new RDX System from Rosco.

Phil Greenstreet, Rosco’s head of development for backdrops & imaging, went on the location scout around the apartments near Washington Square Park and shot hundreds of images with a Fujifilm GFX 100 camera. The apartment set was modified with long hallways for Soderbergh’s roving camera.

“They didn’t want to be messing with motion,” Greenstreet told Bill Desowitz at IndieWire. “They didn’t even want motion in the background, so the flags weren’t moving, the cars weren’t moving, you only see small slivers of cars in the distance anyway.”

Soderbergh explained to IndieWire, “I love what you get from [RDX] and the ability to go from one look to another in a matter of seconds. Literally, I can move the image around, I can adjust the contrast, I can adjust the brightness, I can blow things up, I can shrink them. There’s no other way to get this interactive, refractive light bouncing around the room off the surfaces with that kind of technology.”

On Branching Narratives

Soderbergh and Solomon originally intended Full Circle to be a branching narrative like their 2018 HBO series Mosaic, which gave viewers the option to choose different outcomes for the story via the app.

 

“On Mosaic, we were able to do that, because that was repurposing the footage to use in both ways. I was using the same footage for the linear version that I was using for the app. That’s why that was not a problem,” Soderbergh explained during the roundtable, as quoted by The Hollywood Reporter’s Hilary Lewis. “My vision for the app version of Full Circle was completely different imagery, completely different approach directorially, different cameras, different everything.”

The Full Circle script was 400 pages, Soderbergh said, with the app version consisting of an additional 170 pages “in which there’s no overlap.”

“I can shoot fast, but I cannot shoot that fast. We had to throw all of that away, [though] some of those 170 pages leaked [their] way back into the linear version.”

The process made Soderbergh question whether there’s any real place for branching narratives in storytelling.

“It’s not clear to me that this form of storytelling is needed or even wanted by audiences. In a primal sense, around the campfire or a dinner table, if somebody pulls the attention of the group to tell a story, the people in that group are expecting and wanting to hear a story that resolves itself. They don’t want to hear somebody tell a story at a dinner table in which they go one way, and then they back up and go, ‘Or it could go this way.’ That’s not what you want. I think there’s a very strong impulse for people to want to be told a story like, ‘You’re the storyteller. Tell me a story. Don’t make me do the work. That is your work.’

“That’s what I’m beginning to think. So it’s a real question whether or not I would return to that format without an idea that I feel can only be executed properly in that format.”


On AI

Asked for his thoughts on AI, Soderbergh said it could be helpful as a tool but he has doubts about AI’s ability to mimic the lived human experience.

“It doesn’t know what it means to have a flight cancelled and have to figure out how to get home,” he said, as quoted by Christina Radish at Collider. “At a certain point, that’s a real problem. You have to remember, its only input is data, text and images. It has no body temperature. It doesn’t know what it means to be tired.”

He added, “I think it’s useful for design creation… as a basic way to accumulate a framework. Let’s say it writes a script and it’s supposed to be a comedy script that ChatGPT has generated, and you say to it, ‘It needs to be funnier.’ And it says, ‘How?’ And you go, ‘I don’t know, it just needs to be funnier.’ What does it do? It’s just a tool. But if you asked it to design a creature that’s a combination of a cat and a Volkswagen Beetle, it can do that. That’s fun.”

This naturally segued into a discussion of AI’s implications for industry jobs. Solomon doubled down on his belief that art made by human beings cannot be replicated.

“The problem is, the people making decisions on the highest level are all about the bottom line and ‘How can I get rid of as many human beings as possible?’ [and they] don’t have the ability to judge what is good art and not good art. If we don’t draw a line in the sand now, my fear is we’re going to continue to a place where a lot of people are [going to be] out of work.”

 

Engineered to the Final Shot

Inspired by Akira Kurosawa’s 1963 film High and Low, the premise of Full Circle asks: What if there were a kidnapping but the wrong child was taken?

But while viewers may have originally tuned in to see Claire Danes, Dennis Quaid, Timothy Olyphant, Zazie Beetz and Jim Gaffigan, those who stayed with the limited series saw a story about two Guyanese teenagers take center stage.

“You think it’s about this group of well-off white people being victimized. And then over the course of the show, the whole thing starts to tilt,” Soderbergh told the group of reporters, as quoted by THR. “By the end of it, we’re in a very different place than where we started. So it was this melodrama that had this very interesting subterranean thematic thread bubbling along that eventually comes up and takes primacy in the last two episodes.”

The series ends with the lead Guyanese characters walking around the unfinished Colony at Essequibo, the ill-fated development that connected them with Danes’ character’s family, and a pan over to a billboard advertising that the aborted project is “coming in 2003.”

“From the very, very beginning of the script, it was all engineered to that one last shot,” Soderbergh said.

Critical Reception

Whether moving from character to character or balancing suspense and action, Full Circle thrives on efficiency, writes Ben Travers in his review for IndieWire.

“Taken as a creative twist on a tried-and-true format, it balances the experimental and the satisfying in a way TV should strive for more often, especially in an era when filmmakers are being asked to create content. If you’re going to churn out stories for streaming, you may as well maintain your artistic credibility.”