Tuesday 30 April 2024

Here’s Some Good News: Broadcasters Are Actually Optimistic About AI

NAB

The time and resource-saving capabilities of AI for cash-poor news organizations should be welcomed as much as the technology’s potential to erode trust should be guarded against, say executives working in the field.

article here

Scott Ehrlich, CIO at Sinclair, said AI was forcing his organization to reevaluate everything it was doing. “Whether it is traffic, or finance, or story publishing, or how we manage the archives, or how we extract value from our archives, we get the opportunity to reevaluate and ask whether AI is something that can help,” he said.

“I don’t think that we’ve yet found a corner of the organization that is not going to be impacted in some way, shape, or form. One of the big challenges is prioritizing all of the use cases that you can develop for generative AI across the entire enterprise.”

Christina Hartman, VP of News Standards and Editorial Operations for Scripps News, said AI tools could cut down on the daily non-newsgathering functions that their journalists have to do in order to free them up to actually gather news.

“Our mission is to use tools and use cases that prioritize our journalists being able to do as much news gathering as possible,” Hartman said.

“That’s led to a world in which AI helps our journalists format their broadcast scripts for digital, generate SEO-friendly headlines, format for social media and a number of other tasks that take away the drudgery of the post-news gathering process to allow our journalists to focus on what is most significant, which is the news gathering.”

One use case Scripps News is considering for AI is generating more local stories from city councils and school boards. AI could be used to transcribe an interview or a public meeting and then synthesize the most important parts.
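As a rough illustration of the kind of transcribe-then-summarise pipeline described above (a hedged sketch, not Scripps’ actual tooling; the file name and model choices are placeholders), in Python:

```python
# Minimal sketch: transcribe a public meeting recording, then condense it.
# Assumes the openai-whisper and transformers packages are installed;
# "meeting.wav" and the model names are illustrative placeholders.
import whisper
from transformers import pipeline

def summarise_meeting(audio_path: str) -> str:
    # 1) Speech-to-text with a small Whisper model
    stt_model = whisper.load_model("base")
    transcript = stt_model.transcribe(audio_path)["text"]

    # 2) Condense the transcript into the most important points,
    #    chunking so each piece stays within the summariser's input limit
    summariser = pipeline("summarization", model="facebook/bart-large-cnn")
    chunks = [transcript[i:i + 3000] for i in range(0, len(transcript), 3000)]
    parts = [
        summariser(c, max_length=120, min_length=30, truncation=True)[0]["summary_text"]
        for c in chunks
    ]
    return " ".join(parts)

if __name__ == "__main__":
    print(summarise_meeting("meeting.wav"))
```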

One such product was previewed at NAB Show by Moments Lab (formerly Newsbridge). Its new MXT-1.5 indexing model automatically analyzes live streams and archives, generating descriptions of video content “just like a human would.” This, the company claims, empowers teams to work 10x faster, focusing on creative tasks and generating more revenue from an ever-growing media library.

“As newsrooms shrink, that is a problem very much worth solving,” Hartman said. “That is where I think generative AI poses really incredible opportunities to allow us to effectively be there, when we can’t be there.”

Mohamed Moawad, managing editor at Al Jazeera, had another pertinent use case for AI. He explained how the channel was using AI to enhance and analyze satellite imagery from Planet Earth of Gaza, in particular so it could inform its journalists on the ground where Israeli and other forces were.

“The imagery wasn’t high resolution so we relied on AI to [up-res] the imagery and secondly to, in a second, analyze it. How many tanks, how many Israeli soldiers, approximately, of course. It empowered the journalist and gave us the opportunity not only to take our reporting to another level, make it more in depth, more analytical, but also gave us the opportunity to have a safety measure in place.”

Scott Zabielski, chief content officer of AI startup Channel 1, highlighted the ability to translate news to a multitude of languages as a major advantage of the tech.

“We can pull stories directly from other countries, translate them and bring them to the United States. Then we can translate them back out to the rest of the world. What we’re really focusing on with AI is the efficiencies of how to distribute that story, how to edit it into a video package, how to publish to different countries.”

He pointed to a day when news will be personalized. “So when you’re watching a newscast, you can watch something that’s a 30 minute newscast, but instead of it being the same thing everybody in the country sees, it can be about your local area. It could have the sports scores from your favorite team, business news focused on your stocks. Exactly customized to you.”

Of course there are negatives with AI too. Channel 1, for example, pulls stories from sources that it trusts, and these are written and shot by journalists.

“We are very cognizant of the idea that there’s a trust issue with AI. How do you know if you can trust what you’re seeing in the news?” Zabielski said.

Moawad was concerned about accuracy: “Sometimes we rush to conclusions that AI offers us in terms of analyzing imagery from satellites, even drawing conclusions from data, but we have to be cautious about the accuracy, because it’s not 100 per cent accurate. And that’s a challenge because if there is a tape out there of a military leader and there are rumors online that it’s a deepfake, we have to talk about it publicly, we have to show people how we verify it. If we don’t talk about it some people will accuse us of accepting fake videos and airing them.”

To combat conspiracy theories and suspicion of news organizations, Hartman argued for “doubling down” on making human connections with the audience.

Others spoke of working with camera vendors to embed and track metadata from the moment video is captured through to publication online in order to verify provenance.

Hartman was optimistic that the essential functions of news gathering won’t be radically altered by AI, even in a decade’s time.

“My perspective is that we will continue to use AI to reduce the amount of drudgery that is non-news gathering related, but I think the core fundamental work of building sources, getting those sources to share things with you, and then vetting what you’ve been told and reporting that out at its core isn’t going to change.”

Christopher Ross BSC / Shōgun

British Cinematographer

article here

Sprawling epic Shōgun allowed cinematographer Christopher Ross BSC to craft a rich tapestry of 17th-century feudal Japan.  

An English pirate is shipwrecked on the shores of feudal Japan and uses the local political system to ingratiate himself in a game of thrones among warlords competing to be top dog (or shogun). Ultimately, everyone just wants to survive. 

The 1980 NBC series starring Richard Chamberlain, based on James Clavell’s 1975 novel, was one of the first international TV experiences, a global hit that spawned miniseries like The Thorn Birds and North and South and a trend that continues to this day.   

“The original series was very much a white saviour story,” says Christopher Ross BSC. “Showrunner Justin Marks was very much of the opinion that the lens should be flipped and that the civilised Japanese feudal structure should look on this primitive pirate washed up on their shore as a savage. Our telling would be a Joseph Conrad, Heart of Darkness story.” 

At the same time, Marks wanted to adhere to the narrative concepts of the traditional Japanese Jidaigeki (the Samurai genre made most famous by Akira Kurosawa). “The western equivalent to the vocabulary of the Jidaigeki film is expressed in Jacobean tragedy,” says Ross, who boarded the project to work with director Jonathan van Tulleken and film the pilot for Disney brand FX. 

Van Tulleken and Ross met in 2009 on Channel 4 show Misfits and continued their relationship on episodes of Top Boy (2013) and Trust (2018). 

“We talked a lot about the form of the traditional Jidaigeki film, such as the use of long focal lengths and low camera positions, and an eclectic mix of cultural references but the one thing we came back to time and time again was that we wanted the story to feel very present, very first person and visceral,” Ross explains. 

Episode one introduces the principal characters Blackthorne, also known as Anjin-san (Cosmo Jarvis), Mariko (Anna Sawai) and Toranaga (Hiroyuki Sanada). “One of the things we wanted was to create a first-person perspective in each of their worlds so you are very rooted with each of them,” he says. 

 

“We also wanted the camera to be enigmatic and for the imagery to be leading the audience down a series of blind tunnels so they use their imagination to fill in the gaps.”  

Macbeth (2015), Apocalypse Now, The Revenant and The Assassination of Jesse James were referenced for depictions of the integral relationship between nature and human protagonists. 

“When creating any period drama it is really important to me that you bury the characters in the imagery so that it feels like they created the universe, rather than looking as if you created an aesthetic that sits on top of the imagery.” 

 

Immersive worlds 

Ross was aided by production designer Helen Jarvis who had built scale models of all the sets based on concept art that was itself the product of research. Landing in Vancouver for ten weeks of prep in autumn 2021, Ross used those to model how light would bounce around inside the sets. Historians were able to advise him on the sort of lanterns and night lights used in the late 16th century. 

He was struck by how traditional Japanese architecture framed its buildings with walkways, porches or verandas called engawa, the rooflines of which are designed to block direct sunlight from hitting the walls of a room and fading the colour of its fabrics.  

“What I took from a light study of these spaces is that the rooms are generally lit by a soft light from the sky, if anything a sun that skims low off the decking of the engawa itself,” Ross says. 

 

“I also noted the way the wood panelling of the walls leads into the wood panels of the ceilings. The ceilings are dark wood and the flooring is covered with beautiful bamboo and cotton tatami mats. This means you must invert the western lighting protocol by having light bounce from the floor while the wraparound shadow reflecting skin tones comes from the ceiling.” 

This knowledge guided Ross in lighting the two sound stages at Mammoth Studios, Vancouver. Locations in Japan were scouted but found to have little in common with the wooden buildings and wilderness of Shōgun’s setting. 

“The main reason we didn’t consider shooting a volume was the scale,” he says. “The set stages are about half the size of a standard football pitch.” One stage contained the primary set for Osaka palace and its elaborate garden (built inside so production could manage the inconsistent local weather for the shooting schedule, which began in September and ended the following summer). The other stage housed the vast ceremonial meeting hall seen in the opening episode and another series of gardens. Additional spaces were used for sets of the Catholic mission and the prison. A set of Osaka fortress was built on a backlot. Other locations included Ucluelet on Vancouver Island for the opening shipwreck, and Rocky Point in nearby Port Moody. 

“We worked hard to create a very natural atmosphere for the exterior scenes on the soundstage. It’s quite a challenge. When you inhibit the distance that light can travel, the light sources become much more apparent. Additionally, there is very little sunshine in Shōgun. It’s only in a few scenes. There’s a dominant soft sunlight that is utilised as a backlight for most of the garden scenes and interiors.” 

The solution was to construct a system of “punchy” softboxes. Normally you’d select a SkyPanel or Vortex to create a softbox (which Ross did to create a skylight in one interior), but for the illumination he wanted, the softboxes had to “punch” into the spaces. With gaffer David Tickell, Ross employed Studio Force LED units in 20ft by 10ft banks, with motors to move position on both stages. 

“That was the key to lighting of the show,” Ross says. “The use of those mobile softboxes and a sort of permanently three-quarter soft-sun approach to lighting.” 

Anamorphic experiments 

The DP appreciated the flexibility of the Sony Venice, having just used it to photograph The Swimmers for director Sally El Hosaini, but says he is camera and lens agnostic. 

“On each project I like to rediscover the process all over again. We shot a lot of lens tests for Shōgun to find this visceral, first-person look.” 

He experimented with anamorphic lenses (which Kurosawa first used in his work on 1958’s The Hidden Fortress), noting the differences in astigmatism in the defocus zones.  

“Each set of anamorphics blurs backgrounds in a more horizontal than vertical direction to create an impressionistic feel,” he says. He and van Tulleken found the best match in the Hawk class-X range, which they felt sat halfway between the Hawk Lite and Vintage in terms of flare and defocus: “We both fell in love with the incredible close focus and intimacy you can get with the class-X.” 

The choice of anamorphic lens naturally led to a conversation with FX about aspect ratio, but the broadcaster was not keen on 2.35 or 2.39, so they settled on a 2:1 aspect ratio, which, “combined with anamorphic and 4K finish, led us to create a unique resolution frame line on the Venice,” Ross says. 
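A rough back-of-envelope sketch shows why a custom frame line is needed; the 2x squeeze figure below is an assumption for illustration, not a confirmed spec of the lenses used:

```python
# Hedged arithmetic sketch: with an anamorphic squeeze factor S, the image is
# desqueezed (widened) by S in post, so the aspect ratio of the frame line on
# the sensor must be the delivered aspect divided by S.
def sensor_aspect(delivery_aspect: float, squeeze: float) -> float:
    return delivery_aspect / squeeze

# Assumed example: a 2:1 delivery with a 2x anamorphic squeeze
print(sensor_aspect(2.0, 2.0))  # -> 1.0, i.e. a roughly square capture frame line
```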

Another key to the show’s look was “to feel a granularity and rawness of the image” by applying texture from LiveGrain on set, tweaked in post. With colourist Élodie Ichter (at Picture Shop) they conjured the magic formula based on the Eastman Colour Negative 100T stock used to shoot Taxi Driver.

 

Further costume and set tests in pre-production helped refine their judgement. Indeed, the harmonious cooperation of HoDs in what was a pandemic-delayed and then extended prep gave Ross one of his best experiences of filmmaking yet. The results can be seen on screen: the storytelling is coherent, considered and compelling. 

“Once the HoDs have created as much of the on-set mood and environment as possible then hopefully that filters down and ideally makes everyone’s lives easier.” 

After shooting episodes one and two pretty sequentially, Ross handed over to Sam McCurdy ASC BSC, Marc Laliberté CSC and Aril Wretblad FSF.  

McCurdy was still in Vancouver shooting Peacemaker when Shōgun came to town, so had time to prep with Ross and director Van Tulleken. “It was great because we all got to share those initial ideas and concepts,” the DP recalls. “Everyone spoke about authenticity and this was something that we all knew was important to get across.” 

McCurdy collaborated with director and friend Fred Toye on episodes four and five. Having worked together a number of times previously, they already had an established shorthand which made prep easy. Shooting mostly through the winter of 2021-22, they decided early on to use the weather as a key character in the story. “Trying to utilise snow and inclement weather [forced] us to use colours and tones that we thought suited our stories. The cold blues of the Pacific North West would become a key guide to our palette.” 

He admits with a smile that the challenges of the shoot were often self-imposed: “Shooting in snow and rain to keep texture and character in the story was pivotal for us, but it came with its downside. We utilised cranes on 4×4 tracking vehicles to ensure we moved around quickly and this became our main shooting tool for the show in the end.” 

McCurdy enjoyed “every second” of the shoot, but one of his favourite scenes to capture was the arrival of Lady Ochiba (Fumi Nikaido) in episode nine. “We shot exterior night (but on stage with rain) and the whole of the castle gardens were dark other than moonlight and torchlight,” he recalls. “We walked the actress down a darkened porchway to the castle entrance and to see the silhouette of the actress in her costume was something quite special; it felt like a powerful and yet beautiful introduction to her character.” 

Cablecam capture 

Aside from a mood reel, set plans and a look book, Ross shared with the other DPs an app detailing the lighting, gripping and camera operating plans for every scene.  

“As each DP watches rushes that we photographed they can see what devices we are using,” says Ross. These included lots of Steadicam, cranes, some drones and the occasional cablecam.  

A bespoke cablecam rig was built by senior SFX Brandon Allen for an establishing shot of Osaka which closes episode one. Ross explains, “We wanted to establish Osaka in a very characterful way. The idea was to start the shot on a VFX drone that sails past a shipwrecked vessel with Osaka harbour in the distance, then segue to a cablecam as we fly in over the water and over the heads of fishermen and alongside Blackthorne on a boat being escorted to prison. The camera slips alongside the jetty with them and spins and holds on Blackthorne in close up.” 

In the final edit this sequence has been cut but Ross’ mission remains intact. “We wanted the audience to go along on this journey which is unfolding in front of their eyes.” 

 


Monday 29 April 2024

Masters of the Air: Soaring Shots

British Cinematographer

article here

Challenged to create a deeply immersive environment that not only enhanced the actors’ performances but also remained true to the historical context of the series, Lux Machina collaborated with the team at MARS to develop bespoke solutions to bring the harrowing aerial combat of WWII to life.  

Apple TV+’s definitive account of US aerial combat over Europe is the third WWII-based epic series, after Band of Brothers (2001) and The Pacific (2010), for producers Steven Spielberg (Amblin Television) and Tom Hanks (Playtone). Presented by Apple Studios, the nine-part story follows the fate of the 100th Bombardment Group, a US Army Air Forces bomber group stationed in England and tasked with bombing Nazi-occupied territory in 1943. 

The nine-episode series is directed by Cary Joji Fukunaga, Dee Rees, Anna Boden and Ryan Fleck with cinematography from Adam Arkapaw, Richard Rutkowski ASC, Jac Fitzgerald and David Franco. 

A primary goal was to recreate the missions flown by B-17 Flying Fortress bomber crews with historical accuracy. Given the premise and verisimilitude of the series, virtual production was the obvious choice for capturing interactive lighting and reflections on the bomber during aerial combat.  

Discussions began in 2021 when Apple called Lux Machina Consulting about a potential show and introduced them to VFX supervisor Steven Rosenbaum.  

Conventionally, the LED volume is used for in-camera real-time visual effects but for Masters of the Air this was not the case. 

Rosenbaum explains, “Typically, before going into a volume, you will shoot background plates for photoreal playback in the volume. To do that you want to know what the foreground action is going to be so you can compose plates accordingly. However, for scheduling reasons we didn’t have that benefit. So we played back previz in the volume, using aerial plates as reference, and matched them in post.” 

He adds, “The goal was to create a deeply immersive environment that not only enhanced the actors’ performances but also remained true to the historical context of the series.” 

The novel use of previz and realtime environments created a flight simulator experience for the talent including accurate lighting, reflections and layers of effects for explosions, smoke and tracer fire. 

“It’s like an interactive experience on rails,” says Lux Machina’s Galler. “Typical VP environments are relatively static or use plates which have all the movement in them. We had a hybrid of the two. As the plane banked on the gimbal the environment reacted appropriately. It’s the first time at this scale we’ve combined a flight sim game-like experience with dynamic action that the ADs could call on. The whole approach made it feel more real.” 

Lux began working with the Third Floor and Halon Entertainment to establish the shots that would be used in production. This included using a volume visualisation tool to create a unique design for production. 

In early 2022, Lux began designing the specs based on an understanding of content and set layout. A machine room to power the whole enterprise and four ‘brain bars’ for the virtual art department (VAD) were built and tested in the US. Everything was shipped from LA, prepped in London and installed at the stage in Aylesbury.  

Volume stages  

Masters of the Air was shot at Apple’s Symmetry Park in Buckinghamshire. The production used two soundstages equipped with custom LED volumes and a virtual production infrastructure that was constantly adapted to fit the needs of production. The first volume, used for 75% of the shooting, was a 40ft-wide, 30ft-high, 270-degree horseshoe. It featured an LED ceiling surrounding the cockpit, which was on a fully mechanical gimbal. The SFX team drove the gimbal while referencing content on the volume. The wings of the plane were extended into the virtual world on the screens and tracked with cameras and markers. 

The second stage had a 110ft-wide wall with very little curvature and contained sets for the fuselage of the bomber. One side was surrounded by a large, curved wall with a lightbox above, giving DPs access to more than 280 SkyPanels. The stage also housed a fully mechanised gimbal reacting to content in real time. 

A third volume comprised ramps and movable walls arranged into an ‘L’, which could be quickly reconfigured and used in a more ad-hoc manner. This was mostly used for the ball turret, where a gunner could look down to a target, and for occasional take-offs.  

In partnership with the team from MARS, Lux Machina managed and operated all the volumes, fusing the virtual and physical worlds with bespoke solutions and innovative technology, often shooting on all of them simultaneously. 

Cameras 

The production shot on the Sony VENICE to take advantage of the Rialto Extension system, which enables the sensor block to be mounted within the set, tethered to the camera body on the gimbal. The camera’s high-speed sensor readout also made it a good camera for VP. 

Phil Smith, 1st AC, says, “We couldn’t have shot on the gimbal and in the replica B-17 planes without using the Rialto. The director wanted the actors and action to take place in the actual spaces of the B-17 plane, so this would give everyone a sense of what the actual pilots and crew had to experience. The planes were built to scale and the spaces in the interior of these planes were very, very small. Full-sized cameras would not have physically been able to fit inside and get the angles and positions that they needed.” 

Up to 16 cameras were running at one time, with the feeds calibrated to the screens via Lux’s proprietary colour pipeline. 

Galler says, “Our system allows for DPs to add their camera or lens back in. Instead of having to reverse out the lens and do all this other work, we believe that giving the DP the tools that they need to create the look that they want is the most important part because we’re trying to power their vision of the story.” 

A Sony FX6, which shares the same sensor size as the VENICE, was also used as a helmet camera and as a director’s viewfinder for checking out shots in the tight set without having to get the camera inside or bring the plane gimbal down. The DPs appreciated the camera’s skin tones, colour reproduction and low-light performance. 

Syncing the cameras with each other and with the refresh rate of the wall was no small feat. Lux built a second MCR adjacent to the stage to manage lens data, video input and sync over timecode and genlock. 

The backgrounds were not frustum tracked but the position of the gimbal was. “We were not worried about multiple points of view. We chose a track very close to the centre of the action, i.e. the centre of the cockpit, aligning a position from the perspective of the talent as opposed to the perspective of the cameras. All cameras were synched to the movement on the wall from that perspective.” 
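As a toy illustration of that idea (generic maths only, not the production’s actual pipeline), the on-wall environment can be counter-rotated by the gimbal’s bank angle so the horizon stays world-correct from the talent’s point of view:

```python
# Generic sketch: counter-rotate a 2D horizon direction on the wall by the
# gimbal's bank angle so it behaves correctly as seen from inside the set.
# Not the production's tooling; angles and vectors are illustrative only.
import math

def rotate2d(point, deg):
    """Rotate a 2D point (e.g. a horizon direction on the wall) by deg degrees."""
    r = math.radians(deg)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# If the gimbal banks the cockpit 15 degrees, counter-rotate the on-wall
# horizon by -15 degrees so it stays level in world terms for the talent.
gimbal_roll_deg = 15.0
horizon = (1.0, 0.0)
print(rotate2d(horizon, -gimbal_roll_deg))
```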

Lux worked closely with the DP and lighting technician to synchronise the DMX with the real-time content. Lux operators were able to facilitate creative requests from the DP and director instantaneously, providing flexibility and creative freedom on the day. 

Lux led the supervision and production of what turned out to be around 80 environments over the course of seven months of shooting starting in summer 2022. 

Lux and the MARS team of software developers also worked in a custom version of Unreal Engine building 50+ proprietary virtual production tools specifically for challenges faced on the show.  

“This was needed to provide more seamless integration of a traditional production into a virtual production,” says Cameron Bishop, MARS’ lead Unreal operator, who was instrumental in some of the tool development. 

These included a light card tool to enable lighting operators to interface with light cards in Unreal, controlling size, shape, opacity and colour temp. Gaffers were also able to control a ‘programmable sun’ with historically accurate positioning. On the second volume a series of interactive lighting machines were dedicated to driving the pixel-mapped SkyPanels. 

“If the gaffer wanted to add another 100 SkyPanels then we needed a way of providing this quickly and efficiently in Unreal,” says Galler. “We adopted a process akin to how a lighting programmer would create a connection to those lights that allowed us to be much faster in set up and more adaptable. The lighting programmer could take over a block of lights while VAD could also drive content to screen which gave us very interesting layering effects that otherwise would be very difficult to achieve.” 
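To illustrate what “pixel-mapped” means in general terms (a generic sketch, not Lux Machina’s or MARS’ actual tooling), each fixture can be assigned a region of the rendered frame whose average colour becomes that fixture’s output level:

```python
# Simplified illustration of pixel mapping: average a region of the rendered
# frame per fixture and convert it to 8-bit RGB levels ready for a DMX/sACN
# sender. Generic sketch only; fixture names and regions are invented.
from dataclasses import dataclass

@dataclass
class Fixture:
    name: str
    x0: int
    y0: int
    x1: int
    y1: int  # region of the frame this fixture "watches"

def region_average(frame, x0, y0, x1, y1):
    """frame is a 2D list of (r, g, b) tuples with values 0.0-1.0."""
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def fixture_levels(frame, fixtures):
    """Return 8-bit RGB levels per fixture."""
    levels = {}
    for f in fixtures:
        r, g, b = region_average(frame, f.x0, f.y0, f.x1, f.y1)
        levels[f.name] = (int(r * 255), int(g * 255), int(b * 255))
    return levels
```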

The tools also included real-time ‘flak-triggering’. Bishop elaborates, “As an example of being agile to director needs, it became clear we needed a real-time flak-triggering solution. We developed a tool that allowed us to simulate explosions on the LED volume with the press of a button. This provided instant feedback to the director, and in turn helped the actors believe the danger their characters were in.” 
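The production built its flak trigger inside its custom Unreal Engine toolset; purely as a hedged illustration of the button-press-to-cue idea, the sketch below sends a generic “flak” message over OSC, a control protocol Unreal can receive via a plugin. The address, host, port and parameters are invented for the example.

```python
# Hedged sketch: fire a "flak" cue at a render engine when the operator
# presses Enter. Uses the python-osc package; the OSC address, host and port
# are illustrative placeholders, not the show's actual interface.
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # render engine listening for cues

def trigger_flak_burst():
    # Randomise position and intensity so repeated bursts don't look identical
    x, y = random.uniform(-1.0, 1.0), random.uniform(0.2, 1.0)
    intensity = random.uniform(0.5, 1.0)
    client.send_message("/fx/flak", [x, y, intensity])

if __name__ == "__main__":
    while input("Press Enter to trigger flak (q to quit): ").strip().lower() != "q":
        trigger_flak_burst()
```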

Immersive for realism 

Lux and VFX teams spent hours in prep providing solutions to the smallest detail. What colour is a flare? How big is it? How much light is it going to throw into the cockpit? Where is that plane going down? How much smoke is coming out of the fuselage? What’s the cloud cover? Where is the sun right then? 

“All that would have been so difficult to do or imagine on green screen,” Galler says. “In many ways it was like working on location. The actors found the lighting and interactive elements really impactful in their ability to connect with the story, because it added to the illusion that they’re really there in the cockpit. They felt connected to the environment in a way I don’t think was possible on a green screen stage.” 

Holistic collaboration 

Associate VFX producer Will Reece says VP is a holistic way of approaching production. “It’s not a tag onto VFX or any other department. It’s how you structure your production, and that needs to go from the top to the bottom for it to ultimately be as successful as possible,” he says. “Everyone in the crew has to buy into that. We were so fortunate to have a strong presence from Lux Machina Consulting on the ground who spearheaded support and ongoing education during production to help us think about or approach things differently.” 

VP producer Kyle Olson says, “From the dynamic volumes on our UK soundstages to the intricate cockpit and fuselage sets, every element was designed to ensure historical accuracy and interactive realism. This project was not just a technological triumph but a testament to the power of collaborative storytelling.” 

“It was a real pleasure to strengthen and support this production and be part of the biggest VP crew either company has worked on,” comments Rowan Pitts, founder of MARS Volume. “The mutual respect that the teams displayed and their desire to overcome challenges with agility and collaboration no doubt forged the success on set.” 

 


We Need Copyright Laws for AI in Hollywood, But There Are… Issues

NAB

article here

The legal battle between developers of generative AI tools and creators, artists and publishers is often viewed as a zero-sum game: one that will deleteriously impact either the business and livelihoods of the latter or the bottom line of the former.

But the outcome will be more complex according to Paul Sweeting, co-founder of the RightsTech Project and founder & principal of Concurrent Media Strategies.

In a primer on the subject at Variety, he explains that, despite at least 16 high-profile legal cases in the US, the courts are likely to struggle to find precedents that clearly apply.

Defense lawyers for OpenAI/Microsoft and Stability AI, defending respective copyright infringement suits brought by The New York Times and Getty Images, will claim fair use — that the training process is transformative of the input and therefore not infringing under prevailing legal precedents.

As Sweeting explains, the amount of data used to train the largest AI models is in the order of tens of billions of images (or hundreds of billions of words). And what the system actually retains from its training data is not the words or images themselves, but the numeric values assigned to their constituent parts and the statistical relationships among them.
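To make “numeric values assigned to their constituent parts” concrete, here is a small illustrative sketch: a tokenizer maps text to integer IDs, and the model stores learned vectors for those IDs rather than the text itself. GPT-2 is used purely as a small, publicly available example, not as a stand-in for any model at issue in these cases.

```python
# Illustrative only: a language model works with token IDs and learned
# vectors (embeddings), not with the original words themselves.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

text = "All the news that's fit to print"
ids = tokenizer(text, return_tensors="pt")["input_ids"]
print(ids.tolist())  # integer token IDs, not words

# The embedding table maps each ID to a vector of learned weights; these
# statistical parameters, not the training sentences, are what the model retains.
embeddings = model.get_input_embeddings()(ids)
print(embeddings.shape)  # (1, number_of_tokens, 768) for GPT-2 small
```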

It’s complex.

“Whether that process constitutes actual reproduction of the works in the training data, as plaintiffs have claimed, is as much a technical question as it is a legal one,” he says.

Pamela Samuelson, a professor of law and information at UC Berkeley, tells Sweeting that the biggest challenge plaintiffs in those 16 cases face will be establishing actual — as opposed to speculative or potential — harm from the use of their works to train AI models, even if they can establish that their works were copied in the process of that training.

She still rates the NYT and Getty Images cases as most likely to succeed or compel the defendants to settle because both companies had well-established licensing businesses in the content at issue that pre-date the rise of generative AI.

Meanwhile in Europe, the EU’s AI Act will require developers of large AI systems to provide “sufficiently detailed summaries” of the data used to train their models.

This sounds like good news. Surely we should all want to temper the march of AI in order to compensate the human creators whose work has helped to build AI tools, now and in the future?

However, some artists are concerned the balance will be tipped too far or that any new legislation will not be sufficiently nuanced to allow for legitimate copyrighted creation of works by artists who have used AI.

The US Copyright Office has a long-standing policy that copyright protection is reserved for works created by human authors. It treats the purely human elements of a work, the purely AI-generated elements and the AI-assisted human elements as distinct from one another.

Hollywood is similarly concerned about the extent to which narrow interpretations of copyright could throttle the use of AI in production and post-production.

For its part, the Copyright Office is about to publish the first in a series of reports into AI, with recommendations to Congress on any possible changes to copyright law.

The first such report will address issues around deepfakes. Others will cover the use of copyrighted works in training AI models, and the copyrightability of works created using AI.

Sweeting says there is “broad agreement” that the Copyright Office’s current policy is “unworkable, because the volume of mixed works will quickly overwhelm the system, and because the lines will keep shifting.”

In the absence of those updates or new legal precedents, the picture for working with and training AI remains murky.

Who’s Going to Intervene When It’s Creators Vs. Big Tech’s AI?

NAB

article here

Is the tide turning on the ability to amass art from the internet to train generative AI… without compensating creators? A legal pincer movement from Europe and the US could soon regulate access to copyrighted material and few people in Hollywood or in wider society would shed a tear.

For those who think building Gen AI products on other people’s work is wrong, the passing of the Executive Order on the safe, secure and trustworthy use of AI is already overdue.

The target of their ire is Gen AI billion-dollar market leader OpenAI, whose video generator Sora, revealed earlier this year, laid bare the potential of the technology to auto-create photoreal content.

Although OpenAI refuses to admit it — to the increasing frustration of media commentators — The New York Times reported that OpenAI has in fact trained the large language models behind ChatGPT on transcripts of more than one million hours of YouTube videos, all without payment or consent.

"Why should OpenAI — or any other LLM — be able to feed off the works of others in order to build its value as a tool (or whatever you call generative AI)?” argues IP lawyer-turned-media pundit Pete Csathy in The Wrap. “And even more pointedly, where are the creators in this equation?”

The core argument is that GenAI would not work nor be a product without being trained with content and that artists and creators of those creative works should be compensated.

OpenAI and other AI companies contend that their models do not infringe copyright because they transform the original work, therefore qualifying as fair use.

“Fair use” is a doctrine in the US that allows for limited use of copyrighted data without the need to acquire permission from the copyright holder.

A tighter definition of fair use is what the Generative AI Copyright Disclosure Act is designed to achieve on behalf of creators. Following the EU’s own historic legislation on the subject, the act introduced last week would require anyone who uses a data set for AI training to send the US Copyright Office a notice that includes “a sufficiently detailed summary of any copyrighted works used.”

Essentially, this is a call for “ethically sourced” AI and for transparency so that consumers can make their own choices, says Csathy, who argues that “trust and safety” should logically apply here too.

“To infringe, or not to infringe (because it’s fair use)? That is the question — and it’s a question winding through the federal courts right now that will ultimately find its way to the US Supreme Court.”

And when it does, Csathy’s prediction is that ultimately artists will be protected. He thinks that the Supreme Court will reject Big Tech’s efforts to train their LLMs on copyrighted content without consent or compensation, “properly finding that AI’s raison d’etre in those circumstances is to build new systems to compete directly with creators — in other words, market substitution.”

As Csathy puts it, simply because something is “‘publicly available’ doesn’t mean that you can take it. It’s both morally and legally wrong.”

Few people, and certainly not Csathy, go so far as to want to ban GenAI development outright, or to deny that there might be instances where “fair use” is appropriate. What they want is for OpenAI to fess up and be honest, trustworthy and transparent about the source of its training data.

Behind the Scenes: Civil War

IBC

article here

In Alex Garland’s action thriller, cameras are a weapon of truth


There’s a scene in Oliver Stone’s 1986 movie Salvador, about El Salvador’s chaotic civil war, in which a photojournalist played by John Savage is killed in the heroic attempt to capture the money shot - or proof - of military bombs falling on the civilian population.

The heroic nature of photojournalists and the wider importance of upholding the journalistic quest for truth is writer-director Alex Garland’s mission in Civil War - although the lines are blurred. The film’s hero, a veteran war photographer, is among a press pack dreaming of the ultimate money shot: the capture or execution of the US President.

Garland has said he intentionally wanted to embody the film’s action through the grammar of images that people may have seen on the news. This grammar is less cinematic and more documentary-like, a tactic also used by Stone on Salvador and by filmmakers Roland Joffé and Chris Menges on The Killing Fields, another film about war correspondents fighting for truth and justice.

The cinematography reflects the vérité feel of actual combat, eschewing the clean camerawork that Garland and regular DP Rob Hardy ASC BSC have used on previous films like Annihilation.

While the main camera was the Sony Venice, they shot a lot of the action scenes using the DJI Ronin 4D, a relatively inexpensive camera costing around £6,000.

“I wanted something truthful in the camera behaviour, that would not over-stylise the war imagery,” explains Garland in a feature he wrote for Empire. “All of which push you towards handheld. But we didn’t want it to feel too handheld, because the movie needed at times a dreamlike or lyrical quality.”

“That more handheld look when it comes to combat stuff [is] in my mind the way I view things,” comments Ray Mendoza, the film’s military advisor. “Watching these handhelds — it’s more visceral.”

The cautionary fable takes place in a near-future America that has split into multiple factions embroiled in a civil war. The Western Forces, an armed alliance of states rebelling against the federal government, is days away from pushing the capital to surrender. In the hopes of getting a final interview with the president (Nick Offerman), Lee (Kirsten Dunst), a veteran combat photographer, travels 857 miles across the country to the White House with an aspiring photographer named Jessie (Cailee Spaeny).

Garland chose to shoot the $50 million movie chronologically in part to capture something more truthful in the actors’ performances. The schedule dictated that they shoot quickly, and move the camera quickly, which also lent itself to a more maneuverable camera. Very few shots in the film use tracks and dollies. The crew also mounted eight small cameras to Lee and Jessie’s press van.

“It does something incredibly useful,” Garland writes of the DJI Ronin 4D. “It self-stabilises, to a level that you control — from silky-smooth to vérité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool. Not right for every movie, but uniquely right for some.”

The point about a combat photographer is that they have to put themselves in a position where they can see the thing that is happening, otherwise they can't take the photo.

The small size and integrated self-stabilisation of the DJI Ronin 4D meant that “the camera behaves weirdly like the human head,” Garland adds. “It sees ‘like’ us. That gave Rob and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.” 

While the camera is not certified as an IMAX camera, Civil War (like The Creator) is presented for IMAX screens because it used IMAX post-production tools and a sound design suitable for the giant format.

Garland provokes by repurposing the images, tools and euphemisms of modern war — airstrikes, civilian targets, collateral damage — onto American soil. Familiar and iconic images, from the streets of New York to the nation’s capital, are radically recontextualised, like the eerily empty streets of London in Garland’s screenplay for the 2002 zombie film 28 Days Later.

As the son of political cartoonist [Nicholas Garland], Garland grew up around journalists. Lee and Jessie, whose last name in the film is Cullen, are named after two war photographers whose work Garland admires: Lee Miller and Don McCullin.

Iconic images from the Vietnam War, of a young girl who had been burned by napalm, of a Buddhist monk who set himself on fire and of the execution of a VC soldier [in a Pulitzer Prize-winning shot by Eddie Adams], “became reasons why journalism did have an effect and changed the public mood,” Garland said after the film’s premiere at SXSW.

“That's partly why photojournalists are at the heart of this film,” he said. “Often modern journalism of that sort is videoed, rather than stills. But journalism can be fantastically powerful, provided that it's being listened to. And one of the really interesting things about the state that the U.S and the U.K and many others are in right now is that the warnings are all out there on all sides of the political divide, but for some reason they don't get any traction.”

“Is it just that we're not able to absorb information because of the position we already hold?”

Hence, he decided to take such polarization out of Civil War, to the point of refusing to engage with how it started – and instead tries to find points of agreement. It is “de-politicized for a political reason.”

Modern Warfare

It is exceptionally difficult, however, to make a war movie that is, in fact, anti-war.

“War movies find it very, very difficult to not sensationalise violence,” Garland says in A24’s production notes. “Most of the anti-war movies in a way are not really anti-war movies. They have so much to do with camaraderie and courage. It's not that they are trying to be romantic, but they just become romantic. They sort of can't help it because courage is romantic and tragedy in a way is romantic.”

He points to films like Stanley Kubrick’s Paths of Glory (1957) or the harrowing Soviet war epic Come and See (1985) as rare exceptions.

So, in Civil War, when characters are shot, they don't have squibs on them spouting fountains of blood. You don't see big blood splatters up the wall behind them. They just fall down. Blood then leaks across the ground if they've been lying there for the right amount of time.

“There's nothing really glamorous about a mass grave,” he said. “There's nothing really romantic about it.”

Similarly, they deployed blanks for gunfire (rather than relying purely on audio FX). These make a loud noise, like a .50 calibre gun, that people react to instinctively by flinching.

The film’s explosive denouement, featuring a siege of the capital, had to have each beat choreographed to be as tactically authentic as possible. Filmed on soundstages in Atlanta, it involved 50 stunt performers, cars, tanks, explosions and gunfire. The aim was to put the audience in the middle of the battleground, surrounded by chaos.

“We'd have a map of the area sketched out, and we would be drawing arrows and drawing little cones over where a camera was positioned,” Garland explains. “You could put together quite sophisticated choreography: this tank will move here, as this Humvee drives forward fast towards the other Humvees, and as it passes, that's when these soldiers will move down. We would just run that choreography again and again and again.”

He gave Mendoza free rein to choreograph the sequence, so long as nothing was embellished. “I hired a lot of veterans, and it's great to see them move through it, get into the scene of it,” Mendoza says. “It's pretty accurate, just even from the dialogue, to the mood, and a lot of the gun fighting.”


Saturday 27 April 2024

“Civil War:” The Camerawork to Capture the Chaos

NAB


Perhaps only an outsider could update the American Civil War of the 1860s and imagine what would happen if similar events tore apart the United States today.

British writer-director Alex Garland didn’t have to look far for inspiration: The January 6, 2021 mob attack on the Capitol was a vivid insurrection filmed live on TV in broad daylight. While these events are a thinly disguised template for the finale of his film Civil War, Garland seems less interested in apportioning blame to the political right or left than in asking why we might end up there again.

You could see similar events play out in Britain or any other country, he told an audience at SXSW after the film’s premiere. “Any country can disintegrate into civil war whether there are guns floating around the country or not,” he suggested, adding that “civil wars have been carried out with machetes and still managed to kill a million people.”

“I’ve known a lot of war correspondents because I grew up with them,” Garland said in the same on-stage interview. “My dad worked [as a political cartoonist] on the Daily Telegraph. So I was familiar with them.”

Garland showed cast and crew the documentary Under the Wire, about war correspondent Marie Colvin, who was murdered in Syria. His lead characters are news and war photographers played by Kirsten Dunst and Cailee Spaeny, whose characters’ names echo those of acclaimed photojournalists Don McCullin and Lee Miller. Murray Close, who took the jarringly moving photographs that appear in the film, studied the works of war photographers.

“There are at least two [types of war photographer],” said Garland. “One of them is very serious-minded, often incredibly courageous, very, very clear-eyed about the role of journalism. Others, who have served like veterans, are having to deal with very deep levels of disturbance (with PTSD) and are constantly questioning themselves about why they do this. Both [types] are being absorbed and repelled at the same time.”

He represents both types in the film. While it is important to get to the truth — in this case, the money shot of the execution of the US President — he questions if that goal should take priority over everything else they come across in their path. At what point, Garland asks, should the journalist stop being a witness and start being a participant?

“Honestly, it’s a nuanced question, nuanced answer,” he said. “I can’t say what is right or wrong. There’s been an argument for a long time about news footage. If a terrible event happens, how much do you show of dead bodies? Or pieces of bodies? Does that make people refuse to accept the news because they don’t want to access those images? Or worse, does it make them desensitized to those kinds of images? It’s a tricky balance to get right.”

In this particular case, one of the agendas was to make an anti-war movie if possible. He refers to the controversial Leni Riefenstahl-directed 1935 film Triumph of the Will, which is essentially Nazi propaganda.

Garland didn’t want to accidentally make a Triumph of the Will, he said, by making war seem kind of glamorous and fun. “It’s something movies can do quite easily,” he said. “I thought about it very hard and in the end, I thought being unblinking about some of the horrors of war was the correct thing to do. Now, whether I was correct or not, in that, that’s sort of not for me to judge but I thought about it.”

Garland establishes the chaos early, as Dunst’s character covers a mob scene where civilians reduced to refugees in their own country clamor for water. Suddenly, a woman runs in waving an American flag, a backpack full of explosives strapped to her chest.

“Like the coffee-shop explosion in Alfonso Cuarón’s Children of Men, the vérité-style blast puts us on edge — though the wider world might never witness it, were it not for Lee, who picks up her camera and starts documenting the carnage,” writes Peter Debruge in his review for Variety.

To achieve the visceral tone of the action, Garland decided to shoot largely chronologically as the hero photographers attempt to cross the war lines from California to the White House.

After two weeks of rehearsals to talk through motivations and scenes and characters, Garland and DP Rob Hardy then worked to figure out how they were going to shoot it. He wanted the drama to be orchestrated by the actors, he told SXSW. “The micro dramas, the little beats you’re seeing in the background, are part of how the cast have inhabited the space.”

Spaeny offers insight into Garland’s “immersive” filming technique in the film’s production notes. “The way that Alex shot it was really intelligent, because he didn’t do it in a traditional style,” she says. “The cameras were almost invisible to us. It felt immersive and incredibly real. It was chilling.”

A featurette for the movie sheds light on Garland’s unconventional filming style, in which he describes Civil War as “a war film in the Apocalypse Now mode.”

While the A-camera was a Sony VENICE, they made extensive use of the DJI Ronin 4D-6K, which gave the filmmakers a human-eye perception of the action in a way that traditional tracks, dollies and cranes could not. They also bolted eight small cameras to the protagonists’ press van.

To Matthew Jacobs at Time Magazine, Spaeny likened the road scenes to a play, adding, “unlike theater, or even a typical movie shoot, Civil War changed locations every few days as the characters’ trek progressed, introducing constant logistical puzzles for the producers and craftspeople to solve.”

Dunst’s husband Jesse Plemons makes a brief appearance in the film, but commands the scene as a menacingly inscrutable soldier carrying a rifle and wearing a distinct pair of red sunglasses.

“I can imagine that people might read some kind of strange bit of coding into Jesse Plemons’s red glasses,” Garland says in A24’s notes. “Actually, that was just Jesse saying, I sort of think this guy should wear shades or glasses. And he went out himself and he bought six pairs, and we sat there as he tried them on, and when he got to the red ones, it just felt like, yep, them.”