Wednesday, 20 November 2024

Behind the Scenes: How sound and vision get under the skin in Nickel Boys

IBC

article here

Re-recording Mixer Tony Volante and Cinematographer Jomo Fray explain how they told a drama from the perspective of its lead characters.

Staying true to the format of Colson Whitehead’s 2019 best-selling novel, director RaMell Ross has shot Amazon/MGM feature Nickel Boys in a first-person point of view, presenting a unique challenge for sound design and dialogue mix.

Inspired by real events, the story follows two African American boys, Elwood and Turner, who are sent to an abusive reform school called the Nickel Academy in 1960s Florida. Because of the strict POV design, we rarely see the protagonists’ faces, so the sound - both heightened and naturalistic - takes precedence.

Tony Volante, the lead re-recording mixer and co-supervising sound editor, worked with re-recording mixer Dan Timmons to create the final Dolby Atmos mix. They previously teamed on Ross’ Academy Award-nominated documentary Hale County This Morning, This Evening (2018).

“What made this specific project special was RaMell’s choice to shoot Nickel Boys predominantly in first person perspective,” Volante says. “This would bring the soundscape to the forefront of the primary characters' emotional storytelling.”

There were numerous discussions with Ross about how to present the first-person perspective artistically and authentically through sound. “Throughout most of the film it is quite obvious visually to the viewer who is in first person, but there are a number of exceptional moments when you think you’re not in first person until midway through a scene. That’s when the sound moves with the camera, catching every detail, and you realise you are the camera.”

By creating a specific dialogue sound in the mix for the ‘camera’s’ voice, through added ADR breathing and unique dialogue treatment, the mixers immerse the viewer in the world in front of and 360 degrees around the camera. Moments like these keep pulling the viewer back to the first-person POV.

“Creating a sonic perspective that would accurately and entertainingly portray the first-person point of view was a unique challenge for the dialogue mix. Initially, I began mixing the POV voice mono/centre while panning the other dialogue and world around it.

“This sounded quite nice, but wasn't really different from how I usually approach a traditional film mix. For Nickel Boys, we knew we wanted the first-person POV to have a unique sound relative to the rest of the dialogue in the film. The POV voice needed its own ‘space’ that separated it slightly from the film, but was subtle enough not to pull the viewer/listener out of it.”

Their concept was to pull the voice slightly off the screen and hover it within the camera/viewer position. Volante began by creating a wider soundscape for the POV voice while monitoring in Atmos.

“This sounded good in Atmos, but down mixes did not capture the dialogue effect accurately,” he recalls. “It wasn’t going to be possible to do completely separate first-person dialogue treatments for all the different mix formats, so I wanted to absolutely make sure the stereo mix would accurately portray the POV.

“To come up with the POV sound, I made the decision to monitor in stereo with my Neumann NDH20 headphones. When mixing with headphones, I can more accurately hear the spatial differences—the proper amount of reverb and stereo imaging—that sometimes get masked within the mixing room’s acoustics.”

He experimented with various plug-in image settings to discover which ones sounded best. After listening to how these would upmix in 5.1 and Atmos, he chose the one that translated best across all formats.

“Hearing how the effect translated from stereo to upmix, rather than the more traditional approach of starting big and checking how it ends up downmixed in stereo, ultimately proved a more accurate approach.”

Dialogue editor Michael Odmark created a set of tracks containing all the first-person POV dialogue. This allowed a customised spatial treatment during the mix for those particular clips.

“It was clear early on in the process that the POV clips needed to start as a stereo image before adding any additional treatment,” says Volante. “I added a slight stereo spread plug-in to the chain of these clips, followed by an upmix plug-in to spread the dialogue to multiple channels, including the surround channels.

“Some scenes needed a little extra spatial enhancement, so I added a reverb send for the POV that was used sparingly for an enhanced 3D ‘in your head’ effect. Despite having an immersive configuration that worked for most of the film, I discovered that minor tweaks were required throughout the mix depending on location or quality of the voice recording.” 
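Volante’s actual plug-in chain isn’t specified beyond “a slight stereo spread followed by an upmix”, so purely as a hedged illustration of what a stereo-spread stage does, here is a minimal mid/side widening sketch in Python/NumPy. The width value, the slight channel decorrelation and the placeholder tone are assumptions for the example, not the film’s settings or tools.

```python
import numpy as np

def widen_stereo(left, right, width=1.3):
    """Mid/side widening: scale the side (L-R) signal relative to the mid.

    width > 1.0 spreads the stereo image wider; width = 1.0 leaves it alone.
    A crude stand-in for a 'stereo spread' stage, not the actual plug-in used.
    """
    mid = 0.5 * (left + right)     # centre content (where a mono voice sits)
    side = 0.5 * (left - right)    # spatial difference between the channels
    side = side * width            # widen the image
    return mid + side, mid - side  # back to left/right

# A mono POV line duplicated to both channels has side = 0, so a tiny
# decorrelation (a slightly attenuated right channel) is assumed here purely
# to give the widener something to work with.
sr = 48_000
t = np.arange(sr) / sr
voice = 0.1 * np.sin(2 * np.pi * 220 * t)   # placeholder 'dialogue' tone
left, right = widen_stereo(voice, 0.95 * voice, width=1.2)
```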

Production sound mixer Mark LeBlanc recorded some production sound with a stereo/MS microphone during photography. There are many moments in the film where these recordings were used to enhance the spatial environments and capture a natural stereo spread of the background voices at Nickel Academy, further enhancing the POV perspective.

“I love panning, so this film was a dream to work on,” says Volante. “All dialogue in the film was panned to accurately portray the POV. Even for the slightest off-centre image, dialogue is panned, following the characters throughout the scene.”

Timmons used a similar technique for SFX and Foley mixing, panning not only hard EFX and Foley but also unconventionally shifting the background pan viewpoint to enhance the POV camera motions.

Volante says his colleague’s sound design during the ‘White House’ punishment scene is a highlight of how the film immerses the viewer in Elwood’s POV. The viewer doesn’t see his beating, but sound conveys the brutal whipping while black and white archival footage is shown.

Source music also played an important role in portraying the first-person narrative. Special care in panning was used throughout on the music tracks playing out of a radio or from a record player. Panning was, needless to say, very active during this mix.  

The dynamic score, by Scott Alario and Alex Somers, was delivered in stereo stems. Volante says, “For music, especially for a movie like this, I prefer stereo stems to 5.1 stems since I can be more precise with the panning and channel placement. I like catching movement on screen and panning components in time with the score. Using the stereo stems, I was able to create spatial movement inside scenes, which further enhanced the sensory experience of the first-person POV.”

The feeling of sight

Cinematographer Jomo Fray played an intimate role in adapting the story. “From the first time I read the script I saw every single moment in the movie in a first-person perspective,” he says. “We constantly asked ourselves how we could manipulate traditional film language to work from a sentient perspective.”

By this, he means images that don’t just convey what is being seen from the point of view of the protagonists but how they are feeling too.

“I feel like the promise of cinema is the ability to walk in another person’s shoes. To feel what it’s like to be another human on Earth. What’s so unique about this film is that it truly does invite you into being in the body of a young black boy during the ‘Jim Crow’ years.”

He is referring to laws introduced in the Southern States that enforced racial segregation, ‘Jim Crow’ being a pejorative term for an African American.

“I want this film to invite you into living concurrently with them and their thoughts and their feelings as they move through the world.”

The technique they evolved was to shoot long takes, to maintain the flow of the camera with the actors, and to capture details to transition in and out of scenes. These details, or inserts as Fray calls them, include a deck of cards being shuffled and a gold bracelet.

“Inserts are the visual idea of the things that our mind remembers after an event that are more fragmentary, but still are a part of that memory,” Fray explains.

“These details became really important to us, not only for the edit, but also to try to describe the experience of sight,” he says, “the way that sometimes we hyper-fixate on things especially in moments of inhumanity.”

“After a car crash people rarely remember the impact itself as if our brain deletes the memory. There’s some aspect of that that feels tied to trauma. We tried to build in that kind of camera language throughout the film as Elwood tries to interpret what has been a very traumatic experience for him.”

Fray shot on a Sony Venice with Panavision large-format spherical primes often in shallow focus and with a 4:3 aspect ratio throughout. The format helped integrate archival collages of American history including the moon shot program of the late 1960s and black and white stills of Black children who may have endured the same treatment over the years.

“For me, the use of archive felt like a cascading of thoughts so using 4:3 was a way of trying to lessen the artifice and immerse the audience into that,” he says.

Combining a 4:3 aspect ratio with shallow focus was also a means of articulating the feeling of sight. “Humans have an incredibly wide field of view but our experience of looking is of our brain forming selective focus on where we look. Our brain will perceive something to be in focus whereas everything else is out of focus.

“RaMell and I wanted to capture that feeling of selective focus and of the way that the brain then puts together meaning in what we see. Hopefully, we are inviting the audience to not just see through these boys’ eyes but be in their thought process.”

Of the lens choice he says: “The way that they shot volume felt really special. Especially for a movie like this, where you’re seeing from the eyes of a character, it’s really important to have a sense of presence about the relationship between the camera’s eye and the rest of the space around you.”

Fray recalls a moment in production when Aunjanue Ellis-Taylor (playing Elwood’s grandmother) has to give Elwood some devastating news. “As we were shooting that scene, I’m thinking as Elwood and I found it really hard to look her in the eye. When I’m hearing her start to say something that I know is painful for her to have to say, my camera’s gaze drifts away.

“There was a moment of silence and Aunjanue did something unscripted. She put her hand out and she said, ‘Elwood, look at me son’. As an operator, I had to look back. I had to meet her gaze. After that take, we all understood with clarity why a sentient perspective is so interesting.

“It isn’t just that the image is inside the scene, it’s that the image itself has to respond and react to the vulnerability that the actors are giving. There’s just such a deep intimacy that is created here not only for the viewer, but even for us as operators. As an operator you compose in a different way when someone is acting as a mother and physically hugging you as a mother. That changes how you think of composing the scene.”

In one scene set at a bar, an adult Elwood meets a former inmate from Nickel. Daveed Diggs, the actor playing the older Elwood, suggested to Fray that he open up the right side of his shoulder to allow the other character to come into shot. He then suggested that he close his shoulder when Elwood starts feeling isolated by the conversation. Doing so pulled the secondary character out of the frame.

“It was co-authoring back and forth,” says Fray. “As the cinematographer, I was invited to connect and feel like a partner to the actors. There are also shots where the actors were invited to have a sense of co-authorship of the image. How they moved their body fundamentally changed the image.”

This is particularly the case for scenes set around 2010, which depict the older Elwood. To convey this, the camera is mounted on the actor themselves using a rig called a SnorriCam. It was positioned to keep the back of their head in shot, with the effect of giving the character, decades on and still traumatised, a feeling of disconnect with his younger self.

“The effect is what I call a second person perspective which is being able to see yourself in space but with a slight dissociation with yourself that just felt right for how trauma is remembered.”

Sunday, 17 November 2024

How 'Endurance' brought Ernest Shackleton’s epic Antarctic adventures to life

RedShark News

article here

Considered the world’s first documentary feature, South was a record of Sir Ernest Shackleton’s 1914 to 1916 Endurance expedition to Antarctica, during which the ship was crushed by ice, stranding the crew. Over a hundred years after the ship was lost beneath the ice and Shackleton had led his crew to safety in an epic feat of survival, a sponsored expedition organised by the Falklands Maritime Heritage Trust attempted to find and film the wreck.

They did so, with historian and presenter Dan Snow on board the South African icebreaker Agulhas II to publicise the event. That 2022 expedition and the original heroic failure are the subject of new film Endurance, which not only colourises the original footage but uses AI to bring Shackleton’s voice back to life.

It’s directed by Elizabeth Chai Vasarhelyi and Jimmy Chin, the Oscar-winning filmmakers of Free Solo and the Emmy-winning documentary The Rescue, about the against-all-odds rescue of 12 boys and their coach from inside a flooded cave in northern Thailand.

Bob Eisenhardt edited both of those docs and is also a producer on Endurance. The production had access to the 4K restoration of South from the BFI that was released in 2019.

“We wanted to make sure we could tell the Shackleton story and give it weight in the film because it's just one of the greatest survival stories there is,” says Eisenhardt.

There was around 40 minutes of remastered footage but in the process of accessing it from the BFI they discovered another unrestored reel of 10 minutes.

 “The Hurley footage is spectacular. It looks like it was shot yesterday, but unfortunately when they abandoned the ship, they abandoned the cameras.

“So we had half the story we could tell through Hurley and the other half of the adventure, the fight for survival, was dramatically reconstructed.”

They filmed original recreations in California and Iceland, on glaciers and recreated boats in actual ice and freezing temperatures.

Treating the archive

“For the longest time I was just looking at the footage as a 1.33:1 image in black and white and it felt like we should keep that as an artifact. But when compared to the other material it also began to look like you were looking through a little porthole.

“The first decision was to blow it up a little to 1.66:1 and as soon as we did that we could see what was happening in the images much better. Then we talked about how we could make the whole story more immediate. The big problem with history is how you make it resonate for audiences. That’s when the idea of trying colour came in.”

Hurley had in fact tinted his original film with sequences in blue, green and amber. It looks somewhat crude today but was cutting edge for its time.

However, their agreement with the BFI explicitly forbade any colourisation. “We couldn’t touch the footage,” says Eisenhardt. “So we ran an experiment. We had our partners at BigStar use AI to colour a sample of the image and it looked amazing.

“I’d been living with the black and white footage for six months and suddenly the images jumped off the screen. You can see that they’re eating peas for dinner and there was Shackleton in the middle of the scenes which you never really noticed before.”

They still had to get permission from the BFI. “They were adamant against colouring it. The archivists were very afraid of the Peter Jackson effect - that we would be creating something completely new. We wanted to stay away from that too, but they also insisted that the colours be accurate. That was very complicated but we solved it by devising a colour wash. When the BFI saw the colour wash samples they allowed us to do it. The fact that we were able to use colour gave the story so much more life.”

Like Jackson’s work on the Imperial War Museum’s archive to make They Shall Not Grow Old, however, they did use AI to interpolate or create new frames to enable the film to be presented at 24 fps.
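The article doesn’t name the interpolation tools used, so the following is only a toy sketch of the underlying idea: synthesising in-between frames that were never photographed so hand-cranked footage can play back at 24fps. A naive cross-fade like this ghosts on motion, which is exactly why optical-flow and AI-based interpolators are used instead; the frame sizes and rates below are illustrative assumptions.

```python
import numpy as np

def blend_frame(frame_a, frame_b, t):
    """Naive in-between frame: a weighted cross-fade between two frames.

    t = 0.0 returns frame_a, t = 1.0 returns frame_b. Real AI interpolators
    estimate motion between the frames rather than fading, so moving objects
    don't ghost, but the goal is the same: invent frames that were never shot.
    """
    mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mix.astype(np.uint8)

# Lifting roughly 16fps hand-cranked footage towards 24fps means creating
# about one new frame for every two originals (illustrative numbers only).
a = np.zeros((1080, 1440, 3), dtype=np.uint8)       # placeholder frame N
b = np.full((1080, 1440, 3), 200, dtype=np.uint8)   # placeholder frame N+1
in_between = blend_frame(a, b, 0.5)
```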

Colouring the Endurance

“We did a lot of research around the colour of clothing, the colour of ships and other textiles and materials of the voyage,” explains Josh Norton, founder and creative director at New York-based creative agency BigStar (styled BGSTR), which previously worked on Free Solo. “We standardised those colours and spent weeks digitally crafting each frame with some advanced software and AI to get to our final result.”

They determined that AI was not of a high enough fidelity to do all the work. “You can do an okay job quickly using straight out of the box AI approaches and effects - and there's a lot of different packages out there that do that,” Norton says, “but the amount of control that we needed to stay true to the material and to give a consistent result needed a large amount of manual craft work.

“This includes tracking clothing, and having just the right Burberry green for all the parkas, making sure that there was no fluctuation in the tones on the painted surface of the Endurance. There’s a degree of exactitude that needs to be achieved when it comes to the details of the tactile nature of that world. Every piece of linen and rope, every piece of wood, all the hair on the dogs, and the colour of the clothes needed very specific attention. AI processes as they exist right now cannot afford that attention.”

Norton says, “A wash technique really allows the black and white imagery to create all the value and contrast difference within the frame. We're just simply adding colour rather than adding any other kind of visual information or overriding the black and white. We didn't want to create a highly saturated result. We wanted the material to still feel aged.”
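BigStar’s pipeline isn’t documented in detail here, but one way to read “simply adding colour” while letting the black and white carry all the value is to keep the original frame as luminance and add only low-strength chrominance from a painted colour layer. Below is a rough NumPy sketch of that idea, not the studio’s actual method; the function name, parameters and float-image assumption are all invented for illustration.

```python
import numpy as np

def colour_wash(bw_frame, colour_layer, saturation=0.5):
    """Keep the black-and-white frame as luminance and add only chrominance.

    bw_frame:     HxW float array in [0, 1] -- the restored footage
    colour_layer: HxWx3 float RGB array in [0, 1] -- rough per-region colours
                  (parka green, hull paint, etc.) painted or tracked by artists
    saturation:   how strongly the colour reads; kept low so the result still
                  feels aged rather than fully 'colourised'
    """
    # Chrominance of the colour layer (BT.601-style luma subtracted out)
    luma_c = (0.299 * colour_layer[..., 0]
              + 0.587 * colour_layer[..., 1]
              + 0.114 * colour_layer[..., 2])
    chroma = colour_layer - luma_c[..., None]       # colour minus its own brightness
    # Add scaled chrominance on top of the original black-and-white values
    washed = bw_frame[..., None] + saturation * chroma
    return np.clip(washed, 0.0, 1.0)

# e.g. a frame washed with a flat greenish region painted by an artist
bw = np.random.default_rng(0).random((540, 720))
paint = np.zeros((540, 720, 3))
paint[..., 1] = 0.45                                 # hypothetical 'parka green' layer
result = colour_wash(bw, paint, saturation=0.4)
```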

BigStar also did the graphic design and title sequence for the show, as well as taking in all the survey data and high-res photography to create a 3D model of the wreckage.

“The 3D model serviced several points throughout the film as far as explaining the status of the wreck and also giving us material for the title sequence itself,” Norton says.

Exploration and human endurance

The doc finds parallels between Shackleton's story and the recent expedition to find the sunken ship. “Thematically they were very similar,” says Eisenhardt. “Both expeditions got stuck in the ice, they suffered ups and downs and harsh weather. Both are tales of friendships. To make the story work, you had to find specifics that kind of spoke to each other. The idea of exploration as pushing the boundaries is a universal theme. But finding those moments in both the Shackleton footage and the new expedition that ‘talk’ to each other so that it would enhance both stories takes a lot of time. It took months to figure out the right percentage of each story to include.”

Footage of the 2022 expedition itself amounted to over 500 hours, the bulk of which came from three cameras onboard the remotely operated submersible that would spend six hours at a time underwater searching for and filming the wreck. The film’s co-director Natalie Hewit was onboard the Agulhas II to supervise.

“We watched everything, literally,” says co-editor Simona Ferrari. “We had a team that went through the footage shot 3000 meters below sea level so instead of having six hours’ worth we got maybe 2-3 hours per dive.”

AI brings Shackleton’s voice to life

The documentary also narrates portions of the 1914-16 expedition in the words of Shackleton and his crew, who all kept diaries. “There were thousands and thousands of pages, some of it unpublished,” Eisenhardt relates. “All that had to be copied and sorted for the best material and from there we began a discussion about AI. It was a real discussion about whether using AI to voice their words was the proper thing to do. Our conclusion was that AI is the perfect tool for this situation.”

The alternative might be to have some celebrity or actor pretend they're Shackleton, but if they could use AI to replicate his voice speaking words he wrote, that felt closer to the truth.

It was easier said than done though. The only recording of Shackleton’s voice lasts just four minutes and was made on an Edison Phonograph wax cylinder.

“The noise on the recording was as loud as his voice, and he was speaking strangely since he was projecting into a giant megaphone. It was unusable.”

They turned to Ukrainian AI voice specialists Respeecher which, among other work, had resynthesized Mark Hamill’s voice for a young Luke Skywalker in an episode of The Mandalorian. They were able to scrub away the noise to leave a clean vocal track. Meanwhile, the crew’s diaries were whittled down to the raw material from which they could extract dialogue. They hired an actor with a neutral accent to record the dialogue with the appropriate intonations, from which Respeecher made a model. They then applied their AI process to render those recordings in the voices of Shackleton and his crew.

“Everybody knew the story of The Rescue because it had been headline news for over a week. People know the outcome of Shackleton’s heroism. I think what matters to me is finding out who these people are and what makes them tick. That means digging deeper into the characters and wondering why they did what they did. We started from asking what their motivation is to do what they do and we build up from that.”

The same filmmaking team are already embarked on another project, this one about climbing Everest. 

History of South

The original footage from South was donated to the BFI in the 1950s and the archive began to restore it back in 1994. There was no one complete original negative source for South. Overall, 99 different copies of film relating to Shackleton in the Antarctic, varying in length and age, were examined to piece together a restored version as authentically as possible.

The restoration used original camera neg from the expedition, prints from the sound reissue, nitrate release prints from the EYE Filmmuseum in the Netherlands with colour tinting, as well as 18 photographic glass slides.

The original photochemical techniques for colour tinting and toning were also recreated by the conservation team. This was completed in 1998, then digitally remastered for the film’s centenary, with renewed intertitle cards and a newly commissioned score by Neil Brand, in 2019.

 

Thursday, 14 November 2024

Sports piracy: who’s pulling ahead in the AI arms race

IBC

article here

Live sports is the battlefield as AI plays both sword and shield in the ongoing war with piracy.

Piracy of sports streaming is rampant. So much so that an ESPN reporter was accused of watching an illegal streaming site when he posted a comment relating to a recent NFL game on social media.

Meanwhile, having paid €400m a season for rights to cover France’s top football division, Ligue 1, sports streamer DAZN set a target of 1.5 million subscribers only to find that in the first week of its broadcasts around 200,000 people were illicitly streaming its coverage.

Cybercrime is endemic according to anti-piracy solutions vendor Synamedia, which suggests the sports industry is missing out on $28bn a year as a result. Even that figure, calculated in conjunction with Ampere Analysis, accepts that a hardcore 26% of viewers will never pay.

“Piracy is exacerbated by the fragmented content market,” says Tim Pearson, Product, Solution and Partner Marketing Leader at Nagra. “There’s anecdotal consumer feedback that says ‘I’ve paid for two or three services, I can’t pay for any more so I’ll access this one through a much cheaper service’. The problem is that many consumers don’t realise that they are buying into a criminal enterprise that is probably harvesting data from them as well.”

Werner Strydom, Head of Advanced Technology and Innovation, Irdeto, says: “There is more live streaming piracy than in the past because live is the most valuable content to try and monetise with a pirate business model.”

AI sword

AI is escalating the problem. For example, Generative AI is accelerating the process by which pirates can create teasers, clip content and publish on social media to drive audiences to their platform.

Pearson says: “Pirates run sophisticated marketing organisations and AI is making their fake content look as good as if it were created by someone like the BBC. When content looks this professional, it dupes the consumer into thinking that the site must be legitimate.”

From a forensics perspective, the use of AI by pirates is a genuine concern. High-value content like a Champions League football match is embedded with a watermark that is so subtle that pirates find it difficult to determine whether it’s present or not, let alone remove it.

“Usually, [pirates] do a lot of content manipulation in the hope that if there’s a watermark present, it damages it to such a degree that you can’t read it again,” says Strydom. “With AI they probably no longer have to guess; they will know for a fact.”

Similar risks are now occurring with upscaling. “Even if pirates have stolen a crappy SD version, they can upscale it to HD. That used to take a lot of processing and video editing skills but AI has made it a lot easier.”

That upscaling process can also scrub away the watermark.
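Forensic watermarking products such as those from Nagra or Irdeto are proprietary, so the following is only a generic, textbook-style sketch of why a mark can be both invisible and hard to confirm: a key-seeded pattern is added at very low amplitude, and only someone holding the key can correlate for it. The amplitude, threshold and frame values are assumptions for illustration.

```python
import numpy as np

def embed_watermark(frame, key, strength=0.5):
    """Add a key-seeded +/-1 pattern at an amplitude far below the image noise."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)
    return frame + strength * pattern, pattern

def detect_watermark(frame, pattern, threshold=0.25):
    """Correlate against the expected pattern after removing the frame's mean,
    so the statistic isolates the hidden signal. Without the key a pirate
    cannot even run this check, which is why they resort to blind degradation
    (cropping, re-encoding, upscaling) and hope the mark no longer reads."""
    score = float(np.mean((frame - frame.mean()) * pattern))
    return score > threshold, score

# Toy demo on a synthetic 720p luma plane (illustrative values only).
rng = np.random.default_rng(0)
frame = rng.normal(128.0, 30.0, size=(720, 1280))
marked, pattern = embed_watermark(frame, key=42, strength=0.5)
found, score = detect_watermark(marked, pattern)    # score near 0.5 -> present
absent, _ = detect_watermark(frame, pattern)        # score near 0.0 -> absent
```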

AI shield

The shield side of the picture is that AI is also accelerating defensive capabilities. Andy Haynes, SVP of Engineering at Friend MTS, an anti-piracy provider which counts UEFA among its clients, says: “AI is a really hot topic. There’s a lot of misinformation going around about how vulnerable the industry might be to AI, but it can also be extremely useful. The real value for AI is going to be in a lot of small things that help us work more efficiently, rather than one big system that just fixes everything for you.”

He reports: “We’ve seen cases where people have got a TV on in the background and they’re effectively doing the commentary with the live broadcast. It’s not the same content as was originally broadcast but we can start to use AI to detect that and then investigate further.”

The power of AI mostly comes down to automating what used to be an extremely manual workflow. A lot of metadata gets added manually based on human judgement and with human interpretation of the results. The latest generations of AI are making it a lot easier to automate those processes.

“It’s not completely taking the human out of the loop, and I doubt that will ever be the case,” says Strydom. “But what it certainly does is make it possible for us to extend the scope of our [web] crawling to look for piracy and to process a much larger quantity of potential piracy candidates.”

Dealing with live piracy requires responding to an illegal stream within minutes of an event having started, and according to Pearson, this is a major area where AI can help in triggering and accelerating workflows automatically.

“If an algorithm detects a watermark or a fingerprint of content that’s distributed illegally, the model is also smart enough to be able to react and deal with it,” he says. “You can scan for a lot more patterns and do a lot more pattern matching with AI than you can do conventionally.”
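The fingerprinting these vendors use is proprietary; purely to show the shape of the idea of matching candidate streams against reference pictures at scale, here is a toy “difference hash” sketch. Everything about it (the 8x8 grid, point sampling, the match threshold of 10 bits) is an assumption for illustration, not any vendor’s algorithm.

```python
import numpy as np

def dhash(frame, size=8):
    """Crude perceptual 'difference hash' of one greyscale video frame.

    Point-sample the frame down to (size x size+1) and record whether each
    sample is brighter than its right-hand neighbour, giving a 64-bit
    signature that survives re-encoding and rescaling far better than an
    exact checksum would.
    """
    h, w = frame.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size + 1).astype(int)
    small = frame[np.ix_(ys, xs)]
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(a != b))

# Compare a candidate stream's frame against the rights holder's reference;
# a small distance suggests the same pictures despite recompression noise.
rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(720, 1280)).astype(float)
candidate = reference + rng.normal(0.0, 4.0, size=reference.shape)
is_match = hamming(dhash(reference), dhash(candidate)) <= 10   # threshold assumed
```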

For a popular live event, there may be tens of thousands of potential streams that need to be investigated, but only a fraction are relevant to the actual event requiring protection. “Step one is differentiating between what is in scope and what is out of scope,” says Strydom. “Using AI to make a judgement call about whether something is tennis or football or some other sport is an established capability.”

On top of that, logo recognition can be added to filter out legitimate candidates. The next step might be to issue takedown notices to the few streams that are left. There are, however, risks of scooping up legal streams in the trawl.

“A lot of operators are not entirely comfortable with takedowns being completely automated,” Strydom continues. “Instead, they may try to prioritise known pirates who are causing the most harm from a brand or revenue perspective, and not worry too much about the smaller guys. Many operators are willing to accept a certain degree of piracy because they don’t want to create too much disruption for the legitimate customer base.”

In Italy, anti-piracy platform Piracy Shield – which has been in operation since the beginning of this year and is managed by the nation’s media regulator to protect sports rights on behalf of Prime Video, DAZN, and Sky – has managed to take down legitimate providers on more than one occasion.

“Clearly that’s not a good thing,” Haynes says. “We don’t want that to happen [to us]. We have a certain amount of reticence about using AI in that decision-making process for the very reason that if it gets it wrong, the negative consequences are pretty staggering.”

Haynes says police forces are similarly hesitant about using AI in pre-emptive decision making. “They can use it for pattern analysis and to predict where things might happen, but it’s usually a very robust process when it comes to enforcement because you have to be able to stand by how you’ve reached a conclusion. It’s not good enough to just say ‘That’s what the model spat out’. You’ve got to have the actual evidential chain.”

Minority report

One of the most compelling ways AI could transform the battle against sports piracy is through advanced content recognition and detection systems. Could algorithms predict potential instances of piracy before they occur?

Haynes is sceptical. “I wouldn’t like to say ‘never’, but I don’t think you’d need to detect it before it appears so much as you just need to find it sooner. Especially for boxing matches or events that can be over very quickly, the time taken to respond is hugely important.

“You can assume people will be trying to pirate an event and you could probably use some AI behavioural analysis to judge where to look. You don’t necessarily need to recover the stolen goods, but you need to have enough evidence to suggest that someone is committing a crime. We have techniques in place in certain areas that look for indicators of piracy to support that evidential chain,” he says.

Irdeto’s Strydom says it is possible to predict piracy based on subtle signals that have been accumulated as data sets, as a result of interactions by customers with call centres and conditional access broadcast systems.

“The patterns are so subtle that a human probably can’t see anything wrong with it but if the data set were large enough and you train a model on ‘normal behaviour’ versus abnormal behaviour, it may be able to filter for possible pirates. Whether you’d go a step further and pre-empt piracy before it happens, I’m not sure. It sounds a little sci-fi, even a little scary. We’re not experimenting with anything in that area right now.”

Disrupt to desist

Disruption is sometimes better than cessation, especially when blocking streams becomes a game of whack-a-mole. “Shut down a pirate stream, another one will open up. Whereas if you disrupt the experience, it is less easy for the pirate to monitor,” Pearson says. “That’s where AI can really disrupt the operation. For example, you could put up an overlay on the illegal stream to tell viewers it’s not authentic but they can scan a QR code and watch the rest of the game legitimately.”

Various other counter measures can be deployed using AI that will disrupt the viewing experience. “Ultimately, it’s not necessary to always kill the stream, but to make it such a bad experience that subscribers are going to give up trying cheap versions that don’t work and will then convert back to paying subscribers.”

AI-driven dynamic pricing models

Another possible deterrent is to alter the price of legitimate streams depending on precise consumer analytics based on their preferences and local market.

“It’s definitely possible,” says Pearson. “In fact, one of our products (Insight Negotiation Agent) uses AI in a virtual call centre agent, so if a customer thinks they’re paying too much for sports they could re-negotiate a new price within the guardrail set by the operators, via the virtual customer agent.

“That technology is here. Dynamic pricing on a match day is also possible. However, you’ve got to be careful that your core paid-up subscribers don’t lose out to people who are paying a third less after converting from a pirate service. It’s a balance, and tricky to do on a match day if you’re working on a pay-per-view basis.”

After DAZN had its nose bloodied on the first weekend of its Ligue 1 coverage, it offered a temporary discount on its monthly subscription fees (from €39.99 to €19.99) in an attempt to address pricing concerns. According to a survey from French market research and polling company Odoxa, nearly two-thirds (65%) of French football fans believe the cost of a subscription to DAZN will encourage more illegal streaming of Ligue 1.

“The price was enough to drive the illicit behaviour,” says Pearson. “Once DAZN reduced the price and deployed a load of counter-piracy measures they saw an improvement.”

No AI silver bullet

Experts emphasise that tackling the criminals requires a belt-and-braces and straitjacket approach.

“There’s always something new coming along that will thwart whatever model you’ve already built,” Haynes says. “Pirates try to keep in business by avoiding detection, so we find that some of the things we’ve done in the past don’t always work the second time around.”

Nagra research quotes an operator of a pirate organisation whose candid response was to call it a game of ‘who can outwit the other the quickest’. “Just because pirates are using AI, that doesn’t mean that the industry isn’t using AI,” says Pearson. “So for every benefit the pirate gets, the industry also gets another one.”

There are no signs that cybercriminals are using AI to break the cryptography associated with the digital rights management (DRM) system, yet. Irdeto’s Strydom calls it “an arms race”, but says there does seem to be greater awareness among clients that security is no longer something that can be skimped on.

“For a long time, anti-piracy has been seen as a ‘nice to have’, not an imperative; almost as if operators were asking why they should police the net,” he says. “Now that they can quantify losses due to piracy in cold financial terms, we’re seeing a change in anti-piracy attitudes.

“For years everybody talked about multi-DRM, but that is now just a hygiene factor. Operators are learning they need a lot more than that.”

 


Tuesday, 12 November 2024

Behind the Scenes: Gladiator II

IBC

In a world of green screen and AI, the sets for Gladiator II might be the last great build in movies.  

article here 

In 2000, Gladiator reaped more than US$465 million worldwide, revitalised the historical epic, catapulted Russell Crowe to international stardom, and won five Oscars from 12 nominations, including Best Picture.

“Twenty-five years ago, we made G1 and I know it was special,” director Ridley Scott said at a Bafta preview of the film alongside cast and crew. “It wouldn't go away. I’ve been busy making 17 other movies and along the way I kept being told by different generations, different nationalities ‘I love Gladiator’. They’d seen it online. The great thing about the platforms is they perpetuate all films all the time and they look as good as the day you made it.”

‘Are you not entertained?’ baited Russell Crowe’s Maximus Decimus Meridius to a baying crowd in Gladiator. With the sequel Scott appears to be baiting the cinema audience with action set piece on top of set piece.

Producer Douglas Wick says of ancient Rome, “The audience has seen grand combat many times over and their thirst for more was unquenchable.”

Production designer Arthur Max describes G2 as “Gladiator on steroids.”

In Malta, they assembled the palace, a grand city entry arch adorned with Romulus and Remus motifs and whole blocks of ancient Rome in an area approximately 8 km long. There was even a life size statue of Pedro Pascal, playing a Roman general, on his horse.

“It would be hard to overstate how massive a production Gladiator II was,” says producer Lucy Fisher. “The scope was overwhelming. In Morocco, there were over 80 huge tents dedicated just for the extras’ hair and makeup, and to house countless props and costumes.”

To film the Colosseum the production returned to Fort Ricasoli in Malta, the 17th-century building that had served as the site of the Colosseum set in the original film. The practical build was roughly one third the correct height of the real Colosseum, and somewhere between a quarter and a third of the span.

In a world of green screen and AI, this might be the last great set build in movies. 

Scott disagrees, “I want to build them bigger and bigger! We worked out it was cheaper to build a set than to use blue screen. Each time you add blue, it means money. There would be some element of blue in almost every frame of this film. So, what you see is real and none of it is blue screen.”

He shot the opening scene’s sea battle of Numidia in the middle of the Moroccan desert repurposing the old set from his 2005 film Kingdom of Heaven. “That was very economical,” he says.

“Ridley wanted two 150-foot ships coming toward this wall where a huge battle is taking place,” says Special Effects Supervisor Neil Corbould. “But there was no water there.”

They deployed hydraulic building movers (capable of holding nuclear reactors or tanks) and used them as platforms to steer two full-scale ships over the desert to simulate an invasion by sea.

“I’d seen these [machines] on the internet and had wanted to use them for years,” says Corbould. “This was the perfect job for them.”

ILM added water, sails and the rigging for the boats as well as arrows and fireballs.

“We replaced the clear skies with ominous dark clouds. And then we put in a few birds because the way to Ridley’s heart is always to add some birds to the shot,” says VFX Supervisor Mark Bakowski.

Multicam theatre

Scott, who is known for using up to four cameras at a time, regularly used eight to 12 for this shoot, plus additional drones and crash cams. He proudly claims to have shot the film in 51 days as a result.

“I can capture Paul [Mescal’s] entrance into the Colosseum in two takes as opposed to it taking all day,” explains the director. “You have to know exactly where to place the cameras. I can do that because I’ve storyboarded it all in advance. For even the best camera operator it can be hell. I don’t rehearse with the actors, but I do rehearse with the camera operators, and I dress them in costume on the set because they could end up in a scene.”

Mescal, who plays the hero Lucius, explains what it was like for the actors. “When Lucius arrives into Rome in a caged cart and into the arena, it was shot in a single set-up. Ridley had mapped out half a mile of coverage. All of that was shot before lunch. Which is absolutely absurd.”

He added, “It felt like theatre to me because the cameras are always on.”

Scott likens the technique to directing each scene like a play, with simultaneous action taking place all over the set. “It helps the actor because their performance is not interrupted [by stopping for many set-ups]. I'm going to run the scene and the camera never stops. Even if you're not speaking [he told the actors] you're on. I'll be watching you.”

Denzel Washington, who plays scheming former gladiator Macrinus, agrees, “Everywhere you turn is Rome. It’s 360. That made our jobs easier than looking at markers for visual effects.”

Director of photography John Mathieson BSC, nominated for an Oscar for Gladiator, admitted, “I wanted to tear my hair out some days. Ridley works with a great deal of urgency. He has a lot to get done and this makes the process much faster.”

There was little conversation between the DP and the director while shooting, Mathieson says.

“I don’t do anything fancy. I place the lights and the cameras in the right positions. Some people claim we just mumble and grumble at each other on set but we don’t need to talk about the image. We’ve done this before. I know what he likes, I know what’s expected and I know it must look good.”

Calling the film “vivid, gaudy and a little camp” the DP’s visual cues were taken from the way Victorian painters romanticised neoclassical subjects.

“They painted idealised pictures of what Rome might have been,” Mathieson says. “There were goddesses in diaphanous gowns, beautiful marble stonework, opulent furniture, over-the-top feasts, and flowers. Rome was a bit of a mess by the 19th century, so it was primarily from the artists’ imaginations. These are not intellectual paintings but there is magic there.”

Enter the rhino

The Mill famously landed the UK’s first ever Oscar for VFX for its work on the original. This time around it is ILM in charge, including a gladiator-versus-rhino sequence which Scott had wanted to stage back in 2000, but was too expensive at the time to do with CGI.  Though never filmed, the CG test for the sequence was included on the film’s DVD release while Corbould, who worked on the original, dug into his own archive.

“I found some old storyboards of the rhino fight,” Corbould explains. “When I showed them again to Ridley he said, ‘Let’s do it this time.’”

Building the creature was a joint effort between Corbould and prosthetics designer Conor O’Sullivan. A wrinkled skin made of thick plastic was draped over the frame that became the rhino.

“We made a mechanical rhinoceros that could shake its head, flick its nose up in the air, and move its eyes and ears,” says Corbould. “We could literally drive it around the Colosseum like a go-kart.”

Flooding the Colosseum

In another scene inspired by historical fact, the Colosseum is flooded with water and filled with tiger sharks. Gladiators fight for their lives in a staged naval battle.

“There were two obvious ways we could approach it,” says Corbould. “We could build the Colosseum in a tank or use VFX. The best solution was to do both.”

Many of the larger shots were filmed on dry land, with Bakowski and the ILM team adding water in post. That meant Corbould had to find a way to create the sensation of floating with real boats filled with actors.

They brought back the industrial building movers, using them as a base to maneuver and crash a pair of galleons in any way Scott requested.

“Ridley was sometimes shooting with as many as 12 cameras,” Corbould says. “You want to get something in front of each of the cameras, whether it was boats or explosions or smoke or crashing water.”

The colour and depth of the water provoked debate. “We did many iterations, from the canals of Venice to Ridley’s LA swimming pool,” Corbould says. “The sharks, relatively speaking, went to plan but certainly didn’t make things easier.”  

Sound

More than 500 extras were brought in to play the Romans who crowded the Colosseum, with thousands added digitally.

“We wanted the actors to have as realistic an experience of the arena as possible,” says production sound mixer Stéphane Bucher. “We outfitted the set with huge speakers and assembled a wide variety of crowd noises to create the ambiance of the real games.”

Matthew Collinge and Danny Sheehan, founders of London-based sound studio Phaze UK, supervised sound editing and mixing.

To replicate the sound of 10,000 spectators, they recorded background players on the set, built that into layers, then added recordings of cheers and jeers from real-life bullfights, cricket matches, rugby and baseball games.

“We transformed them into a cohesive roar using a Kyma workstation,” says Sheehan. “Another device helped shape the roar of the crowd making it seem even bigger and louder.”

In the battle sequences, actors were outfitted with two mics concealed in their costumes, positioned to record dialogue no matter which way their heads were turned.

Nonetheless, “it was almost impossible to capture the dialogue audibly, but I take my hat off to Stéphane,” says Paul Massey, re-recording sound mixer. “He worked miracles so that we could minimise any ADR and preserve the original performances.”

Baboons go ape

The gladiators also fight vicious baboons in the arena. It was an idea that stemmed from Scott’s viewing of a zoo documentary.

“I'd seen a documentary about a wildlife park. There was ice cream, a tea shop and into shot come some baboons and some lady goes up to try and pat it. These are carnivores. It will tear your arms off.”

Envisioning a scene in which the gladiators face a troop of baboons, Scott says, “Actors have to have an opponent to get the physicality and movement of the fight. So, I cast the smallest stuntmen and women we could possibly find. They are in black tights with black masks and Nikki (stunt coordinator, Nikki Berwick) made them short crutches that fit under their armpits so they could move on all fours.”

Like the first film, the hero is fighting to restore democracy and honour in opposition to tyranny. Might the film’s release virtually day and date with the US presidential election have timely political resonance?

“Are you kidding?” Scott responds. “A billionaire wants to be the leader of the universe! Evil is evil. A sword will kill you just like an atomic bomb will kill millions. Death is death and where we are together today [as a society] we've really got to rein it in and sort it out.”

 

Friday, 8 November 2024

Where the Wild Things Are

interview and copy written for RED 

article here

Combining blue-chip natural history footage and ‘in the moment’ observational documentary, independent Botswana-based Natural History Film Unit (NHFU) is pioneering a fresh approach to wildlife programming. Operating from a private ‘film camp’ in the breathtaking Okavango Delta, the NHFU sends elite nature cinematographers into the field almost every day of the year to record raw, unfiltered and unique animal behavior which it then makes into cinematic award-winning films and series.

“We’ve kind of reverse-engineered the commissioning process,” says Brad Bestelink, the Emmy nominated filmmaker who co-founded NHFU in 2008 and whose extensive credits include The Flood, Living With Leopards, Okavango: A Flood of Life and many others. “Myself and up to five cinematographers are permanently in the field following all the big cats and animals regardless of whether we are commissioned or not.

“We live with the wildlife and wait for the stories to reveal themselves to us. As soon as we recognize the story, we start drilling down into that particular character or that particular circumstance.”

Bestelink will typically have shot 70 to 80 percent of the content needed to make a film before approaching a broadcaster.

“If I pitch an idea of a story, I join the line of everybody else pitching stories. It's also just a paper treatment and with that comes expectations about what the commissioning producer wants to achieve and that can be a straitjacket when you’re out filming.

“Instead, we go in with material already shot. We've already got a strong sense of what the story is. We've captured a lot of the key behavior. That gives the broadcaster a much clearer idea of the look and feel of what the film can be. There's a lot less risk involved for the broadcaster in making a decision to commission because they're more secure in what they're getting. The flip side is there's a lot more commitment and risk on our side, initially at least, but that's long been my approach to the majority of the films that we've made.”

NHFU is currently filming a second series of Big Cats 24/7, a six-hour documentary for BBC Studios Natural History Unit co-produced by PBS that follows the dramatic lives of lions, leopards, and cheetahs in the Delta. Bestelink’s team are following individual big cats around the clock, capturing their behaviour day and night.

“All of my camera operators are committed to working in the field. It’s their lifestyle and their passion. They will spend probably 275 days a year behind a camera in the field, following these predators all the time and gaining a deep understanding of their dynamics. Because we’ve invested so much time with them, we’ve already established biographies for many of the cats. The BBC is coming into something that is active and running instead of going into an area and trying to hire guides.”

He stresses, “We know the individual characters, we know their territories, we know the terrain. There's a lot of experience and depth to our knowledge of these cats which just makes producing films a lot easier.”

Over the years, NHFU has amassed a 2 Petabyte archive of material that the company can exclusively draw on when producing new films.

“It's an enormous library that no one other than us has access to and because it’s all original material that we’ve been building since 2010 it has huge value,” he says.

A key reason for that is that Bestelink had the foresight to record virtually everything on RED. “It's the codec that is so exciting for me. We’ve invested in every iteration of RED camera but it’s the consistency and excellence of the codec that means everything we capture will have a very long lifespan. Our media doesn’t age.”

In 2010 Bestelink shot his first independent film on a popular professional camera, but when it came to delivery later that year the commissioner had moved on to requesting different formats.

“I thought, if I'm investing my life into making these films, I need to make sure that the format is going to be sustainable over time. At that point RED was not really utilized in Natural History but I had a friend working in commercials with an EPIC. On his invitation I went over to Australia for a month and tested the camera out. After that, I put my order in for one of the first RED cameras and have not looked back.”

Today, NHFU has one of the biggest fleets of RED cameras on the continent. “The compact ergonomics and the ease of the workflow are fantastic but more than anything being able to record at 6K and beyond has future proofed the media.”

His camera team go out solo into the bush and spend three to four days there filming wildlife before returning to film camp.

“A single person in the field is more wide awake, much more aware and much more in tune with the bush,” Bestelink explains. “As soon as you put two people in a jeep they will talk to each other and then that becomes their world, whereas a person on their own means it’s entirely up to them. They are listening and looking outwards all the time.”

Their camera kit consists of a HELIUM, WEAPON or V-RAPTOR and either a Fujinon Cabrio 25-300mm or a Canon 50-1000mm, which Bestelink calls “the ultimate wildlife lens”.

They also carry portable drives onto which they download the 4-6TB of media they are likely to generate on each field trip.

Back in camp, their first job is to hand the drives to a colleague who ensures everything is backed up, with the masters stored onto LTO tape.

“If an operator comes across an incident in the field where there's a lot of action they'll radio in and one of the other cameramen will join them. Often, we'll have two or three photographers on one sequence, all cross-shooting on RED and in the same format. That’s quite an efficient way of working.”

The cinematographers rotate in shifts and are now able to shoot around midnight using military grade thermal imaging cameras. “You don't need any lights whatsoever and you can get great images without disturbing animal behavior,” he says.

The basic kit is complemented by a variety of specialist film equipment, including Phantom 4K FLEX, Shotover F1 Gimbal, DJI drones and even underwater housings and a submersible remotely operated vehicle.

The NHFU’s bespoke ‘film camp’ deep in the Okavango Delta houses a complete postproduction infrastructure with offices, suites and equipment for media management and processing. Edit teams prep proxies, tag and select media.

“I’ve brought several projects to a rough cut in the field right here,” Bestelink says. “It’s a one-stop shop. We've got multiple cameras with accessories and spares to prepare and repair them. We’ve got a full complement of editing software and we run eight customized filming vehicles out of this area.”

“There is safari tourism that is permitted in the area, but NHFU has exclusive rights for filming in this private area. We support photographic tourism, which pays to keep the Delta as wild as it is, and we work closely with operators to maintain this precedent. We don't facilitate crews or operate as an agent for third party productions. Any production that we're working on, like Big Cats 24/7, is a partnership between us and the broadcaster so we’re very much entrenched in the production.”

Bestelink, who has lived in the Delta since he was four days old, also operates camera and spends almost as much time in the field as his camera team.

“I balance that with being at film camp producing and with my family. You know, I'm not a young cameraman whose heart is solely in the bush but I do live here with my family out in the middle of the bush.”

His work is increasingly focused on projects that have a conservation and environmental message. To do that, he also shows on-screen the experiences and relationships his cinematographers have with the cats as their stories unfold.

“Natural History filmmaking has experienced a boom but there’s some audience fatigue setting in because of the number of shows with the same glossy, high-end presentation.

“Incorporating people into the stories is a way to make it more accessible. The primary focus remains on the wildlife but the cinematographers are our primary storytellers. It’s through their relationships with the big cats that we learn so much more about them.”

He says, “We have to make people care about animals and the big cats in particular. If people don't emotionally connect with individual characters they're not going to develop an interest and passion for the wellbeing and future of these species in the wild.”

Visible from space, the Okavango is the world’s largest inland delta. A combination of marshland and seasonal flood plains, it is rich in biodiversity and is often described as one of Africa’s last wildernesses. Yet the combination of population pressure and climate change is putting the whole biome at risk.

“The Okavango lives and dies by its annual flood and the amount of water that flows into the Delta. I just hope that we can protect it for long enough for the wet cycle to return. I am very concerned about its future.”

 


Thursday, 7 November 2024

TAMS: fulfilling the promise of IP interoperability

IBC

article here

The transition to IP using SMPTE 2110 has been broadly successful in a studio environment but interoperability in the live and near-live domain still has some way to go. A recent innovation from the BBC could provide the answer.

The Time-Addressable Media Store (TAMS) API developed by BBC R&D is a new way of working with content in the cloud. It’s an open specification that fuses object storage, segmented media and time-based indexing, expressed via a simple HTTP API. It is intended to lay the foundations for a multi-vendor ecosystem of tools and algorithms operating concurrently on shared content all via a common interface. In effect, blending the best of live and file-based working.


The open-source API specification was launched to the industry at IBC2023 which is where AWS sourced it as the basis for a proof-of-concept Cloud-Native Agile Production (CNAP) workflow, demonstrated at IBC2024.

AWS was particularly interested in the potential of TAMS to streamline the process of fast-turnaround editing in the cloud in an open, modular way.

“The most important outcome of all of this is interoperability,” says Robert Wadge, Lead Research Engineer at BBC R&D. “That’s really what we’re driving at. TAMS enables sharing between systems and sharing between workflows across and between organisations. The aim is to give the media industry a way into near-live fast turnaround cloud production that doesn’t require them to buy into a single vendor’s vertically integrated solution.”

The BBC and AWS approaches are part of a wave of similar software-defined architectures coming to market. TAMS dovetails with the EBU’s Dynamic Media Facility; systems integrator Qvest is proposing to build video streaming platforms using what it calls ‘Composable OTT’.

What is TAMS?

Work leading up to TAMS stems at least as far back as the IP Studio project showcased as a live ‘IP-end-to-end’ outside broadcast at the 2014 Commonwealth Games in Glasgow.

The initial goal with TAMS was to bring the worlds of live and post-production closer together. Wadge explains: “Until very recently those two worlds have been quite disparate because everything was locked into hardware devices and bespoke systems. It’s almost like you record video onto a bunch of files then you bring it into your post-production and half the referencing gets lost on the way. The move to software means that for the first time we had the opportunity to do things differently and make media addressability reliable and consistent.”

The shift to software certainly promises flexibility benefits, but it’s not enough on its own to solve the problems of scalability and interoperability. Simply replacing signal processing with software won’t move the dial beyond the limitations of workflows designed originally for coax cables and tapes.

Wadge continues: “We wanted to move beyond the ‘lift and shift’ of taking a bunch of black box fixed function devices in racks and putting them in a data centre. Instead, we designed TAMS to be cloud native. With that comes a new philosophy about the way you write and deploy software. You can take a more modular microservices approach to media workflows and the infrastructure that supports that. Crucially, it enables us to architect horizontal capabilities that can be shared among a variety of people rather than having a very specific integration for each workflow in order for people to access and move media around.”

Vendors that may have been reluctant to cede a competitive edge by opening their systems up before are apparently changing their tune. It helps that AWS has backed the project and brought in partners CuttingRoom, Drastic Technologies, Adobe, Vizrt, and Techex for the CNAP demo at IBC. Sky also participated. It’s worth noting that TAMS is cloud vendor agnostic.

“With this project we’ve seen a different approach to vendor collaboration,” Wadge says. “A lot of vendors we’ve spoken to are facing a situation where they have to do a lot of bespoke integrations themselves on behalf of their end users.

“For example, there are a whole variety of different media asset management systems (MAMs) which any tool vendor in this space is under pressure to integrate with. The interoperability interface that TAMS offers gives vendors an opportunity to integrate at a [foundational] level which means that they do one integration and everybody wins. In that scenario, people are starting to see that it will save a lot of time and effort on integration that could be spent on adding features and innovating their own products.”

Breaking video down into smaller chunks is not new. It’s pretty much ubiquitous in streaming for distribution. HLS and MPEG-DASH are both based on the concept but these are optimised for linear playout. TAMS effectively takes those short-duration segments, stores them in HTTP-accessible object storage, and applies a time-based index over the top. This creates an immutable database from which any piece of the media can be accessed via the API.
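
To make the idea concrete, the sketch below shows how a client might pull a time range of media out of a TAMS-style store: ask the API which segments cover the range, then fetch each segment object over HTTP. The base URL, endpoint paths, field names and timerange syntax here are assumptions made for illustration, not the published specification.

```python
import requests

# Hypothetical TAMS-style store; the URL, paths and field names are illustrative.
STORE = "https://tams.example.com"

def list_segments(flow_id: str, timerange: str) -> list[dict]:
    """Ask the store which stored segments cover the requested time range."""
    resp = requests.get(
        f"{STORE}/flows/{flow_id}/segments",
        params={"timerange": timerange},  # assumed query parameter
    )
    resp.raise_for_status()
    return resp.json()

def fetch_segment(segment: dict) -> bytes:
    """Each segment record points at an object in HTTP-accessible storage."""
    resp = requests.get(segment["get_url"])  # assumed field name
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    # e.g. 30 seconds of media starting two minutes into the flow's timeline
    for seg in list_segments("example-flow-id", "[120:0_150:0)"):
        media = fetch_segment(seg)
        print(seg.get("timerange"), len(media), "bytes")
```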

Although its prime application is to smooth inefficiencies for producing near-live sports and news content, there’s no reason why TAMS can’t be used further downstream, Wadge says.

The ‘store once, use many’ approach to repurposing media means simple edits can be expressed as a metadata ‘publish’ rather than a new asset or exported file. This strategy reduces duplication, processing time, and the volume of storage required for the same workload. Basic operations like time-shifting, clipping, or simple assembly can be described purely in terms of timelines and achieved without knowing the media type or format.
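
As an illustration of that ‘publish as metadata’ idea, an edit can be thought of as a small document that references spans of existing flows by time, with no media copied or re-rendered. The field names below are invented for the sketch rather than taken from the specification.

```python
import json

# A simple assembly expressed purely as timeline references into an existing
# flow. Publishing it means writing this small document to the store (for
# example via an HTTP POST); the underlying media segments are never copied.
edit_decision = {
    "label": "match-highlight-v1",
    "segments": [
        {"source_flow": "camera-1-flow-id", "timerange": "[605:0_612:0)"},
        {"source_flow": "camera-1-flow-id", "timerange": "[730:0_741:0)"},
    ],
}

print(json.dumps(edit_decision, indent=2))
```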

Nor does TAMS place any constraints on the media format. Indeed, BBC R&D has experimented with storing uncompressed video. Most users, however, will want less data-heavy workflows, especially in remote production scenarios which require media to be streamed.

“The idea is to abstract everything to a timeline and that’s the key principle behind interoperability,” Wadge says. “One benefit that flows from that is that TAMS will work with any media type today or media types that might arise tomorrow.”

Next steps

The IBC demo was reportedly a success, attracting interest in the technology from vendors and end users globally.

“We’ve taken a lot of feedback from AWS and the partners who’ve been involved in CNAP to refine the specification and we expect to continue doing that with a much broader range of vendors. What we really want to happen is for people to pick up the TAMS API and to build products based on that.”

He says the BBC is looking to use TAMS internally, specifically for fast turnaround news workflows and for extraction of VOD assets from live streams.

“Beyond that, TAMS really starts to come into its own when it’s used to share media by reference more widely across the supply chain. For instance, you can store your media once in a serverless repository which is accessible by everyone who needs to access it and then people can just go and get all or a portion of it to work with. They could transform it and then write that transformation back into the store to be shared with others. That sharing function is extremely valuable. It starts to break down the silos between a lot of the different functional blocks on the supply chain.”
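
A rough sketch of that read-transform-write-back loop, under the same illustrative assumptions as the earlier example (invented URLs, paths and field names): a downstream tool pulls segments from the shared store, processes them, and registers the result as a new flow that others can pick up by reference.

```python
import requests

STORE = "https://tams.example.com"  # illustrative shared store

def read_segments(flow_id: str, timerange: str) -> list[dict]:
    resp = requests.get(f"{STORE}/flows/{flow_id}/segments",
                        params={"timerange": timerange})
    resp.raise_for_status()
    return resp.json()

def transform(media: bytes) -> bytes:
    # Placeholder for real work, e.g. a proxy transcode or graphics burn-in.
    return media

def write_back(flow_id: str, timerange: str, payload: bytes) -> None:
    # Hypothetical write path: register the transformed segment against a new
    # flow so others can reference it from the same store.
    resp = requests.post(f"{STORE}/flows/{flow_id}/segments",
                         params={"timerange": timerange}, data=payload)
    resp.raise_for_status()

for seg in read_segments("source-flow-id", "[0:0_60:0)"):
    media = requests.get(seg["get_url"]).content  # assumed field name
    write_back("proxy-flow-id", seg["timerange"], transform(media))
```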

The identity and timing model that underpins TAMS aligns well with SMPTE 2110 and NMOS, as well as with file-based delivery formats such as MXF and IMF used for interchanging finished assets between organisations.

“There are common principles that map very nicely between these different areas which we’d like to build on. We think that the real value here is to have that timing and identity flow throughout the supply chain. Then it becomes a foundation which we can use for richer discovery of media and management of media. That’s a big focus.”

TAMS also dovetails with project work undertaken by the BBC with the EBU.

Like BBC R&D’s foundational contributions to SMPTE ST 2110, the JT-NM Reference Architecture and the NMOS family of specifications, this is another project which could only have come out of a body that does not have a vested commercial interest. The BBC will benefit from the work just like any other media organisation if TAMS enables it to integrate best-of-breed solutions from different vendors to build better supply chains.

“We want to build the BBC’s technology estate in a more modern way, one that’s not limited by the interoperability issues that we would have otherwise,” Wadge says. “We’ve removed the barriers to adoption by making TAMS an open and freely available spec with no license fees. It means that there’s very little friction for vendors to come on board. So we’re really excited to see what people build with it and hopefully it can help them innovate rather than having to focus on reinventing the basics.”

Overlap with EBU Dynamic Media Facility

The EBU Dynamic Media Facility (DMF) initiative is focused on design patterns for systems that integrate software-based Media Functions, proposing a layered model and recommending the use of containers for deployment on a common host platform.

In the reference architecture, published just before IBC, Media Functions are interconnected using the Media Exchange Layer, forming chains or pipelines that can be instantiated and torn down dynamically as needed, on a common infrastructure platform. The Media Exchange Layer “provides high-performance transport of uncompressed or compressed media payloads between software Media Functions running on containers on the same compute node, or on different compute nodes in a compute cluster.” Wadge comments that this is a low-latency transfer between running processing functions and a clear point where interoperable approaches will be needed.

TAMS, on the other hand, focuses on how media can be stored in short-duration segments in an object store such as AWS S3 and accessed by ID and time index via an HTTP API. This can be used to share media between tools and systems with a fast turnaround from the live edge of an ingesting stream.

“The two projects are complementary, and there are common threads in the different domains that we’re interested in drawing together,” he says.