Thursday, 27 June 2024

Sports Production and 5G Make a Very Cute Couple

 NAB

This summer’s sports broadcasting schedule introduces new possibilities for 5G, a technology that promises to revolutionize production and even the whole broadcast infrastructure.

article here

While tests, proofs of concept and small-scale activations (such as at fan zones) still dominate, there are a few examples of 5G being used at scale in broadcast production.

Among other benefits, 5G is claimed to improve efficiency for live broadcasters by removing the need for large onsite crews and tethered equipment, both at arena venues and in remote locations.

According to the GSMA, this approach can help broadcasters lower their production costs by as much as 90%, and could ultimately save the global media industry billions of dollars each year.

The most high-profile current uses of 5G are around the Olympics and the UEFA European soccer championship hosted in Germany.

The Olympic Opening Ceremony along the Seine in Paris will be covered at least in part by cameras connected to bonded cellular links provided by telco Orange.

Details are being kept under wraps to manage security in what will be a packed and wide area. Meanwhile, the Olympic torch is being relayed around France in segments live streamed via a combination of 5G and Elon Musk’s Starlink satellite network.

France Télévisions is hosting the production of 10 hours of live coverage per day, tracking the torch’s 1625 km journey, traversing various geographical locations with differing levels of network coverage.

The broadcaster’s CTO Frédéric Brochard is on message: “This innovation gives editorial teams the ability to produce more content with greater adaptability and responsiveness, while controlling costs and maintaining the quality standards dear to France Télévisions.”

On the ground, the production makes use of a private 5G bubble “Dome” for content capture and distribution from French start-up Obvios. The Dome device is small enough to be carried and deployed in the trunk of a car.

France Télévisions broadcast engineer Amy Rajaonson explained that they had created a “private 5G bubble” with latency between 50 and 90 milliseconds and a bitrate of around 500 Mbps.

She said, “In this setup there’s a car following the [torch relay] runners with two antennas on the car roof. One is for the private 5G and the other for Starlink. They bring the camera streams into the cloud hosted by AWS.”

The system used cellular bonding and cloud technology to combine different network connections, dynamically selecting the best one in real-time to ensure optimal signal quality.
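To make the bonding idea concrete, here is a minimal, hypothetical sketch in Python of the kind of logic such a system applies; the link names, measurements and thresholds are illustrative assumptions, not the actual Orange or TVU implementation:

# Hypothetical sketch of dynamic link selection in a bonded cellular/satellite setup.
# Link names, measurements and thresholds are illustrative, not a vendor's real values.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    throughput_mbps: float   # measured uplink capacity
    rtt_ms: float            # measured round-trip time
    loss_pct: float          # measured packet loss

def score(link: Link) -> float:
    """Higher is better: favour throughput, penalise latency and loss."""
    if link.loss_pct > 10:   # treat a very lossy path as unusable
        return 0.0
    return link.throughput_mbps / (1 + link.rtt_ms / 100) * (1 - link.loss_pct / 100)

def split_traffic(links: list[Link]) -> dict[str, float]:
    """Proportion of the video stream to send over each link."""
    scores = {l.name: score(l) for l in links}
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

links = [
    Link("private 5G", throughput_mbps=450, rtt_ms=60, loss_pct=0.5),
    Link("Starlink", throughput_mbps=40, rtt_ms=45, loss_pct=1.0),
    Link("public 4G", throughput_mbps=15, rtt_ms=80, loss_pct=3.0),
]
print(split_traffic(links))  # most of the stream rides the healthiest path

In a real bonding system the measurements are refreshed continuously, so the split adapts from moment to moment as coverage changes along the route.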

Romuald Rat, director of technological innovation and AI at France Télévisions, said, “If we had not produced it like this, we would not have had the capacity to deploy a traditional setup. This kind of setup gives us more opportunities to produce content.”

At the Euros, Deutsche Telekom successfully debuted 5G coverage from fan zones. It set up a 5G campus network for RTL Deutschland, providing customized private facilities which enabled various live broadcasts during the tournament’s opening game. Live 5G reports directly from the fan zone were made possible by its proximity to the network.

Signals were received directly via an integrated SIM card for transmission to the studio with no additional mobile solution needed. The network provided the basis for several parallel data streams with very high bandwidth, the company added.

In addition, an RTL team equipped with two mobile 5G cameras has been working in Cologne, Germany, testing different functions via 5G. These apparently successful tests included ultra-low-latency live video, and an audio intercom for direct communication between the control room, the camera operator and the interviewer. Return video, showing the camera operator or interviewer what is being broadcast in real time, was also tested, as was remote control of the camera from a central control room.

Ericsson is also out in Germany at the Euros using 5G to “transform fan experiences.” With Deutsche Telekom, Ericsson supports RTL’s private 5G campus network in Cologne, which spans 35,000 square meters around RTL Deutschland’s production studios. From there, cameras transmit interviews with fans in HD.

O2 Telefónica, in partnership with Ericsson, rolled out 5G coverage across the ten host cities in Germany so that fans with 5G devices and network subscriptions could enjoy better-than-average mobile connectivity.

At around the same time the Swedish company was involved in a “landmark” trial that marked “a transformative leap in the production and consumption of live sports entertainment in Denmark.”

This was the trial of a live broadcast of a football match in Copenhagen transmitted over telco 3’s 5G standalone network (which is supplied by Ericsson). The proof of concept tested four 5G broadcast cameras and one drone camera, each demanding 35Mbps and 100% uplink time from the network.

“However, enough capacity was provisioned in the stadium to support full arena TV production in the future, supporting many times more 5G cameras to capture the action from a large variety of camera angles,” the partners said in a statement.
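Taken at face value, the stated per-camera demand makes the provisioning point easy to quantify: five cameras each needing 35 Mbps of continuous uplink amounts to roughly 5 × 35 Mbps = 175 Mbps of sustained uplink for the trial, so “many times more” cameras for full arena production implies several times that capacity. (This is a simple extrapolation from the published figures, not a number given by the trial partners.)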

Morten Brandstrup, head of News Technology at TV 2, said, “There are many advantages to producing with 5G. Photographers become much more flexible and mobile when everything is done wirelessly. Setting up a camera for sports matches is faster when you don’t have to pull hundreds of meters of cable, and we can use the 5G network that is already there, but now with a completely different security and stability in live production.”

Sony’s 5G-enabled broadcast camera technology was also showcased during the trial. Claus Pfeifer, head of Connected Content Acquisition at Sony Professional Europe, said: “5G-enabled cameras [are] now capable of transmitting high-resolution images wirelessly in real-time. This significant advancement paves the way for new possibilities in live broadcasting, offering viewers more captivating and interactive live content through 5G-powered transmissions.”

In the US, this month T-Mobile continued its 5G sports activations, this time in partnership with the PGA of America at the KPMG Women’s PGA Championship. Just like at the Men’s PGA Championship, T-Mobile’s 5G portable private network enabled NBC Sports to add more 5G cameras to more holes for coverage on NBC, Peacock and the GOLF Channel.

5G links also carried real-time player data and tracking, additional broadcast views, and a “PuttView” augmented reality experience for VIP guests.

Callie Field, president of the T-Mobile Business Group, said, “The ability to bring women’s golf fans more content like real-time scores and shot data is a game-changer.”

T-Mobile is promoting the operational efficiencies for sports and broadcasters of using 5G in production.

The telecom company says that the data-intensive process to bring fans real-time scoring, shot data and other insights “is typically expensive and requires a lot of labor. But 5G is changing that.”

Ahead of the Championship, its team captured 47GB of data to create a digital twin of the golf course. It said 5G reduced the typical data transfer time by nearly 75%, “making it easier than ever to bring fans this unmatched experience.”

In the past, broadcasters had a hard time bringing more cameras onto the course. Hard-wired cameras needed miles of cable, and bandwidth for wireless cameras was limited and didn’t support 4K.

T-Mobile says its 5G private network, paired with portable data transmitters and high-efficiency coding devices, means broadcasters like NBC can bring more cameras onto the course and capture more live content in 4K at 60 frames per second. That’s “thanks to superfast data speeds and glass-to-glass latency that averages under 100 milliseconds.”

Meanwhile, Verizon continues to help the NHL deliver coverage from League matches into the AWS cloud for remote centralized production, a workflow it demonstrated at NAB Show in April.

Running on AWS Wavelength Zones, a mobile edge compute service that delivers ultra-low-latency applications for 5G devices, this solution is dubbed “a game-changer” for broadcast, with the speed from content capture on ice to broadcast going from seconds to milliseconds.

“Live cloud production empowers the NHL to produce high-quality content from virtually anywhere and at practically any scale, using the optimum combination of resources,” said Julie Souza, global head of sports at AWS. “The speed of Verizon 5G unlocks new opportunities for the continued adoption and deployment of live cloud production across sports and beyond.”

Paris 2024 Olympic Flame Relay Ignites an Innovative Production Workflow

 NAB

The Paris 2024 Olympic flame relay broadcast, executed by France Télévisions and TVU Networks, is a pioneering achievement, reducing CO2 emissions by 300 tons and slashing the budget by 70% compared to traditional live production methods.

article here

In what is considered to be a first in broadcast history, the workflow used a cloud-based private 5G network to stream 10 hours of live coverage a day over 80 days, beginning May 8.

“Broadcasting live the journey of the flame is a unique offering, rightly aligned with France Télévisions’ ambitious editorial stance for covering the Paris 2024 Olympic and Paralympic Games,” Laurent-Eric Le Lay, director of Sports at France Télévisions, states.

“This innovation gives editorial teams the ability to produce more content with greater adaptability and responsiveness, while controlling costs and maintaining the quality standards,” adds Frédéric Brochard, CTO and CIO at the broadcaster.

This approach, innovative for even simple live productions, is said to be groundbreaking as it applies to every facet of production for an event of such scale and complexity, including live feeds, multi-camera switching, graphics, commentaries, audio mixing, intercom and playout.

In a recent webinar hosted by TVU Networks, the participants explained the planning and workflow for the relay, which is still traveling its 1625 km journey around France.

For a start, using a private 5G network was essential for live broadcasting in areas with poor network coverage. Developed by the broadcaster with TDF and Nokia, this private network operated on a dedicated frequency to avoid interference and delivered a low latency (approximately 50-90ms) and high bandwidth (around 500 Mbps).

Starlink antennas, typically used on boats for their ability to track multiple satellites, were used to ensure continuous connectivity. This setup included a mobile unit with antennas mounted on vehicles following the flame.

Cloud Production

AWS provided the backbone for cloud production, enabling real-time processing and distribution of live feeds. This allowed the production team to handle up to 10 hours of live broadcast per day with eight cameras and multiple relay points.

Cloud-based tools, notably TVU Producer, enabled management of the entire broadcast remotely, including video mixing, real-time comms, and adding in graphical overlays.

“TVU Producer and our synchronization tools allowed us to manage live broadcasts seamlessly, ensuring high-quality video and audio integration,” noted TVU cloud engineer and sysadmin Cristian Prieto.

A Remote Commentator app allowed commentators to provide real-time commentary from remote locations with minimal latency.

Eight cameras were deployed, with signals transmitted through antennas on vehicles. These cameras included both mobile units and stationary setups at key relay points. The mobile units were equipped with TVU One transmitters, ensuring reliable live streaming.

France Télévisions broadcast engineer Amy Rajaonson explained: “The compact and portable setup was crucial for following the relay across different regions. Our equipment was designed to be self-sufficient, ensuring continuous operation throughout the day.”

TVU’s system dynamically selected the best network for transmission, using both private 5G and public 4G/5G networks to ensure optimal performance.

The result was high viewer engagement, with 1.2 million unique viewers and 80 million video plays in the first two weeks.

By minimizing the use of helicopters and other traditional broadcasting equipment, the project cut nearly 300 tons of CO2, reducing production emissions twelvefold.

“It’s about working together, challenging what we accept as possible,” said Paul Shen, CEO of TVU Networks. “This isn’t just a milestone; it’s a wake-up call to the industry. The future of broadcast is here, and it’s accessible, sustainable, and opens up endless possibilities for bringing more content to fans everywhere.”

 


Behind The Scenes: Glastonbury 2024

IBC

It’s the time of year when Britain goes midsummer mad for the festival of live music in a field.

article here

With more than 40 hours of TV plus 85 hours of live radio in addition to live streams from the five biggest festival stages, Glastonbury 2023 delivered record coverage live from Worthy Farm, figures likely to be matched if not exceeded by the time US R&B star SZA closes the festival next Sunday.

In addition to network coverage on the four main linear channels and on iPlayer there is a second iPlayer channel this year. It will serve up 30 hours alone and will act as more of a catch-up service. That could be important this year given the clash with the Euros which means programme schedules will be calculated at the last minute.

“What worked really well in 2023 was iPlayer. It was only the second year of iPlayer, the numbers were good and it gave everybody a better understanding of what we could do on iPlayer,” says Alison Howe, Executive Producer, BBC Studios.

Content from Worthy Farm was streamed over 50 million times across BBC iPlayer and BBC Sounds - up 47% on 2022. On BBC iPlayer, viewers streamed sets and Glastonbury programming a record 47.5 million times, up 49% on 2022. 

A highest-ever audience of 21.6 million also tuned in to some of the coverage on TV, bringing the total audience up 7% on last year across linear television.

“What also worked well was being able to use a drone for the first time on an artist’s set.”

This was timed for Elton John’s arrival on the Pyramid Stage and required all the stars to be aligned including artist approval, clear space for launch and take off, health and safety greenlight and good weather.

“The technology for capturing live music is evolving, especially with cameras, but we’re always mindful of not introducing new angles or visual tics that distract from the audience’s enjoyment of the artist.”

Howe works with musicians and their management year round for various BBC shows and says the collaboration with them is essential to making Glastonbury coverage work.

“They trust our teams to deliver the performance and often work directly with artists or their representatives on sound and the visual side.”

It is one reason why Howe and the rest of the BBC team maintain a strong presence on-site.

“Connectivity at Glastonbury is challenging at the best of times and you cannot rely on phone signals or WhatsApp so often the best way of checking something with someone is to go find them. It is hard to do that if you are offsite. The artists expect you to be there to sort things out.”

Technical provision follows broadly the same template as last year, including at least 64 cameras. “There’s been a few tweaks in the various technical compounds and as a result we’ve had to do a little bit of replanning,” says Gareth Wildman, Head of OB, Timeline TV. “It’s a big site and the way that we connect things together uses quite a lot of fibre connectivity and juggling things around.

“Although it seems like a really simple thing, if a truck parks on a different side of the compound it makes quite a big difference to the technical planning in terms of cable runs.

“It’s been really obvious whilst doing the planning for this year’s Festival that it’s not like any other OB. It’s almost a temporary installation of an IBC because there’s so much going on.

“We're collaborating with BBC Radio, BBC Technology and other OB teams and the whole machine needs to work together to make these hugely popular music programs. It is really heartwarming that there's still that much collaboration between the different stakeholders but also it’s eye-opening for how meshed everything is with everything. Every cog in this machine is vitally important.”

Timeline provides satellite uplinking as a broadcast back-up in case there are any issues with the IP connectivity. It provides the radio cameras and RF links used for all BBC presentation hits across the festival site. It also supplies the fibre interlinking the stages and the presentation areas, which include a BBC studio up at The Park and another on the hill by Worthy Farm. Timeline also operates three large scanners where BBC programmes are mixed on site: one is designated for BBC One and BBC Two, another for BBC Three and BBC Four, and a third for iPlayer.

The BBC covers five stages live: Pyramid, Other Stage, Park Stage, West Holts and Woodsies, with camera and engineering coverage provided by Cloud Bass and Vivid Broadcast. All audio for broadcast and radio is provided by BBC Radio. Audio and vision feeds from each of the stages are fed back to Timeline’s trucks for programme production.

Facilities for a sizeable on-site editing operation for non-live content, including Avid suites and EVS systems, are managed by Origin Broadcast.

This year, Timeline is also providing off-site facilities for the second iPlayer channel at its Broadcast Centre in Ealing. An advantage of doing this off-site is that the BBC keeps its crew numbers on-site down.

“Those people working on it get to sleep in their own beds every night which is a big plus for some,” says Wildman.

There's been a permanent fibre network on the festival site for some years and it's been through a few iterations.

“Obviously, being on a farm it is quite susceptible to being damaged between festivals,” Wildman says. “Over the years we've been burying the fibre deeper and deeper and getting better at routing it.”
The network is used to route TV signals, all audio and news signals, security cameras and internet for the festival itself and for merchants’ payment machines.

“We've just finished going through this year's checks before we arrive on site on 25 June. It’s all looking pretty good. On top of that we lay in 50km of fibre for the last mile of connectivity running between stages and the connection hubs to the presentation and uplink vehicles.”

Fibre circuits are more cost effective than satellite, and it would be technically feasible to remote all of the feeds back to a central hub. That they don’t is partly a matter of reducing the risk of relying on a single contribution path, and partly because being on site helps the smooth running of the event.

All coverage of the Pyramid Stage, which hosts headliners Dua Lipa, Coldplay and SZA, is UHD HDR to feed the dedicated UHD channel on iPlayer. All other coverage is HD SDR.

Howe doesn’t rule out expanding the higher bitrate format in future. “We have to balance our ambition and budget. It’s about investment and being mindful of the budget available to us and what is the best use of our budget.”

Howe’s work on 2025’s Glasto will actually begin during the live weekend. “The only time you can really think about new camera positions for example is when the festival is at its busiest because if you come back a week later, it’s just a field.”

“What makes it so successful and so enjoyable is that a lot of different teams come together with the single goal of making it work and whether something is weather related or artist related or technical things only get resolved because of the knowledge and calmness and humour of everyone involved.”

She is particularly excited about Cyndi Lauper and Shania Twain, but is also eager to see which stars of the future Glastonbury 2024 might uncover. She picks out New Zealand-Australian Jordan Rakei, playing West Holts, as one to watch.

 

 

Wednesday, 26 June 2024

Kinds of Kindness: Yorgos Lanthimos Made the (Beautiful) Feel-Bad Movie of the Year

NAB

Director Yorgos Lanthimos says he is not trying to provoke with his new film Kinds of Kindness. On the contrary, he and script writer Efthimis Filippou say their aim is the exact opposite:

“If we come up with things that feel too provocative out of context, an idea that goes a certain way that’s not consistent with the rest of it, we might do away with it,” Lanthimos tells Ryan Lattanzio at IndieWire. “We don’t write thinking about the effect it has on our audience.

article here

“We just see how it feels to us, and what’s instinctive on our side, and what feels right and what we both feel comfortable. We don’t know how people are going to react.”

The director, who is lauded for recent movies The Favourite and Poor Things, is one of those artists who doesn’t necessarily like to tell an audience what to think or what his movie is about, preferring that they draw their own conclusions or come to the story with their own emotional response.

Kinds of Kindness is a trio of stories with a total run time of more than 160 minutes, each featuring some of the same actors (and a troupe familiar from his other work) including Emma Stone, Willem Dafoe and Jesse Plemons.

The consistent themes of physical or psychological violence and body modification have had critics trying to divine a through line. Some see a political allegory about ceding control and becoming blindly loyal. Others see more of a psychological tale of co-dependency.

Lanthimos isn’t giving anything away. Perhaps, as seems likely, he is less interested in directing the audience one way or another and more interested in trusting an instinctive and organic approach to creation that encompasses collaboration with the likes of Stone, DP Robbie Ryan, editor Yorgos Mavropsaridis and composer Jerskin Fendrix.

“We don’t work in an analytical way, so we don’t know what the theme is,” Lanthimos told Lattanzio.

“It is just a creative process that is not analytical. It’s not like, OK, the theme of faith or the theme of control, or whatever. It never starts like that. And I think even by the time we finish, we don’t even think about that.”

The 50-year-old filmmaker began to put together what eventually became Kinds of Kindness after making The Killing of a Sacred Deer in 2017. His starting point was Caligula, a play by Albert Camus, which he had just read.

“I started thinking about how much power one person can have over other people, and what would that mean in our contemporary world and what it would mean also on a more personal level,” he told FilmWeek’s Larry Mantle in a podcast. “That’s how the inspiration for the first story came about.”

Originally, there were 10 stories that they pared down to three. Lanthimos and Filippou considered making the stories run parallel and interconnect, as in Robert Altman’s Short Cuts.

 

“Kinds of Kindness.” Cr: Yorgos Lanthimos/Searchlight Pictures

“You would follow the stories in parallel, but then this idea stuck in my head that I wanted the same actors to play different roles in each story,” the director explained to Lattanzio.

 

“Parallel stories would’ve been very confusing, so we decided to separate the three stories so that it was clear that the characters changed, but the actors were the same.”

He elaborated in an interview at Cannes with IndieWire’s Chris O’Falt: “When we separated them we felt that they became even stronger as entities one after the other. It [wasn’t] just to do with the themes and the story itself, but it’s also like a tone or duration thing. It was more like how you compose music. It just felt like that’s how the stories should be, in that kind of order.”

Lanthimos says that he himself prefers films that treat him respectfully, “in a way like I have my own ideas and experiences. So I can apply all that to what it is that I’m watching and experiencing. It’s the same with music or [any other art].”

He suggests that trying to cater for a particular audience or subset of an audience is futile. “You can’t cover every human mind that exists. So the balance that you strike needs to feel right, according to your own understanding of the world.”

The director makes a similar argument with Mantle, that he tries to construct his films in a way that allows for that kind of thought process by the viewer.

“We didn’t have a very clear, distinct through line. We just very instinctively felt that the stories kind of belong together. That they’re not so heavy handed in telling you exactly how you must think of these stories or characters.

“There’s room for every person with their own individual personality and backgrounds to have their own space, engaging with the film to act in an active way and make a sense of it all.”

Lanthimos’ influences as a filmmaker range from watching Bruce Lee films to the verité of John Cassavetes and the groundbreaking choreography of Pina Bausch.

“They’re also vastly different… between them they are very dark or ridiculous or absurd. If we don’t find humor in every kind of situation, we’re kind of missing the entirety of the human experience. So I can’t avoid including that in the way I make films.”

Even as the budgets have grown, Lanthimos has retained final cut of his work. “I was always lucky to have this creative freedom,” he tells James Mottram for The Independent. “Searchlight just saw the potential in this film as well. It’s very straightforward. They know the kind of filmmaker I am, and they know that this is what you get. And they wanted to be involved.”

The film is considered more divisive than his two previous Oscar-winning works, The Favourite and Poor Things.

“The abstraction is presented with even more cloying cuteness, the sadism is more juvenile and purposeless, and the humor is stomach-turningly glib,” commented Slant critic Ryan Coleman.

Promoting a film that resists easy interpretation, Lanthimos is equally reluctant to put definitive labels on it. Like the idea that freedom is a prison.

“Well, I guess it raises those kinds of questions,” he told The Independent. “It is showcasing, I think, the complexity of relationships and it asks questions of whether we even know what we want when we’re free, or if that’s the best for us.

“Or if having some kind of structure and rules in our lives is actually beneficial. Or is it beneficial to also break from them?”

Kinds of Kindness feels like the most nihilistic film of his career. “Not having any hope?” he asks, “I don’t know… I just made a film that had a happy ending.”

 


Tuesday, 25 June 2024

This Is Not a Test: Prepping Broadcast and Digital Technologies for Paris 2024

NAB

article here

Virtual production, AI and digital outreach are some of the new and expanded innovations delivering comprehensive coverage of the upcoming Paris Olympic Games.

The Youth Olympic Games, hosted in Gangwon, South Korea, served as a fertile testing ground for innovation and experimentation in broadcasting and digital technologies for host broadcaster Olympic Broadcasting Services. 

“We have always looked at the Youth Games as a great opportunity to test things that, potentially later on, we can actually implement in the Olympic Games,” explained OBS CEO Yiannis Exarchos. “It’s very difficult to be testing many things in the summer games; you have to go with very mature technologies. We use the Youth Games as an incubator for innovation.”

The YOG is also an ideal training ground to bed in new digital tech and social media outreach aimed at younger audiences, something essential for the main Games, too, if they are to remain relevant.

Exarchos highlighted that the coverage for Gangwon 2024 was 12% more comprehensive than Lausanne 2020, with 170 hours of live coverage. The focus was not only on live sports but also on interviews, athlete stories, and behind-the-scenes content. This approach helps in building a connection with the audience and provides a platform for young athletes to share their journeys.

Localization for a Global Event

It’s also why localization of content is being ramped up. “We know that this gives bigger penetration,” said Exarchos. “It’s not just about showcasing the competition. It’s very much about giving a voice to the young athletes. We interview them, we follow how they prepare, how they train, how they warm up for the event.”

In Gangwon, OBS tested working more closely with 10 local content creators and influencers, in addition to 30 other content creators worldwide creating Gangwon-inspired content remotely.

Coverage here is not necessarily what is going on in the field of play, since that is the responsibility of rights holders.

“They were here to tell their own stories, to tell their own experience of the Olympic Games,” said IOC Digital Engagement and Marketing Director Leandro Larrosa.

Digital-First Initiatives

A collaboration with Pinterest brought another creative dimension to the YOG, with content ranging from figure skating makeup tutorials to Olympic-themed nails and winter sports fashion.  

Exarchos also claimed that Olympics.com ended 2023 as one of the strongest sports digital platforms in the world, despite it not being an Olympic year. 

Cr. Olympic Broadcasting Services

“We’re doing stories specifically targeted for different countries, about athletes of their own countries. So what you see in one country on the official Olympics channel Olympics.com may not necessarily be the same,” Exarchos said.

The Olympic app is the entry point for fans wanting a unique, immersive and personalized experience. Many new features will be available in the app for the first time this summer.

For OBS, the thrust of testing is to make production more efficient in a way that can be scaled up for the complexity of a Summer Games.

This effort started at least as far back as the 2022 Beijing Winter Games, when OBS began moving away from a conventional OB model. IP and cloud technology have enabled the transition from production at venues to production based on server systems, which could be anywhere in the world.

Intel is a prime partner to OBS, supplying compute and storage capacity. China’s Alibaba is another; it manages the cloud services for OBS.

“It’s a combination of virtualized processes that also uses a lot of cloud services,” Exarchos explained. “The system has proven itself very reliable. We will be using it in Paris for the coverage of judo, wrestling, tennis and shooting. We believe that it makes the future of Olympics broadcasting far more efficient and far more sustainable.”

In Gangwon, half of the operation was remote. Many of the traditional processes that would have been performed in the host city, like master control, distribution to broadcasters, graphics creation, and editing were actually done at OBS HQ in Madrid. 

Exarchos said, “This obviously leads to very significant savings, and very significant help for the local organizers. It means we have less people on the ground, they need less support, less logistics, less transport, less accommodation. This is the way to the future.” 

This doesn’t mean that everything can or should happen remotely, he added. “There are many things that should be happening in the host city, especially everything that has to do with interaction with athletes and production of shows that have to do with the city. But it’s pointless for us to be shipping containers around the world with equipment and bringing people to do something that they could be doing back home.”

AI Enters the Games

The use of AI has seeped into Olympics production in a number of ways.

In Gangwon, OBS tested two workstreams that it believes are “very important” for the larger scale of the Summer Games, for which OBS plans to produce 11,000 hours of content in less than three weeks. One workflow concerns automatic highlights generation, and the other AI-assisted tools for editors.

“It’s a huge amount of content to manage and to create and customize highlights for different countries, different athletes, different sports, for different platforms for social media, for vertical videos, and so on,” Exarchos said.

He explained, “AI has started very credibly producing this capability for us. It is giving our rights holding broadcasters a lot of capacity, too.”

Automated highlights will be produced for 14 different sports in Paris. Many of the algorithms it will use have been trained by OBS, almost from scratch.

“It’s important to understand that AI systems do exist for the main sports like football and tennis,” the exec said. “The difficulty for us is to create credible systems for many sports that are not as popular.”

AI will assist editors by suggesting which video and audio elements are important to include in stories. The idea is to save editors’ time trawling through mountains of footage and to produce polished videos fast.

These AI systems pull data from a sport’s live commentary. “In a massive, complex and dense event like the Olympic Games, where time is of the essence, this is an incredibly useful tool,” Exarchos pronounced.
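As a rough illustration of how timed commentary can drive clip suggestions (a hypothetical sketch; OBS has not published its implementation, and the keywords and padding windows below are assumptions), keyword spikes in the commentary feed can be turned into candidate highlight windows for an editor to review:

# Hypothetical sketch: propose highlight windows from timestamped live commentary.
# Keywords, timings and padding are illustrative, not OBS's actual system.

HIGHLIGHT_TERMS = {"goal", "record", "medal", "penalty", "disqualified"}

def suggest_clips(commentary, pre_s=10, post_s=20):
    """commentary: list of (timestamp_seconds, text) tuples from the live feed.
    Returns (start, end) windows an editor could pull from the video server."""
    clips = []
    for ts, text in commentary:
        lowered = text.lower()
        if any(term in lowered for term in HIGHLIGHT_TERMS):
            clips.append((max(0, ts - pre_s), ts + post_s))
    merged = []  # merge overlapping windows so each moment yields one suggestion
    for start, end in sorted(clips):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

feed = [(312, "She takes the lead on the final lap"),
        (340, "GOAL! An extraordinary finish"),
        (355, "That could be a new Olympic record")]
print(suggest_clips(feed))  # [(330, 375)] -> one merged candidate highlight

A production system would obviously be far more sophisticated, but the commentary-driven pattern shows where the time saving for editors comes from.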

Paris will see the use of AI digital production systems that will auto tag video and create automated summaries to further assist editorial. 

Another AI-driven analytics system will produce real-time feedback on audience engagement to keep the editorial teams better informed on what type of content to focus on.

“We’re also doing something similar with AI-generated assistance for editors who write and publish stories,” Exarchos added. “We do not allow AI systems to auto-publish stories but it helps us a lot in identifying all the elements that make up good stories for posting to social media.”

There is pressure to deliver. Larrosa said, “Everybody is expecting clips to social media to be almost instant, right after the live coverage of the competition. That’s where AI is a big help. We’re using Intel-powered technology to have this AI live clipping. We will also live stream vertical video to mobiles for the first time ever at this Games.”

Another semi-automated technology is coverage of the medal celebration. OBS is not just creating one feed but instantly creating and publishing content on all the main social media platforms in multiple different languages.

In Gangwon, for ice hockey, OBS tested an AI operated camera system, but it’s not quite ready for Olympic primetime.

“These technologies are not yet mature for the complexity that we have in the Olympic Games,” Exarchos said. “But I think that in a very short period of time, they’ll be quite mature for simpler coverage. It will be extremely cost effective and very, very sustainable in the long term.”

The use of cinematic lenses and 360-degree replays are intended to provide a more cinematic feel to the coverage, enhancing the Games’ visual appeal.

 


Guess Who Decides the Role of AI in Hollywood? Oh Yeah, You, a Human.

NAB 

Generative AI adoption is underway at many Media & Entertainment companies, according to a survey developed by Variety’s VIP+ in collaboration with HarrisX.

article here

Ultimately, however, it is consumers who will vote on whether they want generative AI used in M&E content and how far and fast that should happen.

The analysis in “Generative AI in Film & TV” partly draws on 28 interviews with leaders at generative AI tech companies (including Pika, Runway and Metaphysic, among others), service providers and filmmakers. It also polled more than 1,000 consumers and another 300+ workers in the industry.

The survey found that, as of May 2024, 80% of M&E decision makers say their company is either exploring, testing or actively deploying GenAI in some form.

Nearly half of them (49%) say their employer has already implemented AI, while more than a third of workers in the US film and TV industry say they currently use GenAI (and another 28% say they plan to).

When it comes to implementation, the report expects GenAI to be used for concept design, VFX and marketing and distribution, as well as for content localization.

Obstacles to Adoption

It’s worth noting that 6% of those polled said that their company had banned any use of the tech.

That indicates that it’s not straightforward to greenlight the use of AI, even where there are clear cost benefits to doing so. This is particularly the case when GenAI is considered for use in final screen output.

The primary obstacles to the effective adoption of generative AI include a lack of skilled AI personnel (31%), potential consumer backlash or confusion about AI-generated content (27%), and legal restrictions (27%). Additionally, concerns about the quality and reliability of AI-generated outputs (25%) and uncertainty about the sources of AI training data (25%) are notable barriers.

The biggest drawback right now, however, is not ethics or legal or labor implications, but the final quality of generative AI output.

More than half of decision makers told Variety that the quality of AI-generated content was relevant to their decision to use GenAI, followed by its efficacy and accuracy (39%).

 

What Consumers Think Is Important

It is viewers and subscribers who will be the ultimate arbiters of the extent and timing of generative AI use in the content they watch.

Consumers have mixed feelings about the use of generative AI in creating the content they consume. While some consumers are intrigued by AI-generated content, many remain skeptical.

About 36% of consumers are less interested in watching movies or TV shows written using generative AI, compared to 23% who are more interested. This divide suggests that transparency and education about AI’s role in content creation are crucial for gaining consumer trust.

Consumers who regularly use AI tools tend to have a more positive perception of AI-generated content. For example, those with one or two paid subscriptions to AI tools rate their perception of AI-produced content in movies and TV shows at an average of 3.68 on a scale of one (extremely negative) to five (extremely positive), compared to 2.53 among those with no interest in AI tools.

That said, consumers may be more accepting of GenAI being used for specific aspects of production such as creating sound effects, illustrations for animation, and visual effects. Acceptance is higher when AI enhances the content experience, such as through seamless dubbing or the creation of high-quality visual effects, according to the HarrisX/VIP+ survey.

For example, 55% of consumers are comfortable with AI-generated sound effects that represent the action onscreen, and 51% are comfortable with AI-generated VFX such as making an actor look older for a role.

Forty-three percent of consumers are OK with AI-generated voice-over narration for docs or animated characters; 42% are fine with AI-generated theme music or original scores.

Acceptance is lower for AI-generated digital replicas of actors, with 27% comfortable with digital replicas of deceased human actors and 28% with digital replicas of living human actors.

Only 34% of consumers are comfortable with AI-generated scripts or screenplays, indicating a preference for human creativity in writing.

Variety points out that studios have generally avoided disclosing if and how the tech has been used, even in promotional content, for fear of backlash.

For example, horror movie Late Night With the Devil faced a boycott after its directors disclosed that the production had used AI for three still images that served as interstitials in the film.

Ultimately, the report finds that “consumer acceptance is likely to fluctuate” with evolving awareness about gen AI capabilities.

A ChatGPT-derived summary of the report says the “future of generative AI in Hollywood is bright,” but requires careful navigation of the complexities involved.

“Transparency, education, and collaboration between human creators and AI technologies will be key to realizing the transformative potential of generative AI in the entertainment industry.”

 

 


Monday, 24 June 2024

Streamers look to AI to crack the codec code

IBC

Streamers are looking to AI to dramatically improve compression performance and reduce their costs, with London-based Deep Render claiming that its technology has cracked the code.

 article here 

For streamers, every bit counts. Their ability to compress video, maintaining quality while reducing bandwidth, is critical to the business. But as content increases in volume and richness, existing technology is buckling under the pressure.

The looming problem has been apparent for several years with developers turning to artificial intelligence and machine learning as a potential salvation. The prize is a market estimated to be worth $10bn by 2030 which makes AI codec developers prime targets for acquisition.

AI techniques are already being used to optimise existing codecs like H.264, HEVC, or AV1 by improving motion estimation, rate-distortion optimisation, or in-loop filtering. Content-aware techniques, pioneered by Harmonic, use AI to adjust the bit rate according to content.
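As a minimal sketch of the content-aware idea (illustrative only; the ladder values and the toy quality model below are assumptions, not Harmonic’s or any vendor’s numbers), the encoder estimates how complex a title or scene is and then picks the lowest bitrate that still meets a quality target, instead of applying one fixed ladder to everything:

# Illustrative content-aware encoding sketch: pick a bitrate per title or scene from a
# ladder, based on measured complexity, rather than one fixed setting for everything.
# The ladder and the quality model are toy assumptions.

BITRATE_LADDER_KBPS = [1500, 2500, 4000, 6000, 8000]

def predicted_quality(bitrate_kbps: float, complexity: float) -> float:
    """Toy quality score (0-100): harder content needs more bits for the same score."""
    return 100 * bitrate_kbps / (bitrate_kbps + 800 * complexity)

def choose_bitrate(complexity: float, target_quality: float = 90.0) -> int:
    for rate in BITRATE_LADDER_KBPS:
        if predicted_quality(rate, complexity) >= target_quality:
            return rate
    return BITRATE_LADDER_KBPS[-1]  # cap at the top of the ladder

print(choose_bitrate(0.3))  # e.g. a static talking head -> a low rung suffices
print(choose_bitrate(1.5))  # e.g. fast sports action -> pushed to the top of the ladder

Swapping the toy model for real per-title or per-scene quality measurement is essentially what content-aware encoders do, which is why they can shave bitrate on easy content without touching the hard scenes.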

UK based firm iSIZE, for example, built an AI-based solution that allowed third-party encoders to produce higher quality video at a lower bitrate and was acquired by Sony Interactive Entertainment last winter.

A second approach is to build an entirely new AI codec. California startup WaveOne was developing along those lines and was promptly bought out by Apple in March 2023.

That leaves the field open to one company which claims to have developed the world’s first AI codec and the first to commercialise it.

Deep Render, a London-based startup, has sidestepped the entire traditional codec paradigm and replaced it with a neural network module.

“This is an iPhone moment for the compression industry,” Arsalan Zafar, co-founder and CTO tells IBC365. “After years of hard work and exceptional R&D, we’ve built the world’s first native AI codec.”

He claims its technology is already “significantly better at compression, surpassing even the next generation codec such as VVC” and that its approach provides the opportunity for 10-100x gains in compression performance “advancing the compression field by centuries.”

What’s more, its tech is already in trial at “major publishers and Big Tech companies” which IBC365 understands to include Meta, Netflix, Amazon, YouTube, Twitch, Zoom and Microsoft.

Roll-out will begin in Q1 2025 before moving towards mid-market publishers and prosumers.

“For the first time in history the industry will go from ITU-backed standardised codecs to one company supporting the codec for all major content providers,” Zafar claims.

MPEG (Moving Picture Experts Group) has set the standard for digital compression for over three decades but has recently seen its monopoly eroded by streaming video services eager to find a competitive edge. The prevailing standard is H.265/HEVC, first standardised in 2013, and its successor is VVC – but Deep Render claims its technology demonstrates 10-15% improvements today, with significant advances by the end of the year as its algorithms develop.

“We are working with major content publishers to embed our AI codecs throughout their content delivery chain from encoder to decoder and all network layers in between,” Zafar says. “We’ll make sure all the data works and build that relationship to a point where they are happy to rely on our codec and for us to be their main codec provider. They will wean off MPEG codecs. We expect all major content publishers to be using Deep Render codecs.”

Zafar’s background is in spacecraft engineering, computer science and machine learning at Imperial College London. He founded Deep Render in 2019 with fellow Imperial computer science student Chri Besenbruch and it now employs 35. Last year the company received a £2.1 million grant from the European Innovation Council and raised £4.9 million in venture capital led by IP Group and Pentech Ventures.

Their confidence stems from the fact there is a real business issue to solve. Heavy streamers like Netflix pay more to content delivery network providers like ISPs the more bandwidth their service takes up.

Deep Render estimates that a streamer such as Netflix could save over £1 billion a year on content delivery costs by switching to its technology.

“Content published online globally is exponentially increasing but existing codecs are showing diminishing returns,” Zafar argues. “If you combine these two things it’s not great for the future of any business.”

He asserts that YouTube and Twitch stream huge amounts of content at a massive financial cost in bandwidth. “They really feel the pain and would love to shave a few billion off their content delivery costs. The easiest way to do that is with a better codec.”

There is continuing tension between streamers and telcos about the cost of carriage over telco-owned networks. Telcos argue that streamers should pay more. Content publishers push back knowing that their business model is under threat.

“ISPs could turn around tomorrow and significantly increase the cost they charge for carriage, or lower the streamer’s resolution or framerate or throttle their bandwidth to popular regions,” Zafar says. “This over-reliance on ISPs threatens the streamer’s business model. One way to deleverage the ISPs is to have a better compression scheme such that the compression itself is no longer an issue.”

The problem with existing compression

Traditional video compression schemes have approached the limits of efficiency. MPEG/ITU based codecs have been iteratively refined over nearly 40 years and most of the significant improvements in algorithms for motion estimation, prediction, and transform coding have already been realised. Every new codec makes the block sizes larger and adds more reference frames, but there is a limit to how long this can go on for.

Enhancements in compression efficiency often come with increased computational complexity, which can be prohibitive for real-time applications or devices with limited processing power. The cost of encoding, for example, increases around 10x with each new codec.

Traditional methods have also found it difficult to take the human visual system into account. According to Zafar the perceptual limits have been reached because we lack a rigorous understanding of how our vision works and we can’t write it down mathematically. However, methods that learn from data can learn these patterns and finally enable this.

The advantages of AI compression

AI codecs use algorithms to analyse the visual content of a video, identify redundancies and nonfunctional data, and compress the video in a more efficient way than conventional techniques.

AI-based schemes use large datasets to learn optimal encoding and decoding strategies, which can more effectively adapt to different types of content than fixed algorithms.

Secondly, instead of breaking down the process into separate steps (like motion estimation and transform coding), AI models can learn to perform compression in an end-to-end manner, optimising the entire process jointly. This makes the codec more context-aware.

AI models can also be trained to prioritise perceptual quality directly, achieving better visual quality at lower bitrates by focusing on features most noticeable to human viewers.
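A heavily simplified sketch of what end-to-end training means here (a generic learned-compression pattern in PyTorch, not Deep Render’s architecture; the layer sizes, noise-based quantisation stand-in and loss weights are assumptions): a neural encoder produces a compact latent, a decoder reconstructs the frame, and both are trained jointly against a rate-distortion objective that can include a perceptual term:

# Generic learned-compression sketch, not Deep Render's codec: encoder and decoder are
# trained jointly on a rate-distortion loss. All sizes and weights are illustrative.
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # quantisation approximated with additive uniform noise during training
        latent_hat = latent + torch.empty_like(latent).uniform_(-0.5, 0.5)
        return self.decoder(latent_hat), latent_hat

def rate_distortion_loss(x, x_hat, latent, lam=0.01):
    distortion = nn.functional.mse_loss(x_hat, x)  # real systems use perceptual metrics
    rate_proxy = latent.abs().mean()               # crude stand-in for entropy (bits)
    return distortion + lam * rate_proxy

model = TinyCodec()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)                  # dummy frames in place of real video
for _ in range(10):                                # toy training loop
    x_hat, latent = model(frames)
    loss = rate_distortion_loss(frames, x_hat, latent)
    opt.zero_grad(); loss.backward(); opt.step()

Because distortion and rate are optimised together, the whole pipeline is tuned jointly rather than module by module; replacing the MSE term with a learned perceptual metric and the crude rate proxy with an entropy model over the quantised latents is where the perceptual gains described above would come from.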

Being software-based means AI codecs do not rely on specialist hardware, so the expense and time of manually ripping and replacing systems can be avoided. It also means the conventional 6-8 year cycle for introducing next-gen codecs can be dramatically slashed.

“This is the true beauty of it,” Zafar says. “You could effectively stream a new codec overnight with a whole new set of parameters. Updateability is extremely easy and significantly reduces costs as specialised silicon is no longer required.”

Unlike traditional codecs, which are fixed, one-size-fits-all systems, an AI codec could be optimised for specific content, further increasing efficiency.

Zafar says, “The football World Cup is streamed to between 500 million and a billion people. An AI codec specifically trained on football match datasets would be significantly less expensive per bit when streamed at such scale.”

Deep Render says it would optimise its content specialisation algorithm for customers based on the customers’ own data.

There are other AI optimisation techniques being evaluated for commercial use. Companies like Bitmovin are experimenting with using AI to optimise encoding parameters dynamically, improving efficiency and video quality.

Nvidia RTX Video Super Resolution uses AI-driven post-processing to improve video quality through denoising, super-resolution, and artefact removal.

MPEG is now studying compression using learning-based codecs and reported on this at its most recent meeting.

MPEG founder Leonardo Chiariglione now runs the Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) initiative, and is developing a suite of AI-driven systems and standards, notably an end-to-end video codec called EVC.

But the gears may grind too slowly for the urgent demands of streamers.

“We have built an entirely new end-to-end, data-driven, perceptually optimised codec from the ground up using AI,” says Zafar, who has produced an AI codec primer course here. “All modules such as motion estimation, prediction, and transform coding are captured within this one neural network.”

All this said, AI video compression is an emerging field with much R&D ahead. 

One potentially significant hurdle is that deploying AI-based codecs requires compatibility with existing video playback and streaming infrastructure. Another is that AI codecs currently lack universal standards, making industry-wide adoption more challenging.

Zafar says Deep Render is leaving the door open to standardising its technology. “A lot of inefficiencies come with the standardisation process and we prefer to move fast but standardisation is not completely out of the picture. It has some benefits like building confidence among customers.”

Nor will compressing 8K UHD video be possible with Deep Render until at least 2025 or beyond.

“AI codecs are at the beginning of their development cycle,” Zafar says. “We have internal research showing significantly superior performance. These will mature over the next year, providing unprecedented gains in compression performance. We’ve barely scratched the surface.”