Tuesday, 18 July 2023

AI Copyright Law Is Complete Chaos But It Doesn’t Have to Be

NAB

No issue exercises more minds in the industry just now than AI but, as legendary scriptwriter William Goldman once said, “Nobody knows anything.”

article here

That’s because AI is a runaway train, speeding ahead of existing protections such as those around copyright and in contract law, which actors and writers are demanding be rewritten so that use of their personal data is rewarded in perpetuity. AI tools like ChatGPT are also black boxes even to the folk at OpenAI who developed them; no one seems sure how they actually work, let alone what they are capable of.

It feels like a maelstrom right now although more than one commentator has pointed out that what is actually happening is a good old-fashioned fight for rights between labor and capital.

In which case, a good deal of what is being played out in the name of AI is a continuation of existing trends and inequalities that could be dealt with by existing law, or extensions of same.

Competition law specialists Geradin Partners writes in a blog post, “many of the problems experienced by media organizations are neither new nor specific to GenAI. Therefore, in many cases, the solution may not necessarily consist in adopting new rules, but in sensibly revising and extending the scope of existing rules.”

Another lawyer, Gregor Pryor of Reed Smith, explained, “AI is pushing existing legal concepts to their limits, inventing new ones and generally questioning the relationship between our legal systems and machines in an unprecedented manner.”

A factor determining the economic impact of generative AI, for example, is who owns data and models. Geradin Partners notes that ownership of data and models are often highly centralized, leading to market concentration.

At a recent government hearing Senator Cory Booker said, “One of my biggest concerns about this space is what I’ve already seen in the space of Web2, Web3 is this massive corporate concentration. It is really terrifying to see how few companies now control and affect the lives of so many of us. And these companies are getting bigger and more powerful.”

Alarmingly, OpenAI boss Sam Altman confirmed at the Senate hearing: “What’s happening in the open source community is amazing, but there will be a relatively small number of providers that can make models at the cutting edge.”

Given the monopoly of such power, it is likely that concerns about abuses of dominance and “gatekeeping” will arise, thinks Geradin Partners, adding that practices such as tying/bundling, self-preferencing, default settings, and refusal to grant access to data may be employed to strengthen existing ecosystems. Examples include bundling search or social networks with generative AI tools, tying cloud services packages to AI services, etc.

Ensuring that generative AI evolves in a manner that is conducive to intellectual property rights (IPR) protection is arguably one of the greatest challenges to address from the perspective of the media and creative industries.

“It is widely accepted that inadequate IPR protection chills content creativity and innovation,” says the law firm. There are established rules for text and data mining, but Geradin wonders, as generative AI services proliferate and their popularity increases, whether these are fit for purpose.

Two of the most debated issues when discussing the regulation of the digital economy — not just AI — are the lack of transparency underpinning how technologies and applications work and the limited accountability of their providers. In the EU, the main instruments that (will) establish rules seeking to promote transparency and accountability in the digital economy are the Digital Services Act and the AI Act.

This is happening as the industry pushes for regulations. Michael Nash, chief digital officer for Universal Music Group, tells Winston Cho at The Hollywood Reporter that the training of machine learning models on copyrighted works, without permission from or payment to UMG’s artists, “enables us to have a very important seat at the table around the evolution and use of these models, particularly with respect to developing new licensing opportunities.”

He underscores that the point of adopting AI is to “put these tools in the hands of artists” to see “how far their vision can take this technology.”

The Society of Composers and Lyricists, whose members create scores and songs for film, TV and theater, maintains that AI firms should have to secure consent from creators for the use of their works to train AI programs, compensate them at fair market rates for any new work subsequently created, and provide proper credit, Cho reports. The SCL stresses that any regulatory framework should not grant copyright protection to AI-generated works, since doing so could flood the market with them, diluting the value of original pieces.

According to THR, UMG has been sending requests to take down AI-generated songs, but is fighting “an entire online community dedicated to making, sharing and teaching others how to create AI music.”

On Discord, members of a server called AI Hub released an album in April called UTOP-AI — a play on an upcoming project from Travis Scott — featuring the AI-generated voices of the rapper along with Drake, Playboi Carti and Baby Keem. It got nearly 200,000 views on YouTube and Soundcloud in just three hours before it was flagged for copyright infringement by Warner Music Group.

Tech companies entrenched in the M&E industry are taking a cautious approach. Some studios, Pixar among them, are building their own generative AI models but training them on their own back catalog of films. Others like Shutterstock, Valve and Adobe are flagging copyright concerns as a selling point.

Like Disney and other large studios, however, Adobe and Shutterstock are in the fortunate position of owning databanks of images, concept art and videos to train new AI models. Because they can be absolutely sure where the assets used come from, they can offer indemnification against any copyright lawsuits brought against their users.

 

“It is an extremely smart marketing technique for the companies, as they are both highlighting a fundamental issue with AI and highlighting how, by their nature, that issue will never arise for its users,” says Chris Sutcliffe at The Drum.

 

Valve, meanwhile, said it would not be hosting games that use AI-generated assets on its Steam platform. The company clarified to Victoria Kennedy at Eurogamer that its decision to delist a game created by a solo developer due to its use of AI-generated assets was not simply its opinion on the tech, but a reflection of how it interprets the current copyright laws.

A team of 14 legal experts across disciplines has just published a paper on generative AI in Science magazine. One of the key questions that emerged was whether copyright laws can adequately deal with the unique challenges of generative AI.

“Generative AI might seem unprecedented, but history can act as a guide,” three of the paper’s authors conclude in an essay written for The Conversation. For example, the US Copyright Office has stated unequivocally that only humans can hold copyrights.

Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.

Even here there could be a solution. Since copyright law tends to favor an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.

The issue is complex in the extreme but writers and actors (possibly directors, producers, production designers, concept artists and costume and make-up designers down the line) understandably want some assurance now that they are not going to be taken for a ride in future.

It almost requires an AI to compute all the possible consequences of how the technology might be used, and that use is being interpreted by humans with feelings and emotions and fears and values which in some ways are not predictable and not consistent with binary logic.

Back to Goldman. Would an AI really have devised the sparse but brilliant script for Butch Cassidy and the Sundance Kid, the film that set the template for every buddy movie since?

“Kid, the next time I say, ‘Let’s go some place like Bolivia,’ let’s go some place like Bolivia.”

 

Thursday, 13 July 2023

Red is in style with Quasar Science for Pier 59 Fashion Week

copy written for RED

Fashion models are used to traveling to exotic locations for a photo shoot or a runway show, but it is less usual for the environment to come to them. Now it can: Manhattan’s Pier 59 Studios has opened the largest virtual production stage for advertising and live events in the U.S.

article here

The 65’ curved mega wall screen made its debut hosting the opening night party of New York Fashion Week with models Candice Swanepoel, Lais Ribeiro, Taylor Hill, Johannes Huebl and drag performer CT Hedden on the guest list.

Center-stage was a catwalk and mock photoshoot designed to showcase how the mega wall can be used by fashion and commercial shoots as a backdrop for locations; on this opening night, those ranged from water settings to jungle, desert and forest environments.

In-camera effects were delivered live by cinematographer Tim Kang, associate member of the ASC and principal engineer, color & imaging, at lighting specialists Quasar Science.

“The production company Original Syndicate invited us to light and shoot the launch event for the stage during New York Fashion Week,” Kang explained. “Any time you do any sort of production using an LED wall, people who know what they are doing have to calibrate the camera and environment together. It is never plug-and-play.

“And many folks who know what they are doing choose RED KOMODO because it just removes a lot of the problems that DPs find when they start capturing in a volume.”

Kang says KOMODO is a perfect fit for any virtual production environment. “The camera has good dynamic range and color rendition, and it’s very lightweight, which makes for easy maneuverability. Most importantly, KOMODO has a global shutter.”

Kang explains, “I see a lot of cinematographers specifying the KOMODO for work on VP stages with good reason — there are zero issues with flicker associated with rolling shutter. Even if you don’t genlock — if you haven’t synced up the camera to the wall — KOMODO performs very well because of its global shutter.”

Both LED tiles and rolling shutter cameras progressively scan the picture, onto the display and onto the sensor respectively. But when you combine two imaging systems scanning in two different ways, you often get mismatches in exposure rather than a complete, continuous image.

“You might see lines across the screen which is especially noticeable when the camera tilts up or down,” Kang says. “It looks like someone took a knife and cut the screen horizontally.”

Slower shutter speeds accentuate the issue even if the camera is genlocked to the wall.

“That is why KOMODO has become very popular in the virtual production space. With a global shutter, you negate those issues, and you may not even need genlock. When the sensor is open, you capture all of the wall.”

For the NY Fashion Week party, they even shot at a double frame rate of 48 frames per second with no issues.

“It worked flawlessly. You’re not seeing any scan lines or issues with the wall. KOMODO really enables you to just go ahead and shoot.”
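The banding Kang describes can be modeled numerically. The sketch below is a toy simulation only, not a model of any specific camera or wall; the refresh rate, readout time, exposure time and PWM duty cycle are all illustrative assumptions.

```python
# Toy model of rolling-shutter banding against a PWM-driven LED wall.
# All numbers below are illustrative assumptions, not specs of any product.

LED_REFRESH_HZ = 3840   # assumed LED wall refresh rate
ROWS = 1080             # sensor rows
READOUT_S = 1 / 60      # assumed rolling-shutter readout time (top to bottom)
EXPOSURE_S = 1 / 100    # assumed per-row exposure time

def led_brightness(t: float, duty: float = 0.25) -> float:
    """LED panels are pulse-width modulated: on for `duty` of each cycle."""
    period = 1 / LED_REFRESH_HZ
    return 1.0 if (t % period) < duty * period else 0.0

def row_exposure(t_start: float, steps: int = 1000) -> float:
    """Numerically integrate wall brightness over one row's exposure window."""
    dt = EXPOSURE_S / steps
    return sum(led_brightness(t_start + i * dt) for i in range(steps)) * dt

# Rolling shutter: each row opens slightly later, so rows integrate different
# slices of the PWM cycle and record different brightness -> visible bands.
rolling = [row_exposure(r / ROWS * READOUT_S) for r in range(0, ROWS, 90)]

# Global shutter: every row exposes over the same window -> identical values.
global_ = [row_exposure(0.0) for _ in range(0, ROWS, 90)]

print("rolling-shutter brightness spread:", max(rolling) - min(rolling))
print("global-shutter brightness spread: ", max(global_) - min(global_))
```

In this toy model, making the exposure an exact multiple of the PWM period (which is what genlocking effectively achieves) also collapses the rolling-shutter spread to zero, which is consistent with Kang’s point that global shutter simply removes the need for that synchronization.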

Even with a global shutter, cinematographers and their camera team will need to calibrate the camera to the wall every time they shoot. That’s because of the differences between the way professional digital cine cameras record color versus the generally inferior color quality of LED panels.

“It’s simply the reality of the situation, and the ASC and other organizations are working to standardize it,” reports Kang. “Although every wall is slightly different, they all use the same type of diodes and these have a very narrow spectrum for red, green and blue.

“The way those diodes line up with camera sensors sometimes doesn’t quite agree with what a light meter would say. So, if the meter is taking readings from the wall and says it’s daylight, the camera might record it as a bit cooler or warmer and magenta instead of being exactly right.

“If you’re going to do it right, you have to calibrate the wall to the camera.”

There are many ways of doing it. The simplest and most practical is the conventional one: white balance by setting the camera to video white and dialing in the color control of the wall to match.

In addition to the traditional role of cinematographer, Kang was required to go deeper into the color and imaging pipeline to ensure the image coming out of the main presentation reflected the intentions of all the creative forces involved in this event.

For this, Quasar Science augmented the mega wall’s Litepanels Gemini fixtures with its R2 and RR fixtures and applied its image-based lighting (IBL) pipeline. IBL pipes video to large arrays of lighting fixtures to deliver more authentic real-time lighting on set.

“With volume production, the temptation is to light with tiles, but doing so has a lot of problems. The quality of light and spectrum can be poor. The output is often not bright enough, so placing arrays of fixtures on set that can display spectrum and playback video correctly with color fidelity allows for skin tones and fabrics to look more accurate to the camera.

“When it comes to a fashion shoot, those are surely the two most important things,” Kang says. “You can’t have that wrong.”

“Being able to light correctly in KOMODO meant everything, and we got naturalistic, rich filmic quality.”

 


Wednesday, 12 July 2023

Even the Industry’s Pioneers Are Still Working Out Virtual Production

NAB

We’ve only just scratched the surface of how to use light from virtual production displays, says Sam Nicholson, founder and CEO of Stargate Studios.

article here

Lighting the scene using LED screens is a big topic of discussion in the unions right now, he says, because it is a technology that is neither purely a light nor purely a screen: scenes can be illuminated with it even as it plays back imagery.

“I’m not going to take sides in that argument. But that’s something that we’re dealing with both playback and lighting right now, and trying to define them. It’s a new technology that’s right in between, nobody knows what to do with it.”

Nicholson is one of Hollywood’s leading virtual production and visual effects creatives. With nearly forty years as a visual effects supervisor, DP, director, and now virtual production supervisor, Nicholson and his company, Stargate Studios, have combined the latest in LED technology with their proprietary “ThruView” lighting system.

Interviewed at the 2023 NAB Show by Erik Weaver, director of ETC’s Adaptive Production project, Nicholson explained that VP is the process of capturing the real world and making it usable in such a way that it’ll play back in a volume.

“Virtual production is fabulous. [But] if you can afford to go to Rome to shoot live action, and get some great pasta, and have a great time, you know, go to Rome. If you can’t, then send a small team to capture Rome, the Vatican, and bring the data back [to your volume], and now you can control the situation.”

In his presentation, Nicholson talked about his journey with creating in-camera VFX, including working with the legendary Doug Trumbull on Star Trek: The Motion Picture. He described creating the effect of a 60-foot high column of light shot in-camera and in real time on stage for director Robert Wise.

 

He worked through the green screen period, which he described as one where “basically the crew would get a lobotomy going in. Nobody knows what it’s going to look like. How do you light it? How do you light for daylight if you can’t see daylight behind me? Where’s the sun? Green screen was very messy, because all the actors didn’t know which way to look. Really bad for the actors and very difficult for a director of photography, and very frustrating for the director.”

He later worked on the groundbreaking TV series 24, elements of which were shot on green screen.

“It was kind of a game changer, because all of a sudden, they said it’s a lot cheaper to bring the location to the actors than taking the actors to the location. It’s very difficult to shoot in Washington DC. when you’ve got to get a permit from one [multiple] groups to shoot anywhere.

“With green screen the actors didn’t have to be out all night. But dammit, we hate being on green screen. I mean, we had Kiefer Sutherland, like, walk off the set because he hated shooting on green.”

Virtual production solves a lot of these problems, giving actors a cue as to the environment they are in. But it is still very early days in the technology’s development.

Covering dialogue scenes with a shallow depth of field on a longer lens is ideal, but the wider the lens the more difficult it gets.

In addition, and perhaps the biggest challenge, virtual production isn’t as flexible as you might be led to believe. If a director changes their mind about a shot after the event, it remains difficult to fix that shot in post because the backgrounds are baked in.

“If you have a director who doesn’t know what they want or you’re on a short prep schedule, don’t try to do virtual production because you’ll get burned. Be really aware that you don’t have an alpha channel. There is no matte. So you’re gonna wind up with a big old rotoscoping bill if you change your mind.”

His advice? Prep, prep and prep. “Virtual production is not a panacea. It’s a great new tool that does certain things like reflective objects really well. But it does other things horribly, like changing your mind.”


Tuesday, 11 July 2023

It’s Definitely a New Media Experience in the MSG Sphere

NAB

More than just another giant screen or an upgrade to 4D cinema, the latest Las Vegas attraction is being touted as a new experiential entertainment format.

article here

“We are redefining the future of entertainment through Sphere,” MSG Entertainment executive chairman and CEO James L. Dolan says. “Sphere provides a new medium for directors, artists, and brands to create multi-sensory storytelling experiences that cannot be seen or told anywhere else.”

“This will be a quantum leap forward in the sense of what a concert can be,” U2’s The Edge told Andy Greene at Rolling Stone. “Because the screen is so high-res and so immersive, we can actually change your perception of the shape of the venue. It’s a new genre of immersive experience, and a new art form.”

U2 will be the opening act for the Sphere on September 29, the first of a largely sold out 25-date residency running through the end of the year.

The 366-foot-tall, 516-foot-wide dome is the culmination of seven years of work, with a budget that reportedly stretched to $2.3 billion, and its developers are aiming to reinvent every aspect of the live event experience.

Virtual reality without the goggles was the elevator pitch, MSG Ventures CEO David Dibble recalls to Rolling Stone.

“We thought, ‘Wouldn’t it be great to have VR experiences without those damn goggles?’ That’s what the Sphere is,” says Dibble.

It had to have the world’s highest resolution screen, and so it does at 16K by 16K. There is no commercial camera capable of recording at that resolution without having to stitch together images from a camera array. So MSG Entertainment built its own camera system and a whole postproduction workflow, which together comprise a system it calls Big Sky.

Naturally, the Big Sky single-lens camera boasts a 316-megapixel sensor capable of a 40x resolution increase over 4K cameras. It can capture content up to 120 frames per second at the 18K square format, and even higher frame rates at lower resolutions. They designed a custom media recorder to capture all that data, including uncompressed RAW footage at 30 gigabytes per second, with each media magazine containing 32 terabytes and holding approximately 17 minutes of footage.
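Those recorder figures are internally consistent; a quick back-of-envelope check (assuming decimal gigabytes and terabytes) bears out the quoted capacity:

```python
# Sanity check of the Big Sky recorder figures quoted above
# (assuming decimal units, i.e. 1 GB = 1e9 bytes, 1 TB = 1e12 bytes).
bytes_per_second = 30e9   # 30 GB/s of uncompressed RAW
magazine_bytes = 32e12    # 32 TB per media magazine

seconds = magazine_bytes / bytes_per_second
print(f"{seconds / 60:.1f} minutes per magazine")  # ~17.8 minutes
```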

According to David Crewe at Petapixel, who saw the tech first hand, “since the entire system was built in-house, the team at Sphere Studios had to build their own image processing software specifically for Big Sky that utilizes GPU-accelerated RAW processing to make the workflows of capturing and delivering the content to the Sphere screen practical and efficient. Through the use of proxy editing, a standard laptop can be used, connected to the custom media decks to view and edit the footage with practically zero lag.”

Specialist lenses have been built, too, including one with a 150-degree field of view, true to the view of the sphere where the content will be projected, and one with a 165-degree field of view designed for “overshoot and stabilization,” which is particularly useful when the camera is in rapid motion or in a helicopter.

The 164,000-speaker audio system can isolate specific sounds, or even limit them to certain parts of the audience. It was designed by German start-up Holoplot following investment in the company by MSG.

According to Rolling Stone, the patented audio technology they created allows them to beam waves of sound wherever they want within the venue in stunningly precise fashion. This would allow, for example, one section of an audience to hear a movie in Spanish, and another side to hear it in English, without any bleed-through whatsoever, almost like fans are wearing headphones. “It can also isolate instruments,” says Dibble. “You can have acoustics in one area, and percussion in another.”

The venue can seat 17,600 people, and 10,000 of them will be in specially designed chairs with built-in haptics and variable amplitudes: Each seat is essentially a low-frequency speaker. There’s also the option to shoot cold air, hot air, wind, and even aromas into the faces of fans.

“There’s a noise-dampening system that we used in the nozzles of our air-delivery system that NASA found really interesting,” Dibble tells Rolling Stone. “They were like, ‘Do you mind if we adapted that for the space program?’ We went, ‘No, knock yourself out.’”

Director Darren Aronofsky (The Fountain, The Whale) was commissioned to shoot Postcard From Earth, the first piece of cinematic content for the Sphere, with the Big Sky camera wielded by Oscar-nominated cinematographer Matthew Libatique.

“At its best, cinema is an immersive medium that transports the audience out of their regular life,” Aronofsky told The Hollywood Reporter. “The Sphere is an attempt to dial up that immersion.”

He added, “Like anything, there are some things that Sphere works particularly well with and others that present new problems to solve. As different artists play with it, I’m sure they’ll find innovative ways to use it and affect audiences in different ways.”

He also said, “We just recently figured out how to shoot with macro lenses and we filmed a praying mantis resting on a branch. Imagine what that may feel like when we present it 20 stories high.”

The venue could house events like Mixed Martial Arts and will also be a centerpiece of the Formula One grand prix in October. MSG has announced plans to build similar venues in London and elsewhere.

It is too early to say but perhaps the highly bespoke nature of the venue and the workflow required to produce “experiences” for it may work against it. Will the technology prove more restrictive than flexible?

The Edge made this comment to Rolling Stone: “Unfortunately, because of the amount of time and expense in creating some of these set pieces visually, it’s quite hard to be as quick on our feet and spontaneous as we might have been on other tours.

“But we still are determined that there will be sections of the show that will be open to spontaneity.”

 


Behind the Scenes: Wimbledon, The Championships

IBC

The Championships, Wimbledon 2023 may feature more UHD and High Dynamic Range coverage than ever this year, but it’s the volley of editorial firepower being served up that is the real story of the Championships’ technical production.

article here

“We know millions of people are watching the linear output but many are also doing so while on their phones, so we’ve designed and expanded coverage to fulfil a diverse range of audience needs,” said Georgina Green, Broadcast and Production Manager, Wimbledon. “There is so much going on at Wimbledon across both weeks that we essentially aim to get as much content out to everybody as quickly as possible.”

SW19 is dormant from a broadcast point of view outside of the annual fortnight, but the team at the All England Lawn Tennis Club’s (AELTC) in-house host broadcast division, Wimbledon Broadcast Services (WBS), spends the full year on development. The team is led by Paul Davies, who has all but lost his voice when IBC is invited to meet them in the middle of the Championships.

That’s okay though since Davies has more than capable deputies in Green and Broadcast Technical Manager, James Muir. “Wimbledon is the pinnacle of the sport and we aim to match that as the host broadcast service, whether that’s the technical quality of pictures, quality of picture in the edit, the camera angles we offer, new mic positions,” Muir said. “It’s about making it as good as it can be, whether that’s for a broadcaster in Australia or a smaller digital-centric rights holder in Asia.”

The value of Wimbledon to the AELTC as a business was laid bare when Covid cancelled the Championships in 2020, shrinking revenue to just £3.8 million, compared to a £292m turnover in 2018/19. Armed with new contracts, including with ESPN until 2035, Australian streaming service Stan Sport, and the BBC extended until 2027, the Club returned to a £288m revenue pot for the last financial year (the next report, for 2022, is due at the end of this month).

To maximise the value of rights holders’ investments and to ensure that bids for the next round of rights remain keen, the onus is on WBS to keep pace with changing audience demand. That means more attention to digital and social media but it also means more behind the scenes coverage. Sports documentaries like Netflix series Break Point have created mainstream audience expectation for stories beyond the action on the court.

“The onus is on us to facilitate that access for rights holders and make sure we can give them as much as they want but with an element of control and trust on behalf of the Club so that they [the AELTC] can see the benefit of it,” said Muir.

This manifests itself most obviously in a new pool called Access All England produced alongside the World Feed’s on-court action and made available to rights holders to cherry pick from for their own presentation.

It features, for example, new camera positions and audio of the players arriving at Wimbledon (via tunnel) and more of their journey throughout the day from practice court to lunch to locker room. This is bolstered by a wealth of beauty cams and ENG crews wandering among the summer-dressed throngs of the Wimbledon complex. It also includes footage and interviews of broadcast personnel inside the Media and Broadcast Centre.

Whisper is Wimbledon’s new production partner for this year and next, signed in part because of its track record in producing sport with a bias toward entertainment as much as action. It is producing the World Feed, international highlights, a creative preview film and an official film of The Championships, as well as Access All England.

Wimbledon Threads, produced by Whisper, is a strand of stories looking at fashion and clothing. The Purple Carpet is a series of short interviews with celebrities and those in the Royal Box. Second Serve is a second take on the day using different camera angles.

“We’re producing all that in broadcast quality putting it up on (MAM network) Mediabank and making it available in all formats - 16x9, 9x16, 1:1,” said Green. “Our digital team are making sure that what they shoot (on mobile phones) goes into Mediabank for use in socials. We’re working more collaboratively with digital and marketing teams to bring it all together.”

There’s even metaverse-style activation to attract the next generation, who will be weaned on games not BBC One. These include an online, branded Fortnite race game featuring Andy Murray, and a Roblox experience.

One measure of success will be ratings. Last year’s men’s final, Novak Djokovic vs Nick Kyrgios, peaked on BBC One at 7.5m and was also streamed live 2.6m times on iPlayer and BBC Sport online. In addition, the volume of hours consumed by audiences on TV was the highest since 2016, which saw Murray lift the title.

“Getting through it cleanly is always the priority,” said Muir. “Then there’s broadcaster feedback and so far it is very positive about what we are producing for them. Another indicator is whether, when there are big significant moments, we covered them correctly with the right editorial. We spend a year planning and there are always learnings to take away and improve for next year.”

Two flagship courts in UHD HDR

Last year Centre Court was the only court in UHD HDR and this was processed as a separate workflow to protect the HD feed. The big change this year is that No.1 Court is also UHD HDR, with a workflow solely for the format. The 16 other courts continue to be available in either 1080p HDR or 1080p SDR.

Sam Broadfoot, Technical Project Manager, NEP UK, commented: “With the need to continue to offer rights holders HD SDR feeds as we have in previous years, we’re now using 224 channels of conversion but we’re working in just one workflow and it means the setup of our trucks are similar.

“We’ve also found that the quality of the converted SDR feeds has improved as well, since the content is now being captured with a higher dynamic range.”

The increase in UHD-HDR feeds is only part of NEP Group’s full global production ecosystem in play at The Championships. NEP’s full suite of solutions includes broadcast facilities and OB trucks, connectivity, live display and other broadcast services supporting the World Feed.

Mediabank, NEP’s MAM solution is used for remote access to match highlights and other content to be ingested, managed and distributed for rights holders.

NEP Connect is providing a 10G link to Oslo from IMG, with further support from NEP Netherlands, which is supplying 1PB of onsite storage. Additional broadcast services from NEP include 36 EVS VIA machines, 58 host Sony cameras, 150 talkback panels and over 90 km of cable installed each year. More than 300 NEP broadcast engineers, technicians and crew members are onsite supporting the host broadcast and other rights holders.

Robotic cameras

Seven courts are equipped with the automatic camera system Tr-Ace, from NEP division Fletcher. Tr-Ace cameras use image recognition and LIDAR to automatically track players on the court, meaning a single operator can control and manage the system for all seven courts.

Aerial Camera Systems (ACS) is supplying 46 specialist cameras to WBS, eight more than last year. These are mostly Sony HDC-P50s with SMARTheads. The new areas covered are the players’ arrival area and some behind the scenes shots.

“It is important to WBS that each court looks exactly the same from its technical coverage,” explained Matt Coyd, Sales Director, ACS.

Centre Court features five robotic cameras including a 10m baseline track sitting behind the players and tracking their horizontal movement. It is designed into a special hide which is hard to spot on air.

Centre also has two compact cameras, one for each player, fitted discreetly to the Umpire’s chair, plus remotes at camera position 11 and in the northeast corner of the stadium.

No.1 Court is the same minus the northeast-corner remote, No.2 Court has two positions, and most of the other courts have at least one robotic camera taking a wide master shot from a high pole or the side of a building, all with bespoke mounts.

There are also two more track systems: a 25m track on the southern court and a 36m track on the broadcast centre roof covering the northern courts.

Various robotic SMARTheads capture beauty shots from the trophy balcony and clubhouse (which sports a 100:1 box lens), the players’ balcony, a crowd cam and even a ‘flower cam’ – the latter among those in UHD HDR. The press conference area also has a P50.

The practice area is also covered with robotic systems enabling rights holders to provide live coverage of players warming up.

Serving new data

Joining established data gathering and analytics partners SMT (scoring), Hawkeye (ball tracking) and IBM (ball speed and AI-driven highlights compilations) is new addition TennisViz, part of sports analytics company Ellipse Data, which is also home to the CricViz and SoccerViz apps.

It ingests the raw ball and player tracking data from Hawkeye and turns it around in less than a second into a range of new data points and insights that it claims have never been available before.

It does this for every point in every match and is used to support the TV broadcast and digital media coverage and, separately, to provide granular analysis for players and coaches.

One of the new metrics is Shots Played In Attack, a key aspect of the game for which there has never been an objective measure calculated in real time, according to Thomas Corrie, Head of Performance and lead analyst, TennisViz.

“Deciding whether a player is in attack or defence is not as simple as pinpointing their court position,” Corrie, a former LTA coach, explained. “The opponent’s position - left, right, forward or back - needs to be accounted for as well as the quality of the ball received.

“If I’m striking a ball and you are out wide then that gives me an advantage in that point,” he elaborated. “It’s not as simple as saying that if you are up the court you are in attack. It’s about the contact point at which you play.

“We consider the quality of the incoming shot because even if I am inside the service line playing a volley, if I pick that ball up from my toes I am defending even though I am at the net. Or I could be playing a volley at the net but on the stretch.”

A Conversion Score shows the percentage of time when a player is in attack that they go on and win the point. “You won’t necessarily win if you are the more aggressive player, so you have to be clinical and convert those chances,” Corrie explained.

This can be correlated by another metric, the Steal Score, showing when points are won when the player is in defence.

“The average Steal Score in the gentlemen’s draw is 31%; in the ladies’ draw it is 33% of points. But some players – like Alcaraz, Djokovic and Swiatek – win approximately 40-42% of points in defence. That will appear on screen and it will be the commentators’ job to educate the audience that it is not normal for a player to win above a third of their total points when in defence.”
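As a rough sketch of how such metrics could be computed, consider a simplified representation in which each point is tagged with the phase the player was in when it was decided. The data shape and function below are illustrative assumptions, not TennisViz’s actual implementation:

```python
def phase_win_rate(points, phase):
    """Percentage of points won while the player was in the given phase.

    The win rate in "attack" corresponds to a Conversion Score;
    the win rate in "defence" corresponds to a Steal Score.
    """
    in_phase = [p for p in points if p["phase"] == phase]
    if not in_phase:
        return 0.0
    return 100.0 * sum(p["won"] for p in in_phase) / len(in_phase)

# Hypothetical per-point data for one player across part of a match.
match_points = [
    {"phase": "attack",  "won": True},
    {"phase": "attack",  "won": True},
    {"phase": "attack",  "won": False},
    {"phase": "attack",  "won": True},
    {"phase": "defence", "won": True},
    {"phase": "defence", "won": False},
    {"phase": "defence", "won": False},
    {"phase": "neutral", "won": True},
]

conversion_score = phase_win_rate(match_points, "attack")   # 75.0
steal_score = phase_win_rate(match_points, "defence")       # 33.3...
```

The hard part in practice is the classification itself: as Corrie describes, deciding whether a point was played “in attack” depends on contact point, opponent position and incoming ball quality, not simply court position.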

TennisViz also measures shot quality. It does this by breaking each shot down into dozens of data points, from the basic type (serve, return, forehand, backhand) to the speed, height and spin of the ball as it crosses the net, its depth into court, its width and its bounce angle. It records this for every shot hit, along with the impact it has on the opponent, and the algorithm offers up an instant score out of ten.

“A dropshot is not measured against the same quality parameters as a forehand drive, for example,” Corrie said. “Different types of forehand shot are also measured differently to each other and in context of the impact on the other player. The game has different nuances and this is reflected in the score.”

Every shot is aggregated so that over the course of a match the stats can show the quality of any type of shot.
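One way to picture such a per-shot score is a weighted combination of normalised attributes, with a different weight profile per shot type so that, as Corrie notes, a dropshot is not judged against the same parameters as a forehand drive. The weights, attribute names and values here are invented for illustration; the actual TennisViz model is proprietary:

```python
# Hypothetical weight profiles: each shot type is judged on different criteria.
WEIGHTS = {
    "forehand_drive": {"speed": 0.35, "depth": 0.30, "width": 0.20, "opponent_impact": 0.15},
    "dropshot":       {"speed": 0.05, "depth": 0.40, "width": 0.20, "opponent_impact": 0.35},
}

def shot_quality(shot_type, metrics):
    """Score a shot out of ten from attributes normalised to the 0-1 range."""
    weights = WEIGHTS[shot_type]
    return round(10 * sum(weights[k] * metrics[k] for k in weights), 1)

# A fast, deep forehand drive that put the opponent under pressure.
drive_score = shot_quality(
    "forehand_drive",
    {"speed": 0.9, "depth": 0.8, "width": 0.6, "opponent_impact": 0.7},
)
```

Aggregating these per-shot scores by type over a match then yields the kind of “quality of any shot” summaries the production draws its lower thirds from.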

TennisViz algorithms take account of different playing surfaces. The Wimbledon application is trained on 5 million shots from the last two Wimbledon championships. The information and insights are presented as lower third graphics on screen but the next stage, perhaps for 2024, is to use the data points to build CG highlights to be used in pre- and post-game production.

Behind the Scenes: Wimbledon 2023 - BBC

Major rights holders the BBC and ESPN essentially take the rushes of the World Feed but apply a generous serving of their own presentational cream.

The BBC has over 70 vision feeds produced by WBS available in its NEP-supplied production truck and supplements this with its own jib-cam behind Court 18 for those sweeping shots over Henman Hill towards St Mary’s court. It is fielding two ENG crews with radio-cams to reflect more of the atmosphere of the event outside of the court.

In this endeavour they are aided by new lead presenter Clare Balding. The BBC’s main studio position is in the Broadcast Centre. Three other positions are deployed for instance for weather forecasts and crowd colour.

The BBC has also made a change to its highlights format this year. This was traditionally a live programme that tended to get delayed in the schedule or not broadcast at all because priority was given to late finishing live matches.

This year’s hour-long highlights show is post-produced onsite and transmits every night at 9pm; it is also available on iPlayer and Red Button to guarantee viewers can see it.

 

Monday, 10 July 2023

So What Does Everyone Else Think About AI? (It’s the Beginning or the End or Both)

NAB

AI is out in the wild and being used most extensively for creative experiments, according to a new survey.

article here

People are generating music and videos, creating stories, and tinkering with photos using free AI engines like ChatGPT.

Above all, people have simply been using AI systems to answer questions — suggesting chatbots like ChatGPT, Bing, and Bard may replace search engines, for better or worse.

The report, “Hope, Fear and AI,” from The Verge and Vox Media, polled 2,000 Americans about their attitudes towards artificial intelligence.

One finding is particularly clear: AI is expanding what people can create. In every category polled, people who used AI said they used these systems to make something they couldn’t otherwise, with artwork being the most popular category within these creative fields.

“This makes sense given that AI image generators are much more advanced than tools that create audio or video,” note the survey authors.

There is general awareness of the ethical issues around AI and art, but less clarity about what to do about it. For example, most people think artists should get compensated when an AI tool clones their style, but a majority also don’t want these capabilities to be limited. Indeed, almost half of respondents said they’d tested the system by generating art in the style or voice of a writer, artist or other well known figure.

A new survey from The Verge and Vox Media shows broad support for regulations on AI. Cr: The Verge

More than three-quarters of respondents agreed with the statement: “Regulations and laws need to be developed regarding the development of AI.”

These laws are currently in the works, with the EU AI Act working its way through final negotiations and the US recently holding hearings to develop its own legal framework.

 

The report highlights that there’s strong demand for higher standards in AI systems and disclosure of their use. Strong majorities are in favor of labeling AI-generated deepfakes, for example. But many principles with wide support would be difficult to enforce, including training AI language models on fact-checked data and banning deepfakes of people made without their consent.

The use of generative AI tools doesn’t stretch much beyond experimentation at this stage and in fact only one in three survey respondents have used them. When they do, brand recognition for ChatGPT is highest, though few people are familiar with the companies and startups behind the tools.

That said, people have high expectations for AI’s impact on the world — beyond those of other emergent technologies. Nearly three-quarters of respondents said AI will have a large or moderate impact on society. That’s compared to 69% for electric vehicles and a paltry 34% for NFTs.

More than 60% of survey participants predicted job losses as a result of AI and other societal dangers, including threats to privacy and government (ranked at 68%) and corporate misuse (67%).

“These dangers are weighted more heavily than potential positive applications, like new medical treatments (51%) and economic empowerment (51%). And when asked how they feel about the potential impact on their personal and professional life and on society more generally, people are pretty evenly split between worried and excited. Most often, they’re both.”

Fifty-six percent of respondents think “people will develop emotional relationships with AI,” and roughly half expect that a sentient AI will emerge at some point in the future (two-thirds don’t have an issue with companies trying to make one).

Yet, nearly 40% think that AI will wipe out human civilization.

Perhaps that’s why more people are worried than not.


Tuesday, 4 July 2023

DMC: A Pan Tone for Cinema

IBC

The efforts made to accurately represent diverse skin tones on screen are finally breaking through in the form of Digital Melanin Cinematography (DMC).

article here

From the roots of its invention the fabric of cinema has contained bias. Stemming from a deliberate decision to prioritise the aesthetic beauty of Caucasian skin over darker skin tones in the chemical composition of colour film stocks, the accuracy of how non-white people look on screen has largely gone unchecked. The bias is also ingrained in digital cinema systems, perpetuating the false assumption that darker skin tones require more light or are harder to film.

A group of South African filmmakers and scientists aim to change that by creating a new universally accepted standard in the approach to filming, photographing and processing melanin rich skin.

“This inherent bias has turned film into a political weapon in that [film] was never made for non-white communities,” said Mandla Dube, director and cinematographer (Silverton Siege) who has pioneered Digital Melanin Cinematography (DMC) with fellow filmmaker Ndumiso Mnguni.

DMC, they explain, is a study of how the appearance of skin from people of the African and Indian diaspora is affected in media. The team hopes to present the results of its research in a white paper at IBC this year. One of its aims is to counterbalance the weight of R&D that has led to prevailing standards for capturing skin tones on film.

“The film industry has been around for more than 100 years, during which people have oriented around a body of knowledge,” Dube explained. “But [non-Caucasians] were not active in producing film stock. Film was expensive and not easily accessible, limiting representation and how we could portray ourselves in our own stories. When the industry migrated to digital that research simply didn’t exist. The status quo continued.”

DMC: Improving on manufacturer LUTs

They have partnered with South Africa’s Council for Scientific and Industrial Research (CSIR) to devise a tool that cinematographers can use to more accurately calibrate a digital sensor to photograph skin tones of all hues. Digital cine cameras with 17 stops or more of dynamic range and wide colour spaces should have the sensitivity to capture any skin tone, yet when lensing their own work Dube and Mnguni avoid using manufacturer LUTs (the default Look Up Tables that come with digital cine cameras).

“We’ve found they don’t necessarily map around African skin tones accurately. So, we’re building up our own profile. One of our attempts is to build a colour chart that can sample skin tones at a higher rate and help the camera render a wide variety of skin tones.”

Skin is the largest organ of our bodies and something all of us are very conscious of. It provides us with important non-verbal information such as perceived ideas of age, health, and cultural background.

“Understanding how pigment is created and perceived through the human experience gives the study of melanin cinematography a foundation before translating into something that sensors can understand and reproduce accurately and beautifully,” he said.

They have conducted tests with different camera systems and compiled a database for the software which could be applied on different projects.

“We are using multiple data sources,” said Mnguni. “Some we think are good skin tone renditions and others are bad. It’s important not to get just one perspective, which would echo one’s own bias, but to achieve a universal sense of agreement with a larger sample. But where is the data going to come from, as far as digital melanin skin tone is concerned, unless we feed it?”

The filmmakers hope SMPTE and The Academy of Motion Picture Arts and Sciences will take notice. The Academy’s ACES colour image encoding system is itself nearly a decade old and is arguably due for an upgrade. Discussions would also be important within British and American cinematographer societies.

DMC: Industry collaboration is key

Reference monitors would also need calibrating to the same standard in order for cinematographers and colourists to see the results of their work. Ideally, all display panels from TVs to smartphones would also be built to take account of digital melanin’s skin tone accuracy.

“Clearly, this is not going to happen overnight,” said Dube. “We will need the collaboration of engineers, filmmakers and studios globally if we are going to be intentional about changing the rendering of all skin tones.”

There has been positive feedback from some camera and grading systems manufacturers including Sony and Blackmagic though none has yet adopted Digital Melanin’s schema into their product.

Google made improvements to the camera on its Pixel smartphone in 2021 to better capture darker skin tones. ‘Real Tone’ ensures that the auto white balance in photos is improved, so that darker skin tones don’t look paler than in real life. Announcing the development, Google said it would focus on making images of people of colour “more beautiful and more accurate.”

Just as important as technology change for Dube and Mnguni is to provoke dialogue among the cinema community in the hope of establishing best practices for the curation of images that respect all skin tones.

“We’re not saying that African skin has never been rendered beautifully through cinema history,” said Dube, “but we are saying there’s a lack of consistency of standards and that we have a framework that will yield the best results. Digital Melanin is a chance to interject new information into the curation of the digital negative and become part of the growing body of knowledge in film.”

They argue that when cinematographers prepare to shoot a project with the intent of skin tone accuracy it is a process of trial and error. The proposed DMC tool would give filmmakers a starting point that removes the guesswork.

“We’re not taking away from the filmmaker’s own creative process but giving them a good basis of truth from which to begin,” said Mnguni.

DMC: Positive progress to date

Their work has already made an impact. Sony Pictures’ The Woman King was shot in 2021 in South Africa by Polly Morgan ASC BSC with Digital Melanin’s guidance.

“Historically some filmmakers haven’t done the best job in lighting properly for dark skin,” Morgan explained. “From a light meter to a camera sensor every exposure tool [cinematographers use] is based on 18% middle grey. Since 18% grey is matched with lighter skin tones, when you are exposing everything is related to Caucasian skin. The aim of Digital Melanin is to help cinematographers everywhere to light dark skin with accurate tonality. It is important for everybody to have this conversation and not be shy of shooting black skin or nervous about broaching this topic.”
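Morgan’s point about 18% grey can be made concrete with a little arithmetic. A reflected-light meter places whatever it reads at middle grey, so a surface’s reflectance relative to 18% determines where it sits tonally when exposure is set “by the meter.” The reflectance figures below are common photographic rules of thumb, not measured data:

```python
import math

MIDDLE_GREY = 0.18  # reflectance that exposure meters are calibrated around

def stops_from_middle_grey(reflectance):
    """Stops above (+) or below (-) middle grey at which a surface renders."""
    return math.log2(reflectance / MIDDLE_GREY)

# Illustrative rule-of-thumb reflectances:
light_skin = stops_from_middle_grey(0.36)  # +1.0 stop above middle grey
dark_skin = stops_from_middle_grey(0.09)   # -1.0 stop below middle grey
```

With everything anchored to 18%, lighter skin lands a stop up the tonal scale while darker skin falls a stop or more down it, which is why DMC argues the anchor itself embeds an assumption rather than a neutral reference.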

Netflix is also supporting Digital Melanin’s initiative. The streamer has invested $160 million in African originals since 2016 including commissioning Dube’s Silverton Siege.

“Netflix is talking to us about how to integrate our ethos into work shot for Netflix Africa,” said Dube. “Naturally, a lot of content shot here now mostly features African skin tones in front of camera.”

The ‘Digital Melanin’ approach doesn’t just concern lighting but encompasses a production-wide embrace of wanting to see people correctly on screen. Make-up artists, for example, have a role to play in learning about the best application for different skin tones.

“How people are portrayed in cinema has a profound effect on how people see themselves,” said Dube. “Growing up I was watching Caucasian people rendered in a way that looked beautiful and larger than life. That has an effect on your self-esteem. How someone looks aesthetically on camera can have a huge impact on how they perceive their sense of self-worth. This is gradually starting to change and we want to show how we can be part of moving that further upstream. This is the philosophy of DMC.”

Other notable cinematographers like Tommy Maddox-Upshaw, ASC (White Men Can’t Jump) are making vocal and visual statements about this nuanced and global approach to tonality.

“When I look at shows, a lot of the Black folks look monochromatic and that’s not right. I have four sisters and we are all different hues of brown. I am a different complexion to my daughter.”

He extended this sensibility universally. “People in the UK have a different skin pigmentation from Caucasians in South Africa or the Mediterranean. All digital cameras interpret skin tones a certain way but my take is that I should be the one in control of manipulating skin tone if I want to.”

Cinematographers, encouraged by film schools, will still often take their cues from the classics of European art. Caravaggio, Rembrandt and Vermeer are revered for their lighting, true, but also their portraits of white skin tones.

“What does that say to me as a black cinematographer when working with African skin tones?” asked Mnguni. “Which painter do I refer to as a baseline?”

Inherent bias

Historically, the industry standard for capturing images was centred around ‘Shirley cards’, which were used to calibrate skin tones and light. They featured only Caucasian people until the 1970s, when photographers shooting commercials raised concerns over not being able to capture the ranges and variations of colour found in wood and chocolate.

Shirley cards were distributed by Eastman Kodak to photo labs and featured photos of “similar-looking alabastrine, brunette white women,” according to Kaitlyn McNab writing for Allure.

“Shirley became the standard for colour correction, the yardstick used for processing by technicians, and now, a symbol of the skin colour bias deeply rooted within the world of photography,” she writes.

While FujiFilm began to alter its colour stocks to better reproduce Asian skin tones, the R&D from Kodak, the world’s dominant film stock manufacturer, persisted in the transition to digital.