Wednesday, 29 March 2023

Virtual Production Might Be Part of the Process, But At This Point, Everybody Needs Education

NAB

Virtual production is on the rise and most filmmakers are excited about using it, but many are dissatisfied with the current tools for doing it.

article here

These are some of the insights revealed in a fresh survey conducted by Chicago-based film-tech outfit Showrunner, which claims the report is the first of its kind in the industry.

Nearly 800 filmmakers were surveyed, along with 72 virtual production facility operators for the State of Virtual Production report.

The first set of stats that jumps out is about the industry’s level of enthusiasm.

Among the general population of filmmakers surveyed, about half are “excited” about the rise of this new technology, with 22.3% “skeptical.”

Those skeptics tend to fade away once they’ve got to grips with the technology.

“The real kicker, though, is when we filter for those professionals with three or more years of experience with the new technology. At this point, the more experience folks have with a new technology correlates with increased excitement about it,” says Showrunner’s CEO, Shane Snow. “It’s a pretty clear indicator that the tech is here to stay.”

When asked, more than 75% of filmmakers said they anticipated doing at least some work using virtual production technology this year.

 And more than three-quarters of studio owners and operators indicated that they would be doing more VP work this year than last. Fifty-seven percent said they anticipated doing “a lot more.”

About one-third of virtual production work currently being produced is for car/auto shoots. Film and TV projects each make up nearly 20%, with commercials, music videos and live events making up the rest.

The survey asked a variety of questions about specific tools and techniques in virtual production, with the feedback indicating that work needs to be done before everyone in the industry might fully embrace the new tech.

“The stats show just how unsure folks are about the quality and technical difficulty of virtual production,” said Snow. “This says to me that some filmmakers are going to still play the ‘wait and see’ game until other, more bullish early adopters figure out these answers definitively.”

Nearly 80% of virtual production studio owners anticipate booking more work this year than last year, per the survey.

“Virtual production is one of the fastest-growing trends in filmmaking right now. But as the data shows, the industry needs to make sure the hype translates into tools and education that make filmmakers confident that they can actually do it.”

 


Where the Metaverse Meets the Real World

NAB

The metaverse. Remember that? You can’t escape the feeling that in reality it isn’t anything life- or business-changing, at least not yet.

article here

That said, aspects of what we might call the metaverse are already here and businesses are basing investment decisions on the future of the internet.

Investors for example bought $650 million of virtual real estate last year, we learn from a two-part “Future of the Metaverse” podcast hosted by chip maker ARM.

“They’re buying stuff that doesn’t exist,” says Matthew Griffin, the founder and CEO of the World Futures Forum. “We’re talking about a thing that’s only recently got a definition that technically doesn’t actually exist but that actually has real impact in the real world.”

The other participant on the podcast is ARM’s director of innovation, Remy Pottier. He points to examples of the metaverse being built today by companies including Autodesk, NVIDIA and BMW, which are creating digital twins to train robots and educate workers.

Other examples from the gaming world are Roblox, Unity and Epic Games. “These experience creation platforms and 3D engines are generating billions in revenues generally just from the platform itself,” says Griffin.

He also suggests that the metaverse is a new marketing channel where brands can access existing or potentially new customers.

“It makes a lot of sense for brands to switch part of their digital marketing budget to metaverse-related budget and test virtual products before they go and build them in the physical world.”

For example, drugs could be designed in virtual reality landscapes and then manufactured in the real world, at speeds that were unimaginable even just a couple of years ago.

“For consumers, it means that a lot of live or play experiences will switch to the metaverse,” Griffin posits. “This means potentially new immersive, personalized experiences. Wherever you go, it means 3D virtual options open up for film, television, and music that we will be able to consume.”

Physical hardware such as VR headgear remains a primary barrier to consumer experiences in the metaverse, but Griffin thinks that “increasingly in the next couple of years, from a gadgets perspective, we could actually see slipping into the metaverse as easy as putting on a pair of sunglasses.”

The next step is easier user control via gesture, voice or other forms of haptic interface.

Pottier outlines four categories of use case driving metaverse development.

The first is about transcending time and place — digital teleportation.  “It’s about adding a way to travel the world in a digital way without moving from your chair.”

It’s not just about “tourism,” but business, too. He points to a next generation of immersive video conferencing that enables one to “teleport.”

“It can be training, education, product development… to co-create and co-develop product in the metaverse from sites that will be in India, or in China, or Europe, and people can meet in this collaboration room, look at the device, touch it and decide whether or not it’s the right device they want to create.”

Another use case is dubbed “window into the unseen.” This is a realistic simulation of what’s happening inside, for instance, a running motor or an internal view of organs for surgeons.

Number three is alternate reality. One example is Pokémon Go, where digital worlds are overlaid on the real in real time.

A fourth is expanding human capabilities. “We are weak today and very limited when you think about the metaverse capability, which is infinite. Just think about real time translation of everything you do. You will be able to speak any language without even having to think about it. It could augment your five senses, from everything to the digital world or become your super digital assistant, that basically knows everything you did.  The digital assistant just knows, because it has been able to store everything.”

In the second part of the podcast, Pottier and Griffin discuss how closely the metaverse could resemble sci-fi movies.

Griffin argues that a prerequisite to any form of successful metaverse is a set of laws and legislation to build it on.

“For example, if I’m going to be building my metaverse and I choose a particular platform to build it, and bear in mind that can incorporate an entire city, as some countries are trying to do, what happens when the company hosting that platform goes bust?

“We’ve seen a lot of virtual reality trademarks already being registered by third parties that have nothing to do with the original trademarks.

“What happens when somebody else goes onto that same platform, into my virtual world, and then starts building their own virtual reality world in my world? Where do we end up with this kind of Disney multiverse madness?”

He goes on, “How do we actually audit what’s going on in the metaverse? Because we’re doing all these different things, we have no idea how to keep track of them, how to monitor them, how to report on them. Adidas, for example, has been selling NFTs [but] how do you actually report on that?”

The pair then went on to discuss more far-fetched concepts such as the blending of the virtual with the real, a familiar concept from trippy sci-fi classics like Total Recall and The Matrix.

The gist of the discussion is that many of the elements that could lead to such fictional scenarios are already science fact.

Pottier says, “To actually get into The Matrix scenario, I think first we need to actually already agree to live in The Matrix in a virtual actual reality, at least part of, maybe most of our time. It means that we are already living in some kind of Ready Player One kind of world, and we already agreed to do that.”

Another idea: most people are used to the idea of a brain-machine interface, like a skull cap that reads our brain signals and converts them into text or images.

“Three years ago, we managed to prove that you can actually use artificial intelligence and brain machine interfaces to push information into people’s heads,” says Pottier. “So in The Matrix, when Trinity goes up to the Huey and says, I need to learn how to fly a Huey now and all of a sudden the knowledge is uploaded to her brain, and she gets in and then flies Neo out over the skyscrapers and the skyline. We’ve already done that.

“What scientists have figured out in the labs is how to upload or transmit knowledge to your brain using technology. So when we actually have a look at that Matrix, we are way beyond that already.

“We’ve got a neuroprosthetic chip that is used in Alzheimer’s patients, able to read your biological brain signals and convert them into ones and zeros. For Alzheimer’s patients, this improves their memory retention by 30%. But if I can convert your brain signals into ones and zeros and store it on a computer chip in your head as a memory, isn’t that memory downloading? And then couldn’t I take those ones and zeros and push them into the cloud?”

 


Making the Fame Monster of “Swarm”

NAB

The first series from Donald Glover following the conclusion of Atlanta, Swarm obviously aims to provoke — or, in a more on-theme metaphor, pack some sting.

article here

Glover’s new show is designed to make headlines, proclaims Alison Herman at The Ringer. Most critics agree with her that, while packing more punch than most, the series has so much demanding our attention that it ultimately lacks focus.

The pop star character and her fan entourage at the center of the seven-episode, 30-minute limited series are figuratively if not quite literally intended to be Beyoncé and her Beyhive.

Before each episode, a riff on a standard disclaimer declares, “This is not a work of fiction. Any similarity to actual persons, living or dead, or actual events, is intentional.”

The character, Ni’Jah, is a musician whose fans call her “queen” and “goddess,” and who surprise-drops visual albums that take over the internet. “[She] doesn’t resemble Beyoncé,” Vulture’s Roxana Hadadi finds. “She is Beyoncé, and Swarm has no interest in pretending otherwise.”

The series investigates stardom — or, rather, “stan-dom” — the obsessive nature of fans and celebrity cults. Swarm is Glover’s first project under his lucrative deal with Amazon Prime Video and is co-created with Janine Nabers.

They say they drew inspiration from real events that occurred between 2016 and 2018, a window that happens to include the release of Beyoncé’s 2016 visual album Lemonade and the #WhoBitBeyonce internet debate.

The show is, in some ways, its predecessor’s inverse, observes Herman. “Atlanta, too, was about music and mega-fame, but its point of view belonged to the performer. Swarm switches to that of an obsessed ultra-fan: Dre, who’s been part of the Beyhive — sorry, Ni’Jah’s ‘Swarm’ — since she was a teenager.”

Asked by Variety’s Angelique Jackson just how far they pushed the truth of these events — and whether they ever worried about how far Amazon would let them go — Nabers said: “Everything is legally combed through. If we pushed it, we pushed it to the very, very, very edge, but it’s legal and we’re proud of that.”

It is in fact Dre, played by Dominique Fishback, who is the series’ protagonist, and by the end of the first episode she is revealed to be more than a little deranged.

She goes on a killing spree in honor of her dead best friend and to protect, as she sees it, Ni’Jah.

Glover explained to Jackson that the concept of a Black woman serial killer was born from a tweet he read.

“I remember them saying like, ‘Why are we always lawyers and, like, best friends? We can be murderers, too.’ And I was like, ‘That is true,’” Glover said.

Nabers follows this thought up with Ben Travers at IndieWire, referencing how Dahmer recently became a huge Netflix hit.

“I think as Americans, we’re so conditioned to seeing white men be angry. We’re giving them that space for violence on film and TV.”

“Our writers’ room was completely Black,” she added. “All our directors are Black, [and] most of our producers are Black.”

In imagining what it would look like if the serial killer subgenre focused on a Black woman instead of a white man, the terminology they used was “alien.”

“[Dre] is an alien in her own world,” Nabers told Selome Hailu at Variety. “If you look at the pilot, when she gets to Khalid’s house, there’s aliens on TV. That’s a through line with her throughout the series. We looked to [Michael Haneke’s] The Piano Teacher for inspiration. Donald introduced that movie to me, and it blew my mind. It centers around a woman who has a very everyday way of living her life on the surface, and then when you peel back the layers of her complicated psychology, you unearth a completely different type of human that is very alien-feeling.

“But me being from Houston and Donald being from Atlanta, we wanted to filter it through a Southern, Black female perspective. It is a little bit like a sister Atlanta when you look at the weird family relationships.”

The show’s casting is one not-so-subtle way of grabbing headlines. Paris Jackson, Michael Jackson’s daughter, plays a character who presents as white but calls herself Black because she has one Black grandparent. Casting director Carmen Cuba apparently pitched Paris Jackson.

“We were like, ‘Exactly. That’s exactly what we’re talking about,’” Nabers told Hailu. “I’m a Jewish woman, she identifies as Jewish, so we bonded about that. She really just owned this character of a light-passing biracial woman who is really intent on letting everyone know about her Blackness.”

Chloe Bailey plays Dre’s sister and a protégé of Queen Bey herself, increasing Swarm’s connection to its all-but-explicit subject.

Episode four guest stars Billie Eilish, who makes her acting debut on the show as the leader of a women’s cult — an intentional parallel to her role as a pop star.

Critics largely give the show a thumbs up for its ambition and subversive qualities.

“That Swarm is only intermittently successful doesn’t make it any easier to look away from the screen,” says Mike Hale in The New York Times.

He astutely observes that Swarm inhabits the space between horror and comedy where Atlanta often thrived.

Hale adds, “It’s not hard to understand why more and more filmmakers are choosing the horror genre for stories set in contemporary America, particularly those involving the lives of people outside the white-male protective bubble.”

“Think the Coen Brothers meets Atlanta meets Carrie, with some Basic Instinct and Perfect Blue thrown in there too,” writes Pitchfork’s Alphonse Pierre. “Celebrities are worshiped — and they often turn a blind eye to their obsessed fans’ worst behavior while milking their fanaticism for every last dollar.”

It also has some of the stylistic trademarks of Atlanta, which, as with that show, make it uneven. Like Atlanta’s mockumentary episode, for instance, one episode of Swarm is done in true-crime documentary style.

“Swarm needs much more clarity on what it wants to say about fandom in general and the specific fan at its center,” finds Herman in The Ringer. “Violent, vicious, and extremely online, Swarm obviously aims to provoke. Once the buzz dies down, though, there’s not much substance to sustain the hype.”

Vulture’s Hadadi says, “Swarm feels boldest when it wonders when person-to-person devotion becomes abstract glorification, and what inner mechanics inspire someone to give themselves over to another.”

“Thankfully, as the series progresses, it reveals itself to be much more than a stylized parody centered around what many might consider obvious internet bait,” writes Kyndall Cunningham of The Daily Beast. “Beneath the Beyoncé of it all, Swarm is ultimately a story about grief and isolation.”

Hale is particularly critical, believing that Swarm doesn’t work through or make strong dramatic use of all its ideas and “ends in a formless, non-sequiturish manner. It feels as if no one really knew where they wanted to take things,” he says.

“In the balance of the season, the viscous, seductive ambience and dream-logic storytelling mostly fade out, replaced by high-concept, tonally garish episodes that hold your attention but stand alone like neon billboards, adding little to our understanding of Dre beyond the facts of her back story, doled out in typical streaming-series style.”

Nabers seems to defend their approach, saying that they deliberately steered clear of definitive messaging.

“I don’t think that, as a brand, Donald and I believe in a message,” she commented during a Q&A following the film’s premiere at SXSW, as Variety’s Hailu reported in a separate article. “People can interpret it the way that they want to. We hope it inspires people in some way to create weird punk shit, or to talk

 


How Hollywood Is Handling the Climate Crisis

NAB

Disaster movies like 2012, Greenland and Don’t Look Up aside, Hollywood has barely touched on the existential crisis that has been facing the planet for decades.

Perhaps climate change stories don’t sell, although the acclaim greeting Extrapolations suggests otherwise.

article here

The Apple TV+ series includes children struggling with a lethal condition called “summer heart,” wildfire smoke semi-permanently blotting out the sun, and people wading into churches to worship. According to Sammy Roth at the Los Angeles Times, it has shown that “a haunting, rage-inducing, totally necessary series about the climate dangers on the horizon” is exactly what we need.

The subject matter attracted an all-star cast, too, including Marion Cotillard, David Schwimmer, Edward Norton, Meryl Streep and Forest Whitaker.

“We need more climate stories. We need more diverse climate stories. And there’s tons of climate people who are willing to work with folks in Hollywood to get the stories right,” says climate policy expert and advocate Leah Stokes.

Stokes publishes the environmentally themed podcast, A Matter of Degrees, with Katharine Wilkinson, and was interviewed by J. Clara Chan at The Hollywood Reporter.

“The vast majority of Americans think that climate change is real — it’s happening now,” Stokes says. “Deniers are maybe 10% of the population. Our show is really for folks who want to go deeper on the climate issue and are concerned about it, which is the vast majority of American people, and we want to get into the details in an accessible way that people can understand.”

Communicating about practical changes we can all make shouldn’t be talked about as a “sacrifice,” she argues.

“So much of the branding, from those who don’t want us to transition off of fossil fuels, has been painting what we’re doing as being sacrifice. I have an EV, I have solar on my roof, I have two heat pumps — one for my water, one for heating and cooling my home. I have all these things and guess what? I can still take a hot shower; I can still drive around. I can still do all the things that I could do with fossil fuels,” she says.

“That’s when we’re going to win, when people really understand that, actually, it’s just better to not poison myself while I cook myself lunch by combusting gas in my house. And it’s just better to drive an EV because it’s cheaper and I don’t have to worry about high [gas prices].”

In terms of production, Hollywood can do more, too, for example by electrifying sets instead of relying on diesel generators. Federal government tax incentives can be tapped, for instance, to recoup 30% of the cost of solar and batteries.


Tuesday, 28 March 2023

Should Generative AI Be Held to the Same Copyright Laws as Human Creators?

NAB

Anyone suggesting generative AI systems are unfairly exploiting the works of creators is wrong, says Daniel Castro, director of the Center for Data Innovation.

article here

He argues that generative AI systems should not be exempt from complying with intellectual property (IP) laws, but neither should they be held to a higher standard than human creators.

Castro’s report refutes the arguments made about how generative AI is unfair to creators and also acknowledges that there are legitimate IP rights at stake.

Training AI Models

The biggest debate when it comes to copyright is whether generative AI systems should be allowed to train their models on text, audio, images, and videos that are legally accessible to Internet users but are also protected by copyright.

Some creators argue that it is unfair for developers to train their AI systems on content they have posted on the Internet without their consent, credit, or compensation.

Castro says that people do not have the right to use copyrighted content any way they want just because they can legally access it on the Internet. However, their not having the right to use it any way they want does not mean they cannot do anything with this content. For example, search engines can legally crawl websites without violating copyright laws.

“While it will ultimately be up to the courts to decide whether a particular use of generative AI infringes on copyright, there is precedent for them to find most uses to be lawful and not in violation of rightsholders’ exclusive rights.”

Is training AI systems on copyrighted content just theft? Online piracy is clearly theft, says Castro, but seeking inspiration and learning from others is not.

“In fact, all creative works are shaped by past works, as creators do not exist in a vacuum. Calling this process theft is clearly inaccurate when applied to the way humans observe and learn, and it is equally inaccurate to describe training a generative AI system.”

Is it wrong to train AI systems on copyrighted content without first obtaining affirmative consent from the copyright holder?

According to Castro, copyright owners have the right to decide whether to display or perform their works publicly. But if they choose to display their work in public, others can use their works in certain ways without their permission. For example, photographers can take pictures of sculptures or graffiti in public places even when those works are protected by copyright.

“There is no intrinsic rationale for why users of generative AI systems would need to obtain permission to train on copyrighted content they have legal access to,” he says. “Learning from legally accessed works does not violate a copyright owner’s exclusive reproduction and distribution rights. Unless human creators will be required to obtain permission before they can study another person’s work, this requirement should not be applied to AI.”

Critics of generative AI are also likely to overestimate individual contributions. In figures given in the report, Stable Diffusion trained on a dataset of 600 million images. Of those, out of a sample of 12 million of the most “aesthetically attractive images” (which presumably skew more toward works of art than other random images from the Internet), the most popular artist (Thomas Kinkade) appeared 9,268 times. Put differently, the most popular artist in the dataset likely represented only 0.0015% of all images in the dataset.
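The report’s proportion can be checked with simple arithmetic. A minimal sketch, using only the counts quoted above (note the 9,268 appearances come from the 12-million-image sample, so treating them as a share of the full dataset is the report’s extrapolation, not a measured figure):

```python
# Back-of-envelope check of the Stable Diffusion numbers quoted above.
dataset_size = 600_000_000   # total images in the training dataset
top_artist_count = 9_268     # appearances of the most popular artist (Thomas Kinkade)

share = top_artist_count / dataset_size
print(f"{share:.4%}")        # on the order of 0.0015% of the dataset
```

Even the most heavily represented artist accounts for a vanishingly small slice of the training data, which is the point Castro is making.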

Or consider LaMDA, a large language model created by Google, that trained on 1.56 trillion words scraped from the Internet.

“Given the size of these models, the contribution of any single person is minuscule,” Castro concludes.

Critics also contend that generative AI systems should not be able to produce content that mimics a particular artist’s distinctive visual style without their permission. “However, once again, such a demand would require holding AI systems to a different standard than humans,” fires back Castro. “Artists can create an image in the style of another artist because copyright does not give someone exclusive rights to a style. For example, numerous artists sell Pixar-style cartoon portraits of individuals.”

And it is perfectly legal to commission someone to write an original poem in the style of Dr. Seuss or an original song in the style of Louis Armstrong. Users of generative AI systems should retain the same freedom, he says.

Legitimate IP Issues of Concern

Nonetheless there are legitimate IP issues for policymakers to consider. Castro dives into them.

Individuals who use AI to create content deserve copyright protection for their works. The US Copyright Office has developed initial guidance for registering works created by using AI tools. The Copyright Office should not grant copyright requests to an AI system itself or for works in which there is no substantial human input.

He argues that copyright protection for AI-generated content should function similarly to that of photographs wherein a machine (such as a camera) does much of the mechanical work in producing the initial image, but it is a variety of decisions by the human photographer (subject, composition, lighting, post-production edits, etc.) that shape the final result.

Likewise, individuals who use AI tools to create content do more than just click a button, such as experimenting with different prompts, making multiple variations, and editing and combining final works.

Just as it is illegal for artists to misrepresent their works as that of someone else, so too is it unlawful to use generative AI to misrepresent content as being created by another artist.

“For example, someone might enjoy creating drawings of their own original characters in the style of Bill Watterson, the cartoonist behind the popular Calvin and Hobbes comic strip, but they cannot misrepresent those drawings as having been created by Watterson himself.

“Artists can and should continue to enforce their rights in court when someone produces nearly identical work that unlawfully infringes on their copyright, whether that work was created entirely by human hands or involved the use of generative AI.”

Impersonating Individuals

Generative AI has not changed the fact that individuals should continue to enforce their publicity rights by bringing cases against those who violate their rights.

This right is especially important for celebrities, as it enables them to control how others use their likeness commercially, such as in ads or in film and TV.

Castro says, “While deepfake technology makes it easier to create content that impersonates someone else, the underlying problem itself is not new. Courts have repeatedly upheld this right, including for cases involving indirect uses of an individual’s identity.”

Generative AI also raises questions about who owns rights to certain character elements. For example, if a movie studio wants to create a sequel to a film, can it use generative AI to digitally recreate a character (including the voice and image) or does the actor own those rights? And does it matter how the film will depict the character, including whether the character might engage in activities or dialogue that could reflect negatively on the actor?

Castro thinks these types of questions will likely be settled through contracts performers sign addressing who has rights to a performer’s image, voice and more.

Moving Ahead

Castro finds that while there are many important considerations for how generative AI impacts IP rights and how policymakers can protect rightsholders, critics are wrong to claim that such models should not be allowed to train on legally accessed copyrighted content.

Moreover, restricting the training of generative AI models to only lawfully accessed content could unnecessarily limit their development.

“Instead, policymakers should offer guidance and clarity for those using these tools, focus on robust IP rights enforcement, create new legislation to combat online piracy, and expand laws to protect individuals from impersonation.”

 


AI Is Booming… But Also Burning Carbon (Fast)

NAB

AI is going to be ubiquitous in just about everything we do — but at what cost to the planet?

article here

While some commentators continue to raise red flags about the Cyberdyne Systems Skynet we are building, a more frightening and near-term concern is surely the impact that the computer processing behind artificial intelligence is having on climate change.

According to a Bloomberg report, AI uses more energy than other forms of computing, and training a single model can use more electricity than 100 US homes use in an entire year.

Google researchers found that AI made up 10% to 15% of the company’s total electricity consumption in 2021, which was 18.3 terawatt hours.

“That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta,” Bloomberg’s Josh Saul and Dina Bass report.
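The 2.3 TWh figure follows directly from the numbers above. A quick sanity check (the midpoint of the 10% to 15% range is an assumption for illustration, not a figure from the report):

```python
# Sanity check: Google's 2021 electricity use and the share attributed to AI.
google_total_twh = 18.3                  # Google's total 2021 consumption, per the report
ai_share_low, ai_share_high = 0.10, 0.15 # researchers' estimated AI share

ai_twh_low = google_total_twh * ai_share_low    # 1.83 TWh
ai_twh_high = google_total_twh * ai_share_high  # 2.745 TWh
print(ai_twh_low, ai_twh_high)
# The quoted ~2.3 TWh sits close to the midpoint of this range (12.5%).
```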

Yet the sector is growing so fast — and has such limited transparency — that no one knows exactly how much total electricity use and carbon emissions can be attributed to AI.

AI developers, including OpenAI, whose latest ChatGPT model has just hit the market, use cloud computing that relies on thousands of chips inside servers in massive data centers to train AI algorithms and analyze data to help them “learn” to perform tasks.

Emissions vary of course depending on what type of power is used to run them. A data center that draws its electricity from a coal or natural gas-fired plant will be responsible for much higher emissions than one that uses solar, wind or hydro.

The point is that no one really knows — and the major cloud providers are not playing ball. The problem is not unique to AI: data centers are a black box relative to the more transparent carbon footprint accounting being reported by the rest of the Media & Entertainment industry.

According to Bloomberg, while researchers have tallied the emissions from the creation of a single model, and some companies have provided data about their energy use, they don’t have an overall estimate for the total amount of power the technology uses.

What limited information is available has been used by researchers to estimate the CO2 emissions attributable to AI — and the numbers are alarming.

Training OpenAI’s GPT-3 took 1.287 gigawatt hours, according to a research paper published in 2021, or about as much electricity as 120 US homes consume in a year. That training generated 502 tons of carbon emissions, according to the same paper, or about as much CO2 as 110 US cars emit in a year.
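Those two comparisons can be roughly cross-checked. The per-home and per-car averages below are assumptions for illustration (approximately 10.6 MWh of electricity per US household and 4.6 tons of CO2 per typical passenger car annually), not figures from the paper:

```python
# Cross-check the GPT-3 training comparisons quoted above.
training_mwh = 1_287           # 1.287 GWh of training electricity, expressed in MWh
training_co2_tons = 502        # carbon emissions from that training run

home_mwh_per_year = 10.6       # assumed average annual US household electricity use
car_co2_tons_per_year = 4.6    # assumed annual CO2 from a typical US passenger car

homes_equivalent = training_mwh / home_mwh_per_year          # ~121 homes
cars_equivalent = training_co2_tons / car_co2_tons_per_year  # ~109 cars
print(round(homes_equivalent), round(cars_equivalent))
```

Both results land close to the paper’s “120 homes” and “110 cars,” so the comparisons are internally consistent with typical US averages.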

While training a model has a huge upfront power cost, researchers found in some cases it’s only about 40% of the power burned by the actual use of the model, with billions of requests pouring in for popular programs.

Plus, the models are getting bigger. OpenAI’s GPT-3 uses 175 billion parameters, or variables, through its training and retraining. Its predecessor used just 1.5 billion. Version 4 will be many times bigger, with a knock-on cost in compute power.

The situation is analogous to the early days of cryptocurrency where bitcoin in particular was hammered for the huge carbon waste from mining.

That negative publicity has led to change in crypto mining operations – and the same pressure could be applied to AI developers and the cloud providers that service them.

We may also conclude that using large AI models for “researching cancer cures or preserving indigenous languages is worth the electricity and emissions, but writing rejected Seinfeld scripts or finding Waldo is not,” Bloomberg suggests.

But we don’t have the information to judge this.

So where does this sit with the net-zero carbon pledges of the major cloud providers like Microsoft, Amazon and Google?

Responding to Bloomberg’s inquiry, an OpenAI spokesperson said: “We take our responsibility to stop and reverse climate change very seriously, and we think a lot about how to make the best use of our computing power. OpenAI runs on Azure, and we work closely with Microsoft’s team to improve efficiency and our footprint to run large language models.”

Bland rhetoric, with no detail on what the costs to the earth are now or exactly what the company is doing to reduce them.

Google’s response was similar and Microsoft highlighted its investment into research “to measure the energy use and carbon impact of AI while working on ways to make large systems more efficient, in both training and application.”

Ben Hertz-Shargel of energy consultancy Wood Mackenzie suggests that developers or data centers could schedule AI training for times when power is cheaper or in surplus, making their operations greener.
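One way to act on that suggestion is carbon-aware scheduling: given an hourly forecast of grid carbon intensity, pick the contiguous window with the lowest average before kicking off a training run. A minimal sketch of the idea (the forecast numbers here are invented for illustration):

```python
def best_training_window(forecast, duration):
    """Return the start index of the contiguous window of `duration` hours
    with the lowest average grid carbon intensity (gCO2 per kWh)."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration + 1):
        avg = sum(forecast[start:start + duration]) / duration
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start

# Hypothetical hourly intensity forecast: the mid-list dip might be a windy
# night or a solar peak, when power is cheap and in surplus.
hourly_gco2_per_kwh = [450, 420, 380, 210, 190, 200, 240, 400]
start = best_training_window(hourly_gco2_per_kwh, duration=3)
print(f"Start the 3-hour job at hour {start}")  # picks the low-carbon dip
```

Real deployments would pull the forecast from a grid-intensity data provider rather than a hard-coded list, but the scheduling logic is the same.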

The article identifies the computing chips used in AI as “one of the bigger mysteries” in completing the carbon counting puzzle. NVIDIA is the biggest manufacturer of GPUs and defends its record to the paper.

“Using GPUs to accelerate AI is dramatically faster and more efficient than CPUs — typically 20x more energy efficient for certain AI workloads, and up to 300x more efficient for the large language models that are essential for generative AI,” the company said in a statement.

While NVIDIA has disclosed its direct emissions and the indirect ones related to energy, according to this report it hasn't revealed all of the emissions it is indirectly responsible for. NVIDIA is not alone in failing to account for Scope 3 greenhouse gas emissions, which cover all other indirect emissions occurring in an organization's upstream and downstream activities.

When NVIDIA does share that information, researchers expect it will turn out that GPUs consume as much power as a small country.

 


Evan Shapiro: M&E Is Being Reassembled in Real Time

NAB

“Nobody knows anything.” The famous William Goldman aphorism about Hollywood can just as aptly be applied to the business brains leading the world’s biggest media and entertainment companies.

article here

The traditional certainties around consumption and distribution have been upended and no one really has a clue what new formula will work.

“We are reassembling this ecosystem in real time, but because none of even the big players, not Apple or Amazon, know exactly where it’s going, they’re not really assembling for something,” said analyst Evan Shapiro. “They’re just throwing business models at platforms.

“There’s no way Netflix thought they’d be taking ads right now. There’s no way that Facebook thought they would be peaked by now. There’s no way that Disney thought that they would surpass Netflix in total subs worldwide in less than three years.”

Shapiro, who calls himself a “Media Universe Cartographer,” was speaking at SXSW on stage with Steven Rosenbaum, head of the Sustainable Media Center.

“No one can predict the future of media — especially right now,” said Shapiro, who nonetheless tried.

“In the last couple of years, the underpinnings of the media economy have come undone,” he said. “In its place we have this free-form system of asymmetrical consumption, an unlimited supply of content on many different devices all the time, not all tethered to fundamental economics or sound business principles.”

Cable TV and the triple-play bundle with broadband and voice services was the mainstay of the TV ecosystem for decades — but not anymore.

“The entire system has become unmoored,” Shapiro said. “Everybody is trying to move into television and audio and gaming and social media simultaneously, and they’re trying to raise the same dollars and eyeballs. But… no one can figure out their own business model anymore,” he added.

“They’re hoping that when they add advertising to Connected TV, it’s going to replace the old model. They’re hoping that when they add subscription to premium ad-free streaming that it’s going to replace the old cable system. It’s not going to do that.”

Even so, Shapiro took a stab at which companies will be the eventual winners and losers.

Apple, for instance, will be a winner by investing more in content to get consumers to buy its hardware. The company is going after Spotify and is also going after gamers, he thinks.

Another winner is Google. That’s because “the fastest growing operating system on connected televisions on the face of the Earth is Google TV, the same company that controls 70% of your phones. So, Google is going to be incredibly influential for at least the next 10 years.”

He also picks Alphabet as a winner because of the continued dominance of YouTube as viewing shifts to Connected TV.

“YouTube is by far and away the biggest platform out here. So as everything moves off linear television to CTV, who do you think’s going to win this battle? Google has the largest share of all video on CTV. That duopoly they have in phones, they’re trying to recreate in TVs.”

And Netflix? Well, until the end of last year it was still a one-revenue business, unlike Alphabet, which also has a cloud business and many other revenue streams besides.

“Because they have many different elements to their business they have an opportunity to see the other side of what’s being reshaped,” he said.

Amazon will be fine, too, because “the number one fastest growing sector of the advertising economy is retail media. Amazon has grown an ad business that’s bigger than all of Paramount and all of Warner Bros. Discovery. The $37 billion in advertising last year on Amazon was predominantly because this is a retail media business.”

Winners will also cater to multiple generations of consumer.

“By the end of this decade, Generation A will be starting to come into the workforce,” Shapiro said. “So you have to think about them not just as consumers. Generation A and Generation Y are responsible for TikTok, they’re responsible for Roblox, they’re responsible for Fortnite, they’re responsible for enormous consumption shifts.”

The losers on the other hand include social media companies whose entire revenue stream has been predicated on advertising in a model that is not necessarily designed for the next turn.

“I don’t see how they all come out of it entirely whole. I think TikTok implodes under its own weight [because of regulatory issues].”

He added, “I don’t know how Spotify survives as a standalone company a year from now. I have trouble seeing how Roku [survives, since it’s another company] entirely tied to one business model. Roku have no expansion outside the United States and it’s really basically hampering their ability to grow.”

All streaming services are subject to churn. He called this the biggest issue facing the ecosystem.

“Serial churning is the new channel changing. If you’re not scratching the itch of the consumer on a day to day basis, then you’re f***ed.”