Thursday 30 March 2023

Digital tools to help reduce your firm’s carbon footprint (part 2)

TechInformed

article here

It’s clear that in the world’s quest to reach net zero, all eyes are on President Biden and the US’s landmark Inflation Reduction Act, which is pledging billions to invest in green technology, green energy and green jobs.

The UK’s reaction to Biden’s hard-fought green deal has been divided – mainly along political lines.

Ahead of the UK Government’s revised net carbon emission strategy on Thursday, UK opposition party Labour has urged ministers to deliver a growth plan of a similar nature to Biden’s – but Tory leaders don’t seem keen on some aspects of the act, with trade minister Kemi Badenoch branding the subsidies on offer to US firms as ‘protectionist’ and anti-trade.

Whatever the outcome of Thursday’s strategy, an investment in green technology and cleaner energy sources appears to be something all sides agree on.

While many deep tech ideas are still in development, this week we bring you the second of our two-part green tech digital tools deep dive, which examines existing software tools and IT services that might be worth investing in to help ‘green’ enterprises’ operations.

Green APIs

APIs are a key means by which companies can easily and quickly add a variety of software apps across their systems.

So-called Green APIs are built with the intent of advancing sustainability, environmental awareness, or specific climate action initiatives. In an increasingly connected world, they can provide the ‘glue’ that joins disparate data silos together.

Green APIs, for instance, can help monitor air and water quality, expose carbon emissions data and enable smart connections for analysis.

“APIs play a critical role by enabling the growth of distributed generation technologies such as solar by accessing rate and incentive calculators, solar resource data, to streamline quoting and sales, and by making it possible to integrate incentive and interconnection application processes,” says Heather Van Schoiack, a senior marketing manager for Clean Power Research.

Clean Power Research’s suite of APIs includes the PowerBill API, for analysing energy value, and the Clean Power Estimator, for financial analysis of solar projects.

Clean Power’s SolarAnywhere API, for instance, provides irradiance data (sunlight predictions based on geographic position) that can be integrated into applications that encourage solar alternatives.

Another Green API go-to is the Green Web Foundation, a Dutch non-profit pushing for a ‘fossil-free’ internet infrastructure from data centres to web hosting. Its API allows developers to update information about the digital infrastructure a company is using, the services it provides to others, and to see the status of providers in their own supply chain.
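For developers, a check against the Green Web Foundation’s database is a single HTTP call. The sketch below is illustrative only: the endpoint path and the `green`/`hosted_by` response fields are assumptions based on the Foundation’s public ‘greencheck’ API and should be verified against its current documentation.

```python
# Illustrative sketch: checking whether a domain is served from green hosting
# via the Green Web Foundation's "greencheck" endpoint. Endpoint path and
# response fields are assumptions; consult the Foundation's API docs.

GREENCHECK_URL = "https://api.thegreenwebfoundation.org/api/v3/greencheck/{domain}"

def build_greencheck_url(domain: str) -> str:
    """Return the request URL for a single-domain green-hosting check."""
    return GREENCHECK_URL.format(domain=domain)

def summarise(response: dict) -> str:
    """Turn an assumed greencheck JSON payload into a one-line summary."""
    if response.get("green"):
        host = response.get("hosted_by", "an unknown provider")
        return f"{response['url']} is green-hosted by {host}"
    return f"{response.get('url', 'domain')} has no verified green hosting"

# Demonstrated here with a mocked payload; a live call would use e.g.
# requests.get(build_greencheck_url("example.com")).json()
sample = {"url": "example.com", "green": True, "hosted_by": "Example Hosting BV"}
print(summarise(sample))  # → example.com is green-hosted by Example Hosting BV
```

Wiring such a check into a CI pipeline or supplier dashboard is one way the ‘glue’ role of Green APIs plays out in practice.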

Cloud computing

The easiest way for many companies to improve their carbon footprint is to move their IT systems to the cloud. “It’s the equivalent of joining a carpool or using public transport, rather than using their own vehicles,” describes Ashish Arora, VP, Cloud and Infrastructure Services at Indian IT consultancy HCLTech. “Having your own servers on-premises requires hardware, facilities equipped with power supplies and cooling units to avoid overheating.”

AWS estimates that moving to the cloud can reduce carbon emissions by as much as 88% compared with on-premises systems that are inefficiently utilised and need constant cooling.

Analyst IDC estimates that cloud computing could eliminate a billion metric tons of CO2 emissions by the end of 2024. This is because cloud-based services are hosted at much larger data centres which use newer, more energy-efficient hardware and have carbon reduction measures in place. Cloud providers also use high, and increasing, proportions of emission-free energy.

Google has been vocal about the progress it has made, claiming to be the first organisation of its size to operate with 100% renewable energy. Google’s data centres run on wind farms and solar panels, and AI/ML are used to adjust cooling technologies to ensure servers are protected, but also that energy is not wasted.

By 2030 Microsoft aims to be carbon negative, and by 2050 it has pledged to remove from the environment all the carbon the company has emitted, either directly or through electricity consumption, since it was founded in 1975. Cloud is one of the ways by which such progress will be made.

AWS data centres in Virginia, meanwhile, account for almost three quarters of the world’s internet traffic, and Amazon says all its facilities will be powered by renewable sources by 2025. It has also pledged to reach net-zero carbon across its entire business by 2040.

However, there are concerns that companies might be downgrading their commitments because they feel that moving to the cloud ticks their ‘green’ box: a digital leadership report found that surprisingly few leaders in the UK (22%) were electing to use tech to measure their carbon footprint.

“Simply moving from an on-premises virtualised infrastructure to a [cloud] vendor’s hypervisor will not accomplish this goal,” says W. Curtis Preston, chief technical evangelist at data-protection-as-a-service provider Druva.

“While you may move the problem of power acquisition to a different entity, you don’t remove it altogether.”

An alternative, he says, is for companies to “refactor on-premises applications” to make use of on-demand infrastructure (on-demand VMs, containers and serverless applications, for instance), and reduce overall power consumption, while also reducing overall IT spend: “If enough organisations did this, it could make a real dent in the power crisis,” Preston adds.

Tools like GreenOps from Cycloid also help organisations improve the sustainability of their cloud infrastructure by automating the process of turning servers on and off when not in use.
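Under the hood, this kind of scheduling reduces to a simple rule evaluated against the clock. The following is a hypothetical sketch (not Cycloid’s actual implementation) of the decision at the heart of such a tool:

```python
# Hypothetical sketch of a GreenOps-style scheduling rule: run non-production
# servers only on weekdays during working hours. A real tool would evaluate
# this per server and call the cloud provider's start/stop API accordingly.
from datetime import datetime

def should_run(now: datetime, start_hour: int = 8, stop_hour: int = 19) -> bool:
    """Decide whether a dev/test server should be running at `now`."""
    is_weekday = now.weekday() < 5            # Mon=0 .. Fri=4
    in_hours = start_hour <= now.hour < stop_hour
    return is_weekday and in_hours

print(should_run(datetime(2023, 3, 29, 10)))  # Wednesday 10:00 → True
print(should_run(datetime(2023, 4, 1, 10)))   # Saturday → False
```

Even this crude weekday/working-hours rule would idle a server for roughly two-thirds of each week, which is where the power (and cost) savings come from.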

 

Digital waste monitors

According to the Global E-waste Monitor 2020, 54 million metric tonnes of e-waste were produced in 2019, and the figure is projected to reach 74.7 million tonnes by 2030. Governments are cracking down on illegal waste exports through stricter background checks and compulsory digital waste tracking.

Legislation in the UK’s Environmental Act 2021 requires that firms record information from the point waste is produced to the stage it is disposed of, recycled and reused.

Data analytics platform Topolytics is one of the firms that has won funding from the UK government to develop WasteMap, a technology that helps firms track manufacturing waste and identifies assets that can be extracted and returned to production.

Its research found that most manufacturers lack visibility into their waste material once it enters the downstream chain, but 90% said that knowing more about what happens to their waste is a high priority.

E-waste is also the fastest growing waste stream in the EU. Two new EU directives – the Corporate Sustainability Reporting Directive and the Corporate Sustainability Due Diligence Directive – are due to come into effect between 2024 and 2026 and will compel thousands of companies with an EU presence – including US and UK multinationals – to provide detailed information about how they address environmental (and human rights) risks across their entire value chain.

Circular technology lifecycle management offers a blueprint for aligning tech strategies to some of the critical points of the new regulatory framework but requires a different way of thinking about devices – as something to use and reuse, not own and discard.

Not having a strategy that looks at the entire lifecycle of a device – from procurement to information technology asset disposal (ITAD) – is simply no longer an option.

“It is increasingly urgent that enterprises, public sector organisations and governments consider sustainable alternatives that extend device life,” says Russ Ernst, CTO, Blancco Technology Group, a provider of secure sustainable data erasure and mobile lifecycle solutions.

The staggering e-waste mountain stems from “mishandling IT equipment and devices that have reached end-of-life,” Ernst says.

Organisations are also advised to look for ISO-certified processes to guarantee compliance with international laws, as well as best-practice handling of data and environmental procedures.

Low code

The overhaul of processes and systems that have been in place for many years can be a daunting prospect, one that many organisations believe will be far too complicated and costly to manage. But it doesn’t have to be.

“For too long companies have thrown more metal at IT problems, instead what needs to happen is code optimisation,” says Goldfarb. “Tech teams need to write better code and run it on efficient servers.”

By leveraging Platform-as-a-Service (PaaS) tools such as low-code, businesses needn’t take a rip and replace approach to legacy systems, she argues.

Instead, existing systems can be updated and built upon using a building block approach that allows for iterative and pragmatic development using a host of Intelligent Automation (IA) tools like Artificial Intelligence (AI), Robotic Process Automation (RPA) and Machine Learning (ML) to name a few.

Low code, at its widest, means software tools that enable employees to develop processes using drag-and-drop interfaces. More narrowly, low-code application platforms enable enterprises to develop processes and applications between three and ten times faster than with traditional approaches.

Such an approach can help companies achieve greater efficiency toward ‘green goals’ while retaining legacy equipment.

“There are some very simple and direct emissions benefits that are easy to calculate,” says Richard Farrell, chief innovation officer at IT firm Netcall. “For example, in healthcare, the use of a low-code platform provides patient information digitally, reducing the need for printing and postage while continuing to use legacy Patient Administration Systems (PAS).”

Netcall has partnered with low-code specialist DI Blue to develop the my.FirstClimate app for First Climate which supports organisations in achieving their climate objectives.

“Through the app, customers can calculate their carbon footprint, which in turn helps them to reduce their future emissions and offset any remaining emissions,” says Farrell.

Low-code platforms are typically hosted in the cloud. Connectivity with company systems is ideally achieved via APIs, although other techniques, including file transfers and the use of RPA, are sometimes required for less open or older systems.

APIs can be made available from some platforms (including Netcall Liberty Create), so that authorised applications in an organisation can interact with low-code data and processes.

Digital Twins

Digital Twins – real time digital replicas of real-world entities and processes – establish an environment for analysis to answer questions, suggest alterations and help identify the optimal decision – all with the objective of improving sustainability.

One of the key benefits is that the right decision can be identified up to 80% faster than with more traditional methods, according to Slingshot Simulations.

The UK startup is using its digital twin specialism to help organisations like The Rainforest Trust protect endangered natural environments globally, including private areas, national parks, community forests and indigenous property.

These changes can be tested through digital twin technology to see whether they will make a difference to the natural environment in question. If not, a different strategy can be introduced.

Pete Mills, Slingshot’s development director, explains: “We create a virtual digital copy of what exists in the physical world to help better plan how to tap into resources and reduce conflict without causing large-scale destruction.”

The data is then shared with local communities and stakeholders – “the more eyeballs there are on these hotspots the more pressure comes to bear to effect change and the greater the incentive is to feed in more data to improve the digital twin,” Mills says.

One of the biggest open development platforms on which to build enterprise level digital twins for industrial and scientific use is the Nvidia Omniverse. California headquartered electronics designer Cadence, for example, allows users of its software in the Omniverse to create digital twins of data centres.

“This enables teams to plan, test, and validate every aspect before the physical data centre is built,” explains senior product manager Mark Fenton.

“Our software enables engineers to simulate data centre cooling design changes and conduct ‘what-if’ analysis ultimately reducing the need to build new facilities until absolutely necessary.”

These 3D models are connected to real-time data and accurately represent multiple kinds of real-world physics, including mechanical, thermal and fluid dynamics.

Once a data centre is fully constructed, the sensors, control system, and telemetry can be connected to the digital twin inside the Omniverse, enabling real-time monitoring of operations.

Engineers can then simulate power peaking or cooling system failures, optimise layout, and validate software and component upgrades before deployment.

However, there’s a ‘dark’ side to all this generated data that must be addressed. With more data being produced than ever before, estimates suggest that 80% remains ‘dark’ – data that is not used to derive insights or inform decision making.

Worse, the energy required to simply store dark data results in millions of tons of CO2 emissions a year. Slingshot – which has tools to model dark data – estimates that up to 52% of all information an organisation produces and stores is dark.

So, for sustainability’s sake as well as for operational and security reasons, it’s worth regularly assessing which data needs to be kept for the short, medium and long term and which data can be marked for (dare we say it) deletion.

Blockchain

Organisations traditionally track supplier performance using paper records, auditing and a degree of trust. Not only is this a labour-intensive process, but there are also inevitable gaps in the chain, and the data can be easy to falsify.

With information often unconnected across suppliers, obtaining a comprehensive, holistic and transparent picture is a challenge. Blockchain and distributed ledger technologies promise to address the lack of accountability in the supply chain.

“Blockchain can build trust in a system by providing traceability and auditability,” says blockchain and energy entrepreneur Simone Accornero, also CEO at FlexiDAO. “During the last year, traction has increased dramatically in regard to blockchain sustainability initiatives.”

Blockchain technology enables all participants in a brand’s supply chain to record information about their activities in a single, chronological and unchangeable record.

Blocks of data are stored in a digital chain within a distributed ledger. Every time a new transaction occurs on the blockchain, a record is added to every participant’s ‘ledger’ in a way that makes it near impossible to change, hack, or cheat the system.
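That tamper-evidence comes from each block storing a hash of its predecessor, so altering any earlier record breaks every hash that follows. A minimal sketch of the idea (omitting the consensus, signatures and networking that real blockchains add):

```python
# Minimal hash-chain sketch of an append-only supply chain ledger.
# Each block records the SHA-256 hash of the previous block, so any
# tampering upstream invalidates every later link.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Append a record, linking it to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list) -> bool:
    """Recompute every link; returns False if any block was altered."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, {"supplier": "A", "co2_kg": 120})
append_block(chain, {"supplier": "B", "co2_kg": 75})
print(verify(chain))                       # → True
chain[0]["record"]["co2_kg"] = 5           # falsify an early record...
print(verify(chain))                       # ...and verification fails: False
```

This is why falsifying supply chain data on a blockchain is so hard: a participant would have to rewrite not just their own record but every record that came after it, on every copy of the ledger.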

FlexiDAO claims its blockchain enables companies and governments to operate on carbon-free energy by certifying and tracing their electricity and its true carbon content around the clock.

“This is possible through a digital process called ‘tokenisation of electricity’, through which units of electricity become digital goods, assets, or environmental commodities,” explains Accornero.

“This permits automatic certificate generation (timestamping), as well as transfers and ownership-tracking based on cryptographic proof.”

Digital time-stamped energy certificates can only be cancelled once, preventing double counting of renewable energy. Auditors can trace electricity consumed in the supply chain back to any stage of its life cycle via blockchain.

“Ultimately, when requested, we can tokenize this electricity produced by a specific renewable asset on a specific grid at a specific time, and accurately match this with a company’s consumption using blockchain as a digital notary.”
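The double-counting safeguard described above boils down to a one-time cancellation rule: a certificate, once claimed against consumption, cannot be claimed again. A toy sketch with invented names (not FlexiDAO’s actual API):

```python
# Toy sketch of one-time certificate cancellation to prevent double counting
# of renewable energy. Class and method names are invented for illustration.
class CertificateRegistry:
    def __init__(self) -> None:
        # certificate id -> company that claimed (cancelled) it
        self._cancelled: dict = {}

    def cancel(self, cert_id: str, company: str) -> bool:
        """Claim a certificate against consumption; False if already claimed."""
        if cert_id in self._cancelled:
            return False
        self._cancelled[cert_id] = company
        return True

registry = CertificateRegistry()
cert = "MWh-2023-03-30T14:00-solar-001"        # one timestamped MWh certificate
print(registry.cancel(cert, "AcmeCo"))          # → True: first claim succeeds
print(registry.cancel(cert, "OtherCo"))         # → False: no double counting
```

On a blockchain, this registry is replicated and append-only, so no single participant can quietly reset a cancelled certificate and re-sell the same megawatt-hour.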

FlexiDAO counts energy buyers like Google, Microsoft, and Vodafone using its system as well as energy sellers like Acciona and Fortum.

Blockchain can also be used to track carbon offsetting commitments. Clothing brand Tentree plants 10 trees for every item sold and tracks this on the blockchain. Its partners input data such as GPS coordinates, site images and planting details, along with timestamped readings from ground-based sensors, all of which are permanently recorded on the blockchain.

Meanwhile, web3 technologists Trst01 and Rubix have joined forces in India to offer companies operating there a blockchain-authenticated ‘plastics credit’ system. Plastic credits are described by Trst01 as measurable, verifiable and transferable units representing a specific quantity of plastic collected from the environment or recycled.

This is intended to help companies authenticate their conformity with national recycling standards.

 


Wednesday 29 March 2023

Virtual Production Might Be Part of the Process, But At This Point, Everybody Needs Education

NAB

Virtual production is on the rise and most filmmakers are excited about using it, but many express dissatisfaction with the current tools for doing it.

article here

These are some of the insights revealed in a fresh survey conducted by Chicago-based film-tech outfit Showrunner, which claims the report is the first of its kind in the industry.

Nearly 800 filmmakers were surveyed, along with 72 virtual production facility operators for the State of Virtual Production report.

The first set of stats that jumps out is about the industry’s level of enthusiasm.

Among the general population of filmmakers surveyed, about half are “excited” about the rise of this new technology, with 22.3% “skeptical.”

Those skeptics tend to fade away once they’ve got to grips with the technology.

“The real kicker, though, is when we filter for those professionals with three or more years of experience with the new technology. At this point, the more experience folks have with a new technology correlates with increased excitement about it,” says Showrunner’s CEO, Shane Snow. “It’s a pretty clear indicator that the tech is here to stay.”

When asked, more than 75% of filmmakers said they anticipated doing at least some work using virtual production technology this year.

And more than three-quarters of studio owners and operators indicated that they would be doing more VP work this year than last. Fifty-seven percent said they anticipated doing “a lot more.”

About one-third of virtual production work currently being produced is for car/auto shoots. Film and TV projects each make up nearly 20%, with commercials, music videos and live events making up the rest.

The survey asked a variety of questions about specific tools and techniques in virtual production, with the feedback indicating that work needs to be done before everyone in the industry might fully embrace the new tech.

“The stats show just how unsure folks are about the quality and technical difficulty of virtual production,” said Snow. “This says to me that some filmmakers are going to still play the ‘wait and see’ game until other, more bullish early adopters figure out these answers definitively.”

Nearly 80% of virtual production studio owners anticipate booking more work this year than last year, per the survey.

“Virtual production is one of the fastest-growing trends in filmmaking right now. But as the data shows, the industry needs to make sure the hype translates into tools and education that make filmmakers confident that they can actually do it.”

 


Where the Metaverse Meets the Real World

NAB

The metaverse. Remember that? You can’t escape the feeling that, in reality, it isn’t anything life- or business-changing, at least not yet.

article here

That said, aspects of what we might call the metaverse are already here and businesses are basing investment decisions on the future of the internet.

Investors for example bought $650 million of virtual real estate last year, we learn from a two-part “Future of the Metaverse” podcast hosted by chip maker ARM.

“They’re buying stuff that doesn’t exist,” says Matthew Griffin, the founder and CEO of the World Futures Forum. “We’re talking about a thing that’s only recently got a definition that technically doesn’t actually exist but that actually has real impact in the real world.”

The other participant on the podcast is ARM’s director of innovation, Remy Pottier. He points to examples of the metaverse being built today by companies including Autodesk, NVIDIA and BMW, which are creating digital twins to train robots and educate workers.

Other examples from the gaming world are Roblox, Unity and Epic Games. “These experience creation platforms and 3D engines are generating billions in revenues generally just from the platform itself,” says Griffin.

He also suggests that the metaverse is a new marketing channel where brands can access existing or potentially new customers.

“It makes a lot of sense for brands to switch part of their digital marketing budget to a metaverse-related budget and test virtual products before they go and build them in the physical world.”

For example, drugs designed in virtual reality landscapes that are then eventually manufactured in the real world, but occurring at speeds that were unimaginable even just a couple of years ago.

“For consumers, it means that a lot of live or play experiences will switch to the metaverse,” Griffin posits. “This means potentially new immersive, personalized experiences. Wherever you go, it means 3D virtual options open up for film, television and music that we will be able to consume.”

Physical hardware such as VR headgear remains a primary barrier to consumer experiences in the metaverse, but Griffin thinks that “increasingly in the next couple of years, from a gadgets perspective, we could actually see slipping into the metaverse as easy as putting on a pair of sunglasses.”

The next step is easier user control via gesture, voice or other forms of haptic interface.

Pottier outlines four categories of use case driving metaverse development.

The first is about transcending time and place — digital teleportation.  “It’s about adding a way to travel the world in a digital way without moving from your chair.”

It’s not just about “tourism,” but business, too. He points to a next generation of immersive video conferencing that enables one to “teleport.”

“It can be training, education, product development… to co-create and co-develop product in the metaverse from sites that will be in India, or in China, or Europe, and people can meet in this collaboration room, look at the device, touch it and decide whether or not it’s the right device they want to create.”

Another use case is dubbed “window into the unseen.” This is a realistic simulation of what’s happening inside, for instance, a running motor or an internal view of organs for surgeons.

Number three is alternate reality. One example is Pokémon Go, where digital worlds are overlaid on the real in real time.

A fourth is expanding human capabilities. “We are weak today and very limited when you think about the metaverse capability, which is infinite. Just think about real time translation of everything you do. You will be able to speak any language without even having to think about it. It could augment your five senses, from everything to the digital world or become your super digital assistant, that basically knows everything you did.  The digital assistant just knows, because it has been able to store everything.”

In the second part of the podcast, Pottier and Griffin discuss how closely the metaverse could resemble sci-fi movies.

Griffin argues that a prerequisite to any form of successful metaverse is a set of laws and legislation to build it on.

“For example, if I’m going to be building my metaverse and I choose a particular platform to build it, and bear in mind that can incorporate an entire city, as some countries are trying to do, what happens when the company hosting that platform goes bust?

“We’ve seen a lot of virtual reality trademarks already being registered by third parties that have nothing to do with the original trademarks.

“What happens when somebody else goes onto that same platform, into my virtual world, and then starts building their own virtual reality world in my world? Where do we end up with this kind of this Disney multiverse madness?”

He goes on, “How do we actually audit what’s going on in the metaverse? Because we’re doing all these different things, we have no idea how to keep track of them, how to monitor them, how to report on them. Adidas, for example, has been selling NFTs [but] how do you actually report on that?”

The pair then went on to discuss more far-fetched concepts such as the blending of the virtual with the real, a familiar concept from trippy sci-fi classics like Total Recall and The Matrix.

The gist of the discussion is that many of the elements that could lead to such fictional scenarios are already science fact.

Pottier says, “To actually get into The Matrix scenario, I think first we need to actually already agree to live in The Matrix in a virtual actual reality, at least part of, maybe most of our time. It means that we are already living in some kind of Ready Player One kind of world, and we already agreed to do that.”

Another idea: most people are used to the idea of a brain machine interface, like a skull cap that reads our brain signals and converts it into text or images.

“Three years ago, we managed to prove that you can actually use artificial intelligence and brain machine interfaces to push information into people’s heads,” says Pottier. “So in The Matrix, when Trinity goes up to the Huey and says, I need to learn how to fly a Huey now and all of a sudden the knowledge is uploaded to her brain, and she gets in and then flies Neo out over the skyscrapers and the skyline. We’ve already done that.

“What scientists have figured out in the labs is how to upload or transmit knowledge to your brain using technology. So when we actually have a look at that Matrix, we are way beyond that already.

“We’ve got a neuroprosthetic chip used in Alzheimer’s treatment that is able to read your biological brain signals and convert them into ones and zeros. For Alzheimer’s patients, this improves their memory retention by 30. But if I can convert your brain signals into ones and zeros and store it on a computer chip in your head as a memory, isn’t that memory downloading? And then couldn’t I take those ones and zeros and push them into the cloud?”

 


Making the Fame Monster of “Swarm”

NAB

The first series from Donald Glover following the conclusion of Atlanta, Swarm obviously aims to provoke — or, in a more on-theme metaphor, pack some sting.

article here

Glover’s new show is designed to make headlines, proclaims Alison Herman at The Ringer. Most critics agree with her that, while packing more punch than most, the series has so much demanding our attention that it ultimately lacks focus.

The pop star character and her fan entourage at the center of the seven-episode limited series (each instalment runs around 30 minutes) are figuratively, if not quite literally, intended to be Beyoncé and her Beyhive.

Before each episode, a riff on a standard disclaimer declares, “This is not a work of fiction. Any similarity to actual persons, living or dead, or actual events, is intentional.”

The character, Ni’Jah, is a musician whose fans call her “queen” and “goddess,” and who surprise-drops visual albums that take over the internet. “[She] doesn’t resemble Beyoncé,” Vulture’s Roxana Hadadi finds. “She is Beyoncé, and Swarm has no interest in pretending otherwise.”

The series investigates stardom — or, rather, “stan-dom” — the obsessive nature of fans and celebrity cults. Swarm is Glover’s first project under his lucrative deal with Amazon Prime Video and is co-created with Janine Nabers.

They say they drew inspiration from real events that occurred between 2016 and 2018, which does happen to include the release of Beyoncé’s 2016 visual album Lemonade and the #WhoBitBeyonce internet debate.

The show is, in some ways, its predecessor’s inverse, observes Herman. “Atlanta, too, was about music and mega-fame, but its point of view belonged to the performer. Swarm switches to that of an obsessed ultra-fan: Dre, who’s been part of the Beyhive — sorry, Ni’Jah’s ‘Swarm’ — since she was a teenager.”

Asked by Variety’s Angelique Jackson just how far they pushed the truth of these events — and whether they ever worried about how far Amazon would let them go — Nabers said: “Everything is legally combed through. If we pushed it, we pushed it to the very, very, very edge, but it’s legal and we’re proud of that.”

It is in fact Dre, played by Dominique Fishback, who is the series’ protagonist, and by the end of the first episode she is revealed to be more than a little deranged.

She goes on a killing spree in honor of her dead best friend and to protect, as she sees it, Ni’Jah.

Glover explained to Jackson that the concept of a Black woman serial killer was born from a tweet he read.

“I remember them saying like, ‘Why are we always lawyers and, like, best friends? We can be murderers, too.’ And I was like, ‘That is true,’” Glover said.

Nabers follows this thought up with Ben Travers at IndieWire, referencing how Dahmer recently became a huge Netflix hit.

“I think as Americans, we’re so conditioned to seeing white men be angry. We’re giving them that space for violence on film and TV.”

She added: “Our writers’ room was completely Black. All our directors are Black, [and] most of our producers are Black.”

In imagining what it would look like if the serial killer subgenre focused on a Black woman instead of a white man, the terminology they used was “alien.”

“[Dre] is an alien in her own world,” Nabers told Selome Hailu at Variety. “If you look at the pilot, when she gets to Khalid’s house, there’s aliens on TV. That’s a through line with her throughout the series. We looked to [Michael Haneke’s] The Piano Teacher for inspiration. Donald introduced that movie to me, and it blew my mind. It centers around a woman who has a very everyday way of living her life on the surface, and then when you peel back the layers of her complicated psychology, you unearth a completely different type of human that is very alien-feeling.

“But me being from Houston and Donald being from Atlanta, we wanted to filter it through a Southern, Black female perspective. It is a little bit like a sister Atlanta when you look at the weird family relationships.”

The show’s casting is one not-so-subtle way of grabbing headlines. Paris Jackson, Michael Jackson’s daughter, plays a character who presents as white but calls herself Black because she has one Black grandparent. Casting director Carmen Cuba apparently pitched Paris Jackson.

“We were like, ‘Exactly. That’s exactly what we’re talking about,’” Nabers told Hailu. “I’m a Jewish woman, she identifies as Jewish, so we bonded over that. She really just owned this character of a light-passing biracial woman who is really intent on letting everyone know about her Blackness.”

Chloe Bailey plays Dre’s sister and a protégé of Queen Bey herself, increasing Swarm’s connection to its all-but-explicit subject.

Episode four guest stars Billie Eilish, who makes her acting debut on the show as the leader of a women’s cult — an intentional parallel to her role as a pop star.

Critics largely give the show a thumbs up for its ambition and subversive qualities.

“That Swarm is only intermittently successful doesn’t make it any easier to look away from the screen,” says Mike Hale in The New York Times.

He astutely observes that Swarm inhabits the space between horror and comedy where Atlanta often thrived.

Hale adds, “It’s not hard to understand why more and more filmmakers are choosing the horror genre for stories set in contemporary America, particularly those involving the lives of people outside the white-male protective bubble.”

“Think the Coen Brothers meets Atlanta meets Carrie, with some Basic Instinct and Perfect Blue thrown in there too,” writes Pitchfork’s Alphonse Pierre. “Celebrities are worshiped — and they often turn a blind eye to their obsessed fans’ worst behavior while milking their fanaticism for every last dollar.”

It also has some of the stylistic trademarks of Atlanta, which, as with that show, can make it uneven. Like Atlanta’s mockumentary episode, for instance, one episode of Swarm is done in true-crime documentary style.

“Swarm needs much more clarity on what it wants to say about fandom in general and the specific fan at its center,” finds Herman in The Ringer. “Violent, vicious, and extremely online, Swarm obviously aims to provoke. Once the buzz dies down, though, there’s not much substance to sustain the hype.”

Vulture’s Hadadi says, “Swarm feels boldest when it wonders when person-to-person devotion becomes abstract glorification, and what inner mechanics inspire someone to give themselves over to another.”

“Thankfully, as the series progresses, it reveals itself to be much more than a stylized parody centered around what many might consider obvious internet bait,” writes Kyndall Cunningham of The Daily Beast. “Beneath the Beyoncé of it all, Swarm is ultimately a story about grief and isolation.”

Hale is particularly critical, believing that Swarm doesn’t work through or make strong dramatic use of all its ideas and “ends in a formless, non-sequiturish manner. It feels as if no one really knew where they wanted to take things,” he says.

“In the balance of the season, the viscous, seductive ambience and dream-logic storytelling mostly fade out, replaced by high-concept, tonally garish episodes that hold your attention but stand alone like neon billboards, adding little to our understanding of Dre beyond the facts of her back story, doled out in typical streaming-series style.”

Nabers seems to defend their approach, saying that they deliberately steered clear of definitive messaging.

“I don’t think that, as a brand, Donald and I believe in a message,” she commented during a Q&A following the film’s premiere at SXSW, as Variety’s Hailu reported in a separate article. “People can interpret it the way that they want to. We hope it inspires people in some way to create weird punk shit, or to talk…”

 


How Hollywood Is Handling the Climate Crisis

NAB

Disaster movies like 2012, Greenland and Don’t Look Up aside, Hollywood has barely touched on the existential crisis that has been facing the planet for decades.

Perhaps climate change stories don’t sell, although the acclaim greeting Extrapolations suggests otherwise.

article here

The Apple TV+ series depicts children struggling with a lethal condition called “summer heart,” wildfire smoke semi-permanently blotting out the sun, and people wading into churches to worship. According to Sammy Roth at the Los Angeles Times, it proves that “a haunting, rage-inducing, totally necessary series about the climate dangers on the horizon” is exactly what we need.

The subject matter attracted an all-star cast, too, including Marion Cotillard, David Schwimmer, Edward Norton, Meryl Streep and Forest Whitaker.

“We need more climate stories. We need more diverse climate stories. And there’s tons of climate people who are willing to work with folks in Hollywood to get the stories right,” says climate policy expert and advocate Leah Stokes.

Stokes publishes the environmentally themed podcast, A Matter of Degrees, with Katharine Wilkinson, and was interviewed by J. Clara Chan at The Hollywood Reporter.

“The vast majority of Americans think that climate change is real — it’s happening now,” Stokes says. “Deniers are maybe 10% of the population. Our show is really for folks who want to go deeper on the climate issue and are concerned about it, which is the vast majority of American people, and we want to get into the details in an accessible way that people can understand.”

Communicating about practical changes we can all make shouldn’t be talked about as a “sacrifice,” she argues.

“So much of the branding, from those who don’t want us to transition off of fossil fuels, has been painting what we’re doing as being sacrifice. I have an EV, I have solar on my roof, I have two heat pumps — one for my water, one for heating and cooling my home. I have all these things and guess what? I can still take a hot shower; I can still drive around. I can still do all the things that I could do with fossil fuels,” she says.

“That’s when we’re going to win, when people really understand that, actually, it’s just better to not poison myself while I cook myself lunch by combusting gas in my house. And it’s just better to drive an EV because it’s cheaper and I don’t have to worry about high [gas prices].”

Hollywood can do more on the production side, too, for example by electrifying sets to move away from diesel generators. Federal tax incentives can be tapped, for instance, to recoup 30% of the cost of solar and batteries.


Tuesday 28 March 2023

Should Generative AI Be Held to the Same Copyright Laws as Human Creators?

NAB

Anyone suggesting generative AI systems are unfairly exploiting the works of creators is wrong, says Daniel Castro, director of the Center For Data Innovation.

article here

He argues that generative AI systems should not be exempt from complying with intellectual property (IP) laws, but neither should they be held to a higher standard than human creators.

Castro’s report refutes the arguments made about how generative AI is unfair to creators and also acknowledges that there are legitimate IP rights at stake.

Training AI Models

The biggest debate when it comes to copyright is whether generative AI systems should be allowed to train their models on text, audio, images, and videos that are legally accessible to Internet users but are also protected by copyright.

Some creators argue that it is unfair for developers to train their AI systems on content they have posted on the Internet without their consent, credit, or compensation.

Castro says that people do not have the right to use copyrighted content any way they want just because they can legally access it on the Internet. But that does not mean they cannot do anything with such content: search engines, for example, can legally crawl websites without violating copyright laws.

“While it will ultimately be up to the courts to decide whether a particular use of generative AI infringes on copyright, there is precedent for them to find most uses to be lawful and not in violation of rightsholders’ exclusive rights.”

Is training AI systems on copyrighted content just theft? Online piracy is clearly theft, says Castro, but seeking inspiration and learning from others is not.

“In fact, all creative works are shaped by past works, as creators do not exist in a vacuum. Calling this process theft is clearly inaccurate when applied to the way humans observe and learn, and it is equally inaccurate to describe training a generative AI system.”

Is it wrong to train AI systems on copyrighted content without first obtaining affirmative consent from the copyright holder?

According to Castro, copyright owners have the right to decide whether to display or perform their works publicly. But if they choose to display their work in public, others can use their works in certain ways without their permission. For example, photographers can take pictures of sculptures or graffiti in public places even when those works are protected by copyright.

“There is no intrinsic rationale for why users of generative AI systems would need to obtain permission to train on copyrighted content they have legal access to,” he says. “Learning from legally accessed works does not violate a copyright owner’s exclusive reproduction and distribution rights. Unless human creators will be required to obtain permission before they can study another person’s work, this requirement should not be applied to AI.”

Critics of generative AI are also likely to overestimate individual contributions. According to figures in the report, Stable Diffusion trained on a dataset of 600 million images. In a sample of 12 million of the most “aesthetically attractive” images (which presumably skew more toward works of art than random images from the Internet), the most popular artist (Thomas Kinkade) appeared 9,268 times. Put differently, the most popular artist likely represented only 0.0015% of all images in the dataset.
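The report’s arithmetic checks out. A quick back-of-the-envelope calculation using only the figures quoted above (not any official dataset statistics) reproduces the share:

```python
# Share of Stable Diffusion's training data attributed to the single
# most frequent artist, per the figures quoted in Castro's report.
dataset_size = 600_000_000   # images in the training dataset
kinkade_count = 9_268        # appearances of the most popular artist

share = kinkade_count / dataset_size
print(f"{share:.4%}")  # roughly 0.0015% of all images
```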

Or consider LaMDA, a large language model created by Google, that trained on 1.56 trillion words scraped from the Internet.

“Given the size of these models, the contribution of any single person is minuscule,” Castro concludes.

Critics also contend that generative AI systems should not be able to produce content that mimics a particular artist’s distinctive visual style without their permission. “However, once again, such a demand would require holding AI systems to a different standard than humans,” fires back Castro. “Artists can create an image in the style of another artist because copyright does not give someone exclusive rights to a style. For example, numerous artists sell Pixar-style cartoon portraits of individuals.”

And it is perfectly legal to commission someone to write an original poem in the style of Dr. Seuss or an original song in the style of Louis Armstrong. Users of generative AI systems should retain the same freedom, he says.

Legitimate IP Issues of Concern

Nonetheless there are legitimate IP issues for policymakers to consider. Castro dives into them.

Individuals who use AI to create content deserve copyright protection for their works. The US Copyright Office has developed initial guidance for registering works created by using AI tools. The Copyright Office should not grant copyright requests to an AI system itself or for works in which there is no substantial human input.

He argues that copyright protection for AI-generated content should function similarly to that of photographs wherein a machine (such as a camera) does much of the mechanical work in producing the initial image, but it is a variety of decisions by the human photographer (subject, composition, lighting, post-production edits, etc.) that shape the final result.

Likewise, individuals who use AI tools to create content do more than just click a button, such as experimenting with different prompts, making multiple variations, and editing and combining final works.

Just as it is illegal for artists to misrepresent their works as that of someone else, so too is it unlawful to use generative AI to misrepresent content as being created by another artist.

“For example, someone might enjoy creating drawings of their own original characters in the style of Bill Watterson, the cartoonist behind the popular Calvin and Hobbes comic strip, but they cannot misrepresent those drawings as having been created by Watterson himself.

“Artists can and should continue to enforce their rights in court when someone produces nearly identical work that unlawfully infringes on their copyright, whether that work was created entirely by human hands or involved the use of generative AI.”

Impersonating Individuals

Generative AI has not changed the fact that individuals should continue to enforce their publicity rights by bringing cases against those who violate their rights.

This right is especially important for celebrities, as it enables them to control how others use their likeness commercially, such as in ads or in film and TV.

Castro says, “While deepfake technology makes it easier to create content that impersonates someone else, the underlying problem itself is not new. Courts have repeatedly upheld this right, including for cases involving indirect uses of an individual’s identity.”

Generative AI also raises questions about who owns rights to certain character elements. For example, if a movie studio wants to create a sequel to a film, can it use generative AI to digitally recreate a character (including the voice and image) or does the actor own those rights? And does it matter how the film will depict the character, including whether the character might engage in activities or dialogue that could reflect negatively on the actor?

Castro thinks these types of questions will likely be settled through contracts performers sign addressing who has rights to a performer’s image, voice and more.

Moving Ahead

Castro finds that while there are many important considerations for how generative AI impacts IP rights and how policymakers can protect rightsholders, critics are wrong to claim that such models should not be allowed to train on legally accessed copyrighted content.

Moreover, imposing restrictions on training generative AI models on lawfully accessed content could unnecessarily limit the technology’s development.

“Instead, policymakers should offer guidance and clarity for those using these tools, focus on robust IP rights enforcement, create new legislation to combat online piracy, and expand laws to protect individuals from impersonation.”

 


AI Is Booming… But Also Burning Carbon (Fast)

NAB

AI is going to be ubiquitous in just about everything we do — but at what cost to the planet?

article here

While some commentators continue to raise red flags about the Cyberdyne Systems’ Skynet we are building, a more frightening and near-term concern is surely the impact that the computer processing behind artificial intelligence is having on climate change.

According to a Bloomberg report, AI uses more energy than other forms of computing, and training a single model can use more electricity than 100 US homes use in an entire year.

Google researchers found that AI made up 10% to 15% of the company’s total electricity consumption in 2021, which was 18.3 terawatt hours.

“That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta,” Bloomberg’s Josh Saul and Dina Bass report.
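That 2.3 terawatt-hour figure follows from the numbers Bloomberg cites. Taking the 10% to 15% range and the 18.3 TWh total at face value, the implied range is:

```python
# Implied electricity use of Google's AI workloads, from the figures
# in the Bloomberg report: 10-15% of an 18.3 TWh company-wide total.
total_twh = 18.3
low, high = 0.10 * total_twh, 0.15 * total_twh
midpoint = (low + high) / 2

print(f"{low:.2f}-{high:.2f} TWh, midpoint ~{midpoint:.1f} TWh")
```

The midpoint of roughly 2.3 TWh matches the figure in the article.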

Yet the sector is growing so fast — and has such limited transparency — that no one knows exactly how much total electricity use and carbon emissions can be attributed to AI.

AI developers, including OpenAI, whose latest ChatGPT model has just hit the market, use cloud computing that relies on thousands of chips inside servers in massive data centers to train AI algorithms and analyze data, helping them “learn” to perform tasks.

Emissions vary of course depending on what type of power is used to run them. A data center that draws its electricity from a coal or natural gas-fired plant will be responsible for much higher emissions than one that uses solar, wind or hydro.

The point is that no one really knows — and the major cloud providers are not playing ball. The problem is not unique to AI. Data centers are a black box relative to the more transparent carbon footprint accounting being reported by the rest of the Media & Entertainment industry.

According to Bloomberg, while researchers have tallied the emissions from the creation of a single model, and some companies have provided data about their energy use, they don’t have an overall estimate for the total amount of power the technology uses.

What limited information is available has been used by researchers to estimate the CO2 emissions attributable to AI — and it is alarming.

Training OpenAI’s GPT-3 took 1.287 gigawatt hours, according to a research paper published in 2021, or about as much electricity as 120 US homes consume in a year. That training generated 502 tons of carbon emissions, according to the same paper, or about as much CO2 as 110 US cars emit in a year.
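Converting those figures back to per-home and per-car averages is a useful sanity check; both come out close to commonly cited US averages (roughly 10–11 MWh of electricity per household per year, and roughly 4.6 tons of CO2 per passenger car per year):

```python
# Sanity-check the GPT-3 comparisons quoted from the 2021 paper.
gwh_training = 1.287   # GPT-3 training energy, in gigawatt hours
homes = 120
mwh_per_home = gwh_training * 1000 / homes  # implied annual use per US home

tons_co2 = 502         # emissions from the same training run
cars = 110
tons_per_car = tons_co2 / cars              # implied annual emissions per car

print(f"{mwh_per_home:.1f} MWh per home, {tons_per_car:.1f} t CO2 per car")
```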

While training a model has a huge upfront power cost, researchers found that in some cases it accounts for only about 40% of the power a model ultimately burns, with the rest consumed by actual use of the model as billions of requests pour in for popular programs.

Plus, the models are getting bigger. OpenAI’s GPT-3 uses 175 billion parameters, or variables, through its training and retraining. Its predecessor used just 1.5 billion. Version 4 will be many times bigger again, with a knock-on cost in compute power.

The situation is analogous to the early days of cryptocurrency where bitcoin in particular was hammered for the huge carbon waste from mining.

That negative publicity has led to change in crypto mining operations – and the same pressure could be applied to AI developers and the cloud providers that service them.

We may also conclude that using large AI models for “researching cancer cures or preserving indigenous languages is worth the electricity and emissions, but writing rejected Seinfeld scripts or finding Waldo is not,” Bloomberg suggests.

But we don’t have the information to judge this.

So where does this sit with the net carbon zero pledges of the major cloud providers like Microsoft, Amazon and Google?

Responding to Bloomberg’s inquiry, an OpenAI spokesperson said: “We take our responsibility to stop and reverse climate change very seriously, and we think a lot about how to make the best use of our computing power. OpenAI runs on Azure, and we work closely with Microsoft’s team to improve efficiency and our footprint to run large language models.”

Bland rhetoric with no detail on what the costs to the earth are now, or exactly what efforts the company is taking to reduce them.

Google’s response was similar and Microsoft highlighted its investment into research “to measure the energy use and carbon impact of AI while working on ways to make large systems more efficient, in both training and application.”

Ben Hertz-Shargel of energy consultant Wood Mackenzie suggests that developers or data centers could schedule AI training for times when power is cheaper or at a surplus, thereby making their operations greener.

The article identifies the computing chips used in AI as “one of the bigger mysteries” in completing the carbon counting puzzle. NVIDIA is the biggest manufacturer of GPUs and defended its record to Bloomberg.

“Using GPUs to accelerate AI is dramatically faster and more efficient than CPUs — typically 20x more energy efficient for certain AI workloads, and up to 300x more efficient for the large language models that are essential for generative AI,” the company said in a statement.

While NVIDIA has disclosed its direct emissions and the indirect ones related to energy, according to this report it hasn’t revealed all of the emissions it is indirectly responsible for. NVIDIA is not alone in failing to account for Scope 3 greenhouse gas emissions, which include all other indirect emissions that occur in the upstream and downstream activities of an organization.

When NVIDIA does share that information, researchers think it will turn out that GPUs burn up as much power as a small country.

 


Evan Shapiro: M&E Is Being Reassembled in Real Time

NAB

“Nobody knows anything.” The famous William Goldman aphorism about Hollywood can just as aptly be applied to the business brains leading the world’s biggest media and entertainment companies.

article here

The traditional certainties around consumption and distribution have been upended and no one really has a clue what new formula will work.

“We are reassembling this ecosystem in real time, but because none of even the big players, not Apple or Amazon, know exactly where it’s going, they’re not really assembling for something,” said analyst Evan Shapiro. “They’re just throwing business models at platforms.

“There’s no way Netflix thought they’d be taking ads right now. There’s no way that Facebook thought they would be peaked by now. There’s no way that Disney thought they would surpass Netflix in total subs worldwide in less than three years.”

Shapiro, who calls himself a “Media Universe Cartographer,” was speaking at SXSW on stage with Steven Rosenbaum, head of the Sustainable Media Center.

“No one can predict the future of media — especially right now,” said Shapiro, who nonetheless tried.

“In the last couple of years, the underpinnings of the media economy have come undone,” he said. “In its place we have this free-form system of asymmetrical consumption, an unlimited supply of content on many different devices all the time, not all tethered to fundamental economics or sound business principles.”

Cable TV and the triple play bundle with broadband and voice services was the mainstay of the TV ecosystem for decades — but not any more.

“The entire system has become unmoored,” Shapiro said. “Everybody is trying to move into television and audio and gaming and social media simultaneously, and they’re trying to raise the same dollars and eyeballs. But… no one can figure out their own business model anymore,” he added.

“They’re hoping that when they add advertising to Connected TV, it’s going to replace the old model. They’re hoping that when they add subscription to premium ad free streaming that it’s going to replace the old cable system. It’s not going to do that.”

Even so, Shapiro took a stab at which companies will be the eventual winners and losers.

Apple, for instance, will be a winner by investing more in content to get consumers to buy its hardware. The company is going after Spotify and is also going after gamers, he thinks.

Another winner is Google. That’s because “the fastest growing operating system on connected televisions on the face of the Earth is Google TV, the same company that controls 70% of your phones. So, Google is going to be incredibly influential for at least the next 10 years.”

He also picks Alphabet as a winner because of the continued dominance of YouTube as viewing shifts to Connected TV.

“YouTube is by far and away the biggest platform out here. So as everything moves off linear television to CTV, who do you think’s going to win this battle? Google has the largest share of all video on CTV. That duopoly they have in phones, they’re trying to recreate in TVs.”

And Netflix? Well, until the end of last year, it was still a one-revenue business, unlike Alphabet, which also has a cloud business and many other revenue streams besides.

“Because they have many different elements to their business they have an opportunity to see the other side of what’s being reshaped,” he said.

Amazon will be fine, too, because “the number one fastest growing sector of the advertising economy is retail media. Amazon has grown an ad business that’s bigger than all of Paramount and all of Warner Bros. Discovery. The $37 billion in advertising last year on Amazon was predominantly because this is a retail media business.”

Winners will also cater to multiple generations of consumer.

“By the end of this decade, Generation A will be starting to come into the workforce,” Shapiro said. “So you have to think about them not just as consumers. Generation A and Generation Y are responsible for TikTok, they’re responsible for Roblox, they’re responsible for Fortnite, they’re responsible for enormous consumption shifts.”

The losers on the other hand include social media companies whose entire revenue stream has been predicated on advertising in a model that is not necessarily designed for the next turn.

“I don’t see how they all come out of it entirely whole. I think TikTok implodes under its own weight [because of regulatory issues].”

He added, “I don’t know how Spotify survives as a standalone company a year from now. I have trouble seeing how Roku [survives, since it’s another company] entirely tied to one business model. Roku have no expansion outside the United States and it’s really basically hampering their ability to grow.”

All streaming services are subject to churn. He called this the biggest issue facing the ecosystem.

“Serial churning is the new channel changing. If you’re not scratching the itch of the consumer on a day to day basis, then you’re f***ed.”