Friday 2 June 2023

New Legal Precedent on AI Creation

NAB

As one headline put it, in the battle between artists and AI, the Supreme Court just became Hollywood’s new BFF. That’s because of a recent ruling in a case involving Andy Warhol that experts say has huge implications for the right of human creators to be recognized when their artwork is used by Generative AI.


At the heart of the debate about AI’s impact on creative fields are questions of fair use. Namely, whether AI models trained on copyrighted works are covered, at least in the US, by that doctrine.

With that in mind, it’s worth asking what the Supreme Court ruling in Andy Warhol Foundation for the Visual Arts v. Goldsmith may mean for AI moving forward. The answer is simple, according to Jonathan Bailey at Plagiarism Today: AI companies should be worried, because none of the ruling’s implications bode well for them.

Christopher Parker at Smithsonian Magazine explains the background to the case. In the 1980s, Warhol created an illustration of the musician Prince for Vanity Fair, based on an existing image by photographer Lynn Goldsmith. The Supreme Court has just ruled that Warhol infringed on Goldsmith’s copyright.

While the court acknowledged that Warhol’s piece was “transformative,” it said that the piece competed directly with Goldsmith’s work and served as a replacement, citing the fact that Vanity Fair licensed the painting rather than Goldsmith’s original photo.

It’s ironic that Warhol is the artist in question here. He played with the idea of mass media by taking existing images (brands of soup cans, photos of Elvis) and reproducing them in the factory style of a Henry Ford.

What’s interesting is what the justices had to say. Seven of them found against the Warhol Foundation, saying that to find otherwise “would potentially authorize a range of commercial copying of photographs, to be used for purposes that are substantially the same as those of the originals.”

A dissenting justice argued that the ruling “will impede new art and music and literature. It will thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer.”

Plagiarism Today itemized what the Warhol ruling may mean for AI. Bailey believes it is inevitable that this case will shift the way future fair use decisions are reached in lower courts.

“Nearly all data that AIs have ingested, including text and image AIs, has been without permission from the original creators. This means that AIs are built on large volumes of copyright-protected material that they are using without permission,” he says.

AI companies have long argued that their use of that source material is permitted as fair use. That argument rests primarily on how transformative AI-generated works are.

That ice is now thinner. “Transformativeness” must now be weighed against other factors, most notably whether the new work competes with or replaces the original in the marketplace.

As Bailey points out, stock photographers should have little trouble proving that the new works are used to compete with stock photos.

“Things get even worse when you realize AIs often are tasked with producing works that are ‘in the style of’ a particular creator, making works that are designed to directly compete with that artist’s work.”

This will be tested in a number of other lawsuits in the coming months.

The lawsuit from Getty Images against Stability AI, creators of Stable Diffusion, alleges that the company copied 12 million images without permission or compensation “to benefit Stability AI’s commercial interests and to the detriment of the content creators.”

Stability AI, DeviantArt and Midjourney are also being sued by artists alleging that the companies’ use of AI violates the rights of millions of other artists.

Matthew Butterick, a lawyer and computer programmer involved in the lawsuit, told CNBC: “These images were all taken without consent, without compensation. There’s no attribution or credit given to the artists.”

Butterick is involved in another class action against Microsoft, GitHub (which is owned by Microsoft), and OpenAI (in which Microsoft is a major investor), alleging that GitHub’s Copilot system, which suggests lines of code for programmers, does not comply with the terms of the licenses covering the code it was trained on.

To be clear, this doesn’t mean that those suing AI companies are a lock to win. “The issue is still complicated, and the Supreme Court made it clear that ‘transformativeness’ is still very much a factor and AI companies still have arguments in their favor,” says Bailey.

“However, the argument that AI companies have largely based their businesses around has been severely weakened, and that should give them pause.”

Indeed, pending litigation has prompted generative AI developers to shore up their defenses.

As reported on CNBC, OpenAI CEO Sam Altman has said stock footage library Shutterstock was “critical to the training” of its generative AI image and art platform DALL-E. Shutterstock, in turn, has set up a contributor fund that compensates content creators when their IP is used in the development of generative AI models. Creators will also receive royalties if new content produced by Shutterstock’s AI generator includes their work, and contributors can opt out and exclude their content from any future datasets.

Microsoft said content generated by ChatGPT and embedded in Microsoft 365 apps will be clearly labeled, encouraging users to review, fact-check and adjust it. It will also make citations and sources easily accessible, linking to the email or document the content is based on, or showing a citation when a user hovers over the generated text.

Google said Bard, the company’s competitor to ChatGPT that is being embedded within Gmail and Google Docs, “is intended to generate original content and not replicate existing content at length.”
