The meteoric rise of AI applications has left the industry and onlookers
wondering how this rapidly developing technology will interact with copyright
law and whether the law can keep up. The legal landscape is muddy, but there is
legal advice available for the developers of AI tools and for artists working with
them or who believe their work is being stolen.
There are two main questions to consider about AI art. The first is,
“Can AI art be copyrighted?” The other question surrounds the legal status of
artists who claim to have had their art stolen (euphemistically called
“sampled”) to supply the data for AI diffusion models.
Thuan Tran,
an associate at Dunlap Bennett & Ludwig, answers the first question, stating that the
US Copyright Office will reject a request to register a copyright for a work of
art created by an AI. This is because the office will not register works produced
by a machine or mere mechanical process without creative input or intervention
from a human author.
Courts interpreting the Copyright Act, including the Supreme Court, have
consistently restricted copyright protection to “the fruits of intellectual
labor” that “are founded in the creative powers of the [human] mind.”
However, this interpretation is being tested. In an ongoing proceeding,
artist Kris Kashtanova is contesting a decision by the
Copyright Office not to register a copyright for a graphic novel that she created
using an AI.
Kashtanova is emphasizing how she “engaged in a creative, iterative
process” that involved multiple rounds of composition, selection, arrangement,
cropping, and editing for each image in her work, which makes her the author of
the work.
“While the outcome of the proceeding is not yet finalized and Kashtanova
has a chance to appeal its decision, many are eagerly awaiting what may be very
precedential for the future of AI art.”
The second
question is also taken up by Tran,
and is also being tested in the courts. There are several cases of
artists suing generative AI platforms for unauthorized use of their work.
Image licensing service Getty, for example, has filed suit against the
creators of Stable Diffusion, alleging improper use of its photos in violation
of both the copyright and trademark rights it holds in its watermarked
photograph collection.
The outcome
of these cases is expected to hinge on the interpretation of the fair use
doctrine. This is the legal concept that allows for the use of copyrighted
material without permission from the copyright holder, in certain
circumstances.
Tran explains that fair use is determined on a case-by-case basis, and
courts consider four factors: (1) the purpose and character of the use; (2) the
nature of the copyrighted work; (3) the amount and substantiality of the
portion used; and (4) the effect of the use upon the potential market for or
value of the copyrighted work.
“One argument in favor of AI-generated art falling under fair use is
that the use of copyrighted material by AI algorithms is transformative,” he
says. “Transformative use is a key factor in determining fair use. It refers to
the creation of something new and original that is not merely a copy or
imitation of the original work.”
AI algorithms create new works by processing and synthesizing existing
works, resulting in a product that could be considered distinct from the
original. “As a result, AI-generated art can be seen as a form of
transformative use, which would weigh in favor of fair use,” Tran says. “On the
other hand, this argument is not without its limitations. Many argue that
AI-generated art is simply a recombination or manipulation of existing works,
without adding significant creative output.”
There is also the larger philosophical debate as to whether a machine
can give “creative input” to its work. In such cases, it may be more difficult
to argue that the use of copyrighted material is transformative and
subsequently falls under fair use.
All this uncertainty presents a slew of challenges for companies that
use generative AI. There are infringement risks, direct or unintentional, when
contracts are silent on generative AI usage by their vendors and customers.
The Harvard Business Review gives
some advice for AI vendors, their customers, and artists.
“AI developers should ensure that they are in compliance with the law in
regards to their acquisition of data being used to train their models,” advise
Gil Appel, Assistant Professor of Marketing at the GW School of Business,
Juliana Neelbauer, partner at law firm Fox Rothschild LLP, and David A.
Schweidel, Professor of Marketing at Emory University’s Goizueta Business
School. “This should involve licensing and compensating those individuals who
own the IP that developers seek to add to their training data, whether by
licensing it or sharing in revenue generated by the AI tool.”
Developers should also work on ways to maintain the provenance of
AI-generated content, which would increase transparency about the works
included in the training data. This would include recording the platform that
was used to develop the content, tracking of seed-data’s metadata, and tags to
facilitate AI reporting, including the specific prompt that was used to create
the content.
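As an illustration of what such an audit-trail entry could capture, here is a minimal sketch in Python. This is a hypothetical model, not any platform's actual API; the field names are assumptions drawn from the elements listed above (platform, seed-data metadata, tags, and the generating prompt).

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical audit-trail entry for one piece of AI-generated content."""
    platform: str                                      # platform used to develop the content
    prompt: str                                        # the specific prompt that produced it
    seed_metadata: dict = field(default_factory=dict)  # metadata of the seed/training data
    tags: list = field(default_factory=list)           # tags to facilitate AI reporting
    created_at: str = ""                               # ISO-8601 timestamp

    def to_json(self) -> str:
        # Serialize the record so it can be stored or attached to a contract exhibit.
        return json.dumps(asdict(self), sort_keys=True)

record = ProvenanceRecord(
    platform="ExampleImageGen",  # assumed platform name
    prompt="a lighthouse at dusk, oil-painting style",
    seed_metadata={"dataset": "example-licensed-set", "license": "CC-BY"},
    tags=["ai-generated", "audit"],
    created_at=datetime.now(timezone.utc).isoformat(),
)
print(record.to_json())
```

A vendor could emit one such record per generated asset, giving customers the kind of verifiable trail the authors describe.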
“Developing these audit trails would assure companies are prepared when
customers start including demands for them in contracts as a form of insurance
that the vendor’s works aren’t willfully, or unintentionally, derivative
without authorization.
“Looking further into the future, insurance companies may require these
reports in order to extend traditional insurance coverages to business users
whose assets include AI-generated works.”
Creators
When it comes to individual content creators and brands, the onus is on
them to take steps to protect their IP.
Stable Diffusion developer Stability.AI, for example, announced that
artists will be able to opt out of the next generation of the image generator.
“But this puts the onus on content creators to actively protect their
IP, rather than requiring the AI developers to secure the IP to the work prior
to using it — and even when artists opt out, that decision will only be
reflected in the next iteration of the platform. Instead, companies should
require the creator’s opt-in rather than opt-out.”
According to Appel, Neelbauer and Schweidel, this involves “proactively
looking for their work in compiled datasets or large-scale data lakes,
including visual elements such as logos and artwork and textual elements, such
as image tags.”
Obviously, this could not be done manually through terabytes or
petabytes of content data, but they think existing search tools “should allow
the cost-effective automation of this task.”
Content creators are also advised to monitor digital and social channels
for the appearance of works that may be derived from their own.
Longer term, content creators that have a sufficient library of their
own IP on which to draw “may consider building their own datasets to train and
mature AI platforms.”
The resulting generative AI models need not be trained from scratch but
can build upon open-source generative AI that has used lawfully sourced
content. This would enable content creators to produce content in the same
style as their own work with an audit trail to their own data lake, or to
license the use of such tools to interested parties with cleared title in both
the AI’s training data and its outputs.
Customers
Customers of AI tools should ask providers whether their models were
trained with any protected content, review the terms of service and privacy
policies, “and avoid generative AI tools that cannot confirm that their
training data is properly licensed from content creators or subject to
open-source licenses with which the AI companies comply.”
Businesses
If a business user is aware that training data might include unlicensed
works or that an AI can generate unauthorized derivative works not covered by
fair use, a business could be on the hook for willful infringement, which can
include damages up to $150,000 for each instance of knowing use.
Consequently, businesses should evaluate their transaction terms to
write protections into contracts. As a starting point, they should demand terms
of service from generative AI platforms that confirm proper licensure of the
training data that feed their AI.
Appel, Neelbauer and Schweidel add that they understand the real threat
generative AI poses to the livelihood of members of the creative class.
However, “at the same time both creatives and corporate interests have a dramatic
opportunity to build portfolios of their works and branded materials, meta-tag
them, and train their own generative-AI platforms that can produce authorized,
proprietary, (paid-up or royalty-bearing) goods as sources of instant revenue
streams.”