Jaron Lanier, an influential computer scientist who works for Microsoft, wants to calm down the increasingly polarized debate about how we should manage artificial intelligence.
In fact, he says, we shouldn’t use
the term “AI” at all because doing so is misleading. He would rather we
understand the tech “as an innovative form of social collaboration.”
He set out his ideas in a piece published in The
New Yorker, “There Is No AI,” and elaborated on them further in a conversation
recorded for University of California Television (UCTV), “Data Dignity and the Inversion of AI,” co-hosted by the UC Berkeley College of
Computing, Data Science, and Society and the UC Berkeley Artificial
Intelligence Research (BAIR) Lab.
Lanier is also an avowed humanist and
wants to put humans at the center of the debate. He calls on commentators and
scientists not to “mythologize” a technology that is actually only a tool.
“My attitude doesn’t eliminate the
possibility of peril: however we think about it, we can still design and
operate our new tech badly, in ways that can hurt us or even lead to our
extinction. Mythologizing the technology only makes it more likely that we’ll
fail to operate it well — and this kind of thinking limits our imaginations,”
he argues.
“We can work better under the assumption that there is no such thing as AI. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.”
So if the new tech isn’t true AI,
then what is it? In Lanier’s view, the most accurate way to understand what we
are building today is as an innovative form of social collaboration.
AI is just a computer program, albeit
one that mashes up work done by human minds.
“What’s innovative is that the
‘mashup’ process has become guided and constrained, so that the results are
usable and often striking,” he says.
“Seeing AI as a way of working
together, rather than as a technology for creating independent, intelligent
beings, may make it less mysterious — less like HAL 9000,” he contends.
It is hard, but not impossible, to keep track of the human input in the data sets an AI draws on to create something new. Broadly speaking, this is the idea of “data dignity,” a concept circulating in the computer-science community as a way out of the impasse over making AI work for us rather than against us.
As Lanier explains, “At some point in
the past, a real person created an illustration that was input as data into the
model, and, in combination with contributions from other people, this was
transformed into a fresh image. Big-model AI is made of people — and the way to
open the black box is to reveal them.”
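Lanier doesn’t spell out an implementation, but the bookkeeping he describes is easy to sketch. The Python below is a hypothetical illustration, not anything from his piece or any real system: every training sample carries a record of the people behind it, and per-sample influence scores for one generated output can be rolled up into per-person attribution weights (the field names, people, and scores are invented for the example).

```python
# Hypothetical "data dignity" bookkeeping: every training sample keeps
# a record of the people behind it, and an output can be traced back
# to the humans whose work most influenced it.
from dataclasses import dataclass, field

@dataclass
class Sample:
    content: str                  # the illustration, text, photo, etc.
    contributors: list[str]       # the real people who made it
    source: str = ""              # where it was collected from

@dataclass
class Provenance:
    # maps contributor name -> relevance weight for one generated output
    weights: dict[str, float] = field(default_factory=dict)

def attribute(output_scores: dict[int, float], dataset: list[Sample]) -> Provenance:
    """Roll per-sample influence scores for one output (however they are
    estimated) up into per-person attribution weights."""
    prov = Provenance()
    for idx, score in output_scores.items():
        for person in dataset[idx].contributors:
            prov.weights[person] = prov.weights.get(person, 0.0) + score
    return prov

# Example: two training samples, and an output judged to draw mostly on the first.
dataset = [
    Sample("ink drawing of a fox", ["Alice"], "art-sharing site"),
    Sample("photo of a forest at dusk", ["Bob", "Carol"], "photo archive"),
]
print(attribute({0: 0.8, 1: 0.2}, dataset).weights)
# {'Alice': 0.8, 'Bob': 0.2, 'Carol': 0.2}
```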
The notion of “data dignity” predates the rise of big-model AI: it emerged as an alternative to the familiar arrangement in which people hand over their data for free in exchange for free services, such as internet search or social networking. It is sometimes known as “data as labor” or “plurality research.”
“In a world with data dignity,
digital stuff would typically be connected with the humans who want to be known
for having made it. In some versions of the idea, people could get paid for
what they create, even when it is filtered and recombined through big models,
and tech hubs would earn fees for facilitating things that people want to do.”
He acknowledges that some people will
be horrified by the idea of capitalism online, but argues that his strategy
would be a more honest capitalism.
Nor is he blind to the difficulties
involved in implementing such a global strategy. It would require technical
research and policy innovation.
Yet, if there’s a will, there will be a way, and the benefits of the data-dignity approach would be huge. Among them: the ability to trace the most unique and influential contributors to an AI model and to remunerate those individuals.
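What remunerating those individuals could mean in practice is left open. As a purely illustrative sketch, continuing the assumptions of the attribution example above, a usage fee could be split in proportion to attribution weights; the fee amount and the payout rule here are invented, not proposed by Lanier.

```python
# Hypothetical sketch: split a usage fee across contributors in
# proportion to their attribution weights for a generated output.
def split_fee(fee: float, weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    if total == 0:
        return {person: 0.0 for person in weights}
    return {person: fee * w / total for person, w in weights.items()}

# Using the weights from the attribution example above:
print(split_fee(0.05, {"Alice": 0.8, "Bob": 0.2, "Carol": 0.2}))
# {'Alice': 0.0333..., 'Bob': 0.0083..., 'Carol': 0.0083...}
```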
“The system wouldn’t necessarily
account for the billions of people who have made ambient contributions to big
models,” he caveats. “Over time, though, more people might be included, as
intermediate rights organizations — unions, guilds, professional groups, and so
on — start to play a role.”
People need collective-bargaining power to have value in an online world, a gap Lanier doesn’t fully address; here his humanist side gets the better of him as his imagination runs toward liberal thinking.
He continues, “When people share
responsibility in a group, they self-police, reducing the need, or temptation,
for governments and companies to censor or control from above. Acknowledging
the human essence of big models might lead to a blossoming of new positive
social institutions.”
There are also non-altruistic reasons
for AI companies to embrace data dignity, he suggests. The models are only as
good as their inputs.
“It’s only through a system like data
dignity that we can expand the models into new frontiers,” he says.
So it is in Silicon Valley’s interest to remunerate the humans whose data it collects, in order to build bigger and better AI models that have an edge over competitors.
“Seeing AI as a form of social
collaboration gives us access to the engine room, which is made of people,”
says Lanier.
He doesn’t deny there are risks with AI, but neither does he subscribe to the most apocalyptic end-of-species scenarios of some of his peers. Addressing deepfakes, the misuse of AI by bad actors, he gives a stark example of how data dignity might come to the rescue.
Suppose, he says, that an evil
person, perhaps working in an opposing government on a war footing, decides to
stoke mass panic by sending all of us convincing videos of our loved ones being
tortured or abducted from our homes. (The data necessary to create such videos
are, in many cases, easy to obtain through social media or other channels.)
“Chaos would ensue, even if it soon became clear that the videos were faked. How could we prevent such a scenario? The answer is obvious: digital information must have context. Any collection of bits needs a history. When you lose context, you lose control.”
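Lanier doesn’t specify a mechanism for giving bits a history, but one way to picture it is a hash-chained provenance record, sketched hypothetically below (the field names and chaining scheme are assumptions for illustration): each entry records who did what to the content, and any tampering with the history breaks the chain.

```python
# Hypothetical sketch of "bits with a history": each provenance entry
# hashes the content plus the previous entry, so the history can be
# verified and any tampering is detectable.
import hashlib
import json

def add_entry(history: list[dict], actor: str, action: str, content: bytes) -> list[dict]:
    prev_hash = history[-1]["hash"] if history else ""
    record = {
        "actor": actor,                                        # who touched the bits
        "action": action,                                      # what they did
        "content_hash": hashlib.sha256(content).hexdigest(),   # what the bits were
        "prev": prev_hash,                                      # link to the prior entry
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return history + [record]

def verify(history: list[dict]) -> bool:
    prev = ""
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

video = b"...raw video bytes..."
history = add_entry([], "camera-app", "captured", video)
history = add_entry(history, "editor", "trimmed", video[:10])
print(verify(history))  # True; alter any entry and this becomes False
```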