It is not AI we should be worried about per se, but
the humans who work with the technology. That holds true for those with one
hand on a nuclear button as well as for big businesses looking out for their bottom lines. This was the argument made by a pair of AI experts in a
conversation hosted at TheWrap’s annual conference, TheGrill.
“AI is just a technology [about]
which you should not be necessarily terrified, but [you should be] concerned
about who wields the power of AI,” said Mehran Sahami, computer science
professor and chair at Stanford University.
He cited a recent conversation with someone in the publishing industry who cancelled plans to hire six new staff because their existing team could take on that same work using AI.
“It was not AI that makes the
decision whether or not the jobs exist, it’s human beings that make that
decision,” said Sahami. “So what AI enables is more possibilities, [and] one of
those possibilities that it creates is job displacement. But people ultimately
make that decision. This is going to shift the economic landscape, but the
decisions are still ours.”
Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory, characterized some humans wielding the
power tools of AI as “supervillains.”
“AI can give us a lot of benefits,
but the same benefits can empower supervillains,” she said.
They both sought to reassure those who might be terrified of AI. “They’re not that powerful,” she said.
“They cannot replace human creativity. They are not our equals, they could be
our assistants, they could empower us to do more with what we can do already,
they can help us be more productive. They can help us with knowledge, they can
help us with insight, but the tools themselves are kind of simple tools that work based on statistical rules.”
AI is good at some tasks, especially the language and computer vision components that are empowering us to do more, but in terms of matching human creativity we are not there, she said.
“What I believe is that humans and
machines can work together to empower the humans. So we have to find ways for
AI to support the production of movies,” she continued.
“With AI you can help with some
routine tasks like fixing color across the film or anticipating different types
of storylines that people could then evaluate. You can help with error
correction as you generate the video. But all of these are really routine
tasks. They’re really not where the creative element sits. At best, they could
generate maybe B or C level scripts, but they cannot generate the kind of
stories that capture the important aspects of the human condition, or provide
political commentaries.
“I cannot imagine the Barbie script
being generated by a machine,” she said. “Maybe an individual character could
be shaped but the whole story is necessarily a human story.”
Sahami made the point that, when it comes to the creative arts, it doesn’t matter whether an AI can itself feel empathy or emotion, since these are attributes that we each bring with us to the experience.
“One thing that AI is getting much
better at is basically having an interaction that we ascribe meaning to.
Generative AI generates words and pictures, which are exactly the things we
give meaning to. So it can certainly evoke emotional responses from human
beings, because we’re the ones who create that.”
He agreed, however, that AI can be
manipulated (prompted) to generate outputs that evoke particular responses
which may be designed to subvert the truth.
“That’s what misinformation is all
about. How do I get people angry enough that they vote for the person I want
them to vote for? What are the guardrails we put around AI that allow us to know that the emotions that are being generated [within us] are being generated by this thing,” he said.
“When we go to a movie, we don’t come
in there thinking, Oh, I’m just gonna sit here and have no emotional reaction.
We want an emotional reaction. But at the same time, we realize that it’s fake,
that it’s a movie. We don’t necessarily realize that when we read something on
social media. So those are the places where we [need] to have some indication
[of AI involvement].”
Regulation to Watch the Watchmen
Much of the discussion was given over to pondering whether, and to what extent, regulation of AI is needed,
not least to guard against the issue of inherent bias in the data on which an
algorithm is trained.
“We’re worried about bias,” said Rus,
who serves as one of the US representatives to the Global Partnership on AI. “The research community is not giving up. In fact, there is
a very energized movement to align [data/AI] to think about human values, and
ensure the algorithms that get deployed are aligned with human values.”
She noted that AI-driven facial recognition is likely either to have been trained mostly on “white blond faces,” so the system will produce mostly white blond outcomes, or, more nefariously, to have been trained to accentuate differences from that “norm” so it would actively single out people with different colored skin.
But she said, “You can mathematically
rebalance the data. You can mathematically evaluate whether the system as a
whole has bias. And then you can fix it, using mathematics [which is] readily
available now.”
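Rus didn’t spell out the math on stage, but a minimal sketch of that kind of check-and-rebalance, written in Python with NumPy over a hypothetical toy dataset, might look like the following. The group labels, the skew and the inverse-frequency weighting scheme are illustrative assumptions, not a description of any specific deployed system.

```python
import numpy as np

def group_rates(labels, groups):
    """Positive-outcome rate per demographic group (a demographic-parity check)."""
    return {g: labels[groups == g].mean() for g in np.unique(groups)}

def rebalance_weights(groups):
    """Inverse-frequency sample weights so every group carries equal total weight."""
    uniq, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(uniq, counts / len(groups)))
    return np.array([1.0 / (len(uniq) * freq[g]) for g in groups])

# Hypothetical toy dataset: 90% group "A", 10% group "B", with skewed outcomes.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])
labels = (rng.random(1000) < np.where(groups == "A", 0.7, 0.3)).astype(float)

print(group_rates(labels, groups))  # exposes the skew between the groups
weights = rebalance_weights(groups)
print(weights.sum())                # total weight is preserved (sums to ~n)
```

In practice the reweighted samples would feed a training loop, and the same per-group rate check can be rerun on the trained model’s outputs to evaluate, as Rus puts it, whether “the system as a whole has bias.”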
Sahami argued for comprehensive
federal privacy legislation to regulate AI “because at the end of the day, it’s
a question of who chooses, and who chooses right now is a small group of
executives at some very large companies.”
He likes the idea suggested by Sam
Altman, head of OpenAI, of having a sort of AI equivalent to the International Atomic Energy Agency to act as a buffer.
“You need to think about risk
mitigation, who has the power over these tools? How do you actually put
guardrails and inspections around these things so that they don’t get used for purposes that they weren’t intended [for],” he said.
Rus argued for a “delicate balance” in regulation, one that guards against harm without stifling innovation. “I think it’s important to
find a good balance that allows innovation to continue,” she said. “Especially
for [the US], we are leaders in the space and if we over-regulate we may lose
our leadership. But at the same time, AI deployments have to be safe, they have
to be carefully done. I believe that we really need to ensure consumer
confidence in the safety of the output of the system.
“I don’t really mind if my AI personal system is not fully tested or makes mistakes if the task is to label my vacation photos, but if the task is to do something like deciding who gets hired in a company or who gets convicted, then we really have to be thoughtful.”
Machines may not be up to speed when
it comes to matching human creativity, but what about down the line? AI is
improving at such a rate that surely it is only a matter of time before jobs are
lost because of it.
Sahami believes there will be “labor
displacement” in entertainment “and it’s going to be uneven.”
He said, “It’s true that human
creativity is not replaceable, in some sense. But human creativity can be
augmented.”
He gave a simple example. When
people say that AI doesn’t generate anything that hasn’t already been
generated, they disregard the fact that most themes in entertainment are
regurgitated in some form.
George Lucas famously leaned on Joseph Campbell’s
book The Hero with a Thousand Faces to weave classic storytelling tropes into the mythology of Star Wars.
The same is arguably true of all art,
painted, filmed, written or played — it stands on the shoulders of giants.
“There’s these universal themes that
come up over and over, but they have variations,” said Sahami. “Basically,
they’re an amalgam of a bunch of different ideas, and AI can potentially do
that. That doesn’t mean it’s necessarily going to generate the next great
script. But it could generate ideas that empower a smaller group of people to
generate the next great script. And then that becomes a question for studio executives: are you going to have more people in the room or fewer people with
a bunch of power tools — that essentially is a human decision at the end of the
day.”