After a period of about a year in which all things AI were generally considered
a benign boost to human potential (the worst that could happen was losing your
job to ChatGPT), the dystopian winds have blown back in.
A number of
commentators are calling for a brake on AI development while the risks of its
use are assessed — an unlikely scenario, since there appears to be no appetite for it
among governments, legislators, big tech, or the AI coding community itself.
With tech companies
rushing out AI models without any apparent safeguards (for accuracy, for instance), cognitive
scientist Gary Marcus thinks this is a moment of immense peril. In The
Atlantic he writes, “The potential scale of this problem is cause for
worry. We can only begin to imagine what state-sponsored troll farms with large
budgets and customized large language models of their own might accomplish.
“Bad actors could
easily use these tools, or tools like them, to generate harmful misinformation,
at unprecedented and enormous scale.”
He quotes Renée
DiResta, the research manager of the Stanford Internet Observatory, warning (in
2020) that the “supply of misinformation will soon be infinite.”
Marcus says that
moment has arrived.
“Each day is
bringing us a little bit closer to a kind of information-sphere disaster,” he
warns, “in which bad actors weaponize large language models, distributing their
ill-gotten gains through armies of ever more sophisticated bots.”
His comments
coincide with the results of an annual survey of machine learning researchers
conducted by AI Impacts. In its latest report, 14% of participants said they
expected AI outcomes to be ‘extremely bad’, and nearly a third (31%) said they think AI
will, on the whole, make the world markedly worse.
Just to be clear, AI Impacts chief researcher Katja Grace says they are talking about human extinction.
She added, “The
most notable change to me is the new big black bar of doom: people who think
extremely bad outcomes are at least 50% have gone from 3% of the population to
9% in six years.”
In another
op-ed, The New York Times columnist Ezra Klein expresses dismay
at what he sees as the lack of urgency among AI developers to do anything about
the Frankenstein they are unleashing.
He says the people
working on AI in the Bay Area inhabit “a community that is living with an
altered sense of time and consequence. They are creating a power that they do
not understand at a pace they often cannot believe.
“We do not
understand these systems, and it’s not clear we even can,” he continues. “That
is perhaps the weirdest thing about what we are building: The ‘thinking,’ for lack
of a better word, is utterly inhuman, but we have trained it to present as
deeply human. And the more inhuman the systems get — the more billions of
connections they draw and layers and parameters and nodes and computing power
they acquire — the more human they seem to us.”
Klein seems less
concerned about machine-led mass human extinction than he is about having even
more of his data — and therefore media consumption — turned into advertising
dollars by Microsoft, Google and Meta.
Since these tech
giants hold the keys to the code, he thinks they will eventually “patch the
system so it serves their interests.”
His wider point,
though, is to warn that we are so fixated on asking what the technology can do
that we are missing the more important questions: How will it be used? And who
will decide?
“Somehow, society
is going to have to figure out what it’s comfortable having AI doing, and what
AI should not be permitted to try, before it is too late to make those
decisions.”
Marcus offers some
remedies. Watermarking is one idea for tracking content produced by large language
models, though he acknowledges this is likely insufficient.
“The trouble is
that bad actors could simply use alternative large language models to create
whatever they want, without watermarks.”
A second approach
is to penalize misinformation when it is produced at large scale.
“Currently, most
people are free to lie most of the time without consequence,” Marcus says. “We
may need new laws to address such scenarios.”
A third approach
would be to build a new form of AI that can detect misinformation, rather than
simply generate it. Even in a system like Bing’s, where information is sourced
from the web, untruths can emerge once the data are fed through the machine.
“Validating
the output of large language models will require developing new approaches to
AI that center reasoning and knowledge, ideas that were once popular but are
currently out of fashion.”
Both Marcus and
Klein express alarm at the lack of urgency to deal with these issues. Bearing
in mind that the 2024 US presidential election will soon be upon us, Marcus warns
they “could be unlike anything we have seen before.”
Klein thinks time
is rapidly running out to do anything at all.
“The major tech
companies are in a race for AI dominance. The US and China are in a race for AI
dominance. Money is gushing toward companies with AI expertise. To suggest we
go slower, or even stop entirely, has come to seem childish. If one company
slows down, another will speed up. If one country hits pause, the others will
push harder. Fatalism becomes the handmaiden of inevitability, and
inevitability becomes the justification for acceleration.”
One of two things
must happen, he suggests, and Marcus agrees: “Humanity needs to accelerate its
adaptation to these technologies or a collective, enforceable decision must be
made to slow the development of these technologies. Even doing both may not be
enough.”