With the advance of systems like OpenAI’s DALL-E 2, DeepMind’s Gato, and large language models like Meta’s OPT, some experts believe that we are now within striking distance of “artificial general intelligence,” otherwise known as AGI. This is an often-discussed benchmark that refers to powerful, flexible AI that can outperform humans at any cognitive task.
Nonsense, says Rob Toews, a partner at the VC firm Radical Ventures. But the arguments of those who dismiss AI’s breathtaking possibilities are, he contends, equally misguided.
“There is no such thing as artificial general intelligence,”
he writes at Forbes. “AGI is neither possible nor impossible. It is,
rather, incoherent as a concept.”
The public discourse needs to be reframed.
“Both the overexcited zealots who believe that
super-intelligent AI is around the corner, and the dismissive skeptics who
believe that recent developments in AI amount to mere hype, are off the mark in
some fundamental ways in their thinking about modern AI.”
His argument echoes that of the influential philosopher Thomas Nagel: AI is, and will remain, fundamentally unlike human intelligence.
Nagel, writing in his 1974 essay “What Is It Like to Be a Bat?,” claimed that it is impossible to
know, in a meaningful way, exactly what it is like to be another organism or
species. The more unlike us the other organism or species is, the more
inaccessible its internal experience is.
He used bats as an example to illustrate this point. He
chose bats because, as mammals, they are highly complex beings, yet they
experience life dramatically differently than we do: they fly, they use sonar
as their primary means of sensing the world, and so on.
“It is a mistake to analogize AI too directly to human
intelligence,” says Toews. “Today’s AI is not simply a ‘less evolved’ form of
human intelligence; nor will tomorrow’s hyper-advanced AI be just a more
powerful version of human intelligence.”
The problem with the entire discussion is that the presence or absence of sentience is, by definition, unprovable, unfalsifiable, unknowable.
“When we talk about sentience, we are referring to an agent’s subjective inner experiences, not to any outer display of intelligence. No one (not Google’s sacked AI engineer Blake Lemoine, nor his bosses who dismissed both him and his claims) can be fully certain about what a highly complex artificial neural network is or is not experiencing internally.”
AI, he maintains, is best thought of not as an imperfect
emulation of human intelligence, but rather as a distinct, alien form of
intelligence, whose contours and capabilities differ from our own in basic ways.
To make this more concrete, consider the state of AI today.
Today’s AI far exceeds human capabilities in some areas — and woefully
underperforms in others.
For example, DeepMind’s AlphaFold has produced a solution to the protein folding problem, a fiendishly complicated riddle that requires forms of
spatial understanding and high-dimensional reasoning “that simply lie beyond
the grasp of the human mind.”
Meanwhile, any healthy human child possesses “embodied intelligence” that, according to Toews, far eclipses that of the world’s most sophisticated AI.
“From a young age, humans can effortlessly do things like
play catch, walk over unfamiliar terrain, or open the kitchen fridge and grab a
snack. Physical capabilities like these have proven fiendishly difficult for AI
to master.”
So we need to conceive of intelligence differently. It is not a single, well-defined, generalizable capability, nor even a particular set of capabilities.
To define general AI as an AI that can do what humans do —
but better — is shortsighted, Toews asserts. “To think that human intelligence
is general intelligence is myopically human-centric,” he says. “If we use human
intelligence as the ultimate anchor and yardstick for the development of
artificial intelligence, we will miss out on the full range of powerful,
profound, unexpected, societally beneficial, utterly non-human abilities that
machine intelligence might be capable of.”
The point is that AI’s true potential lies in the development of novel forms of intelligence that are utterly unlike anything humans are capable of. If AI can achieve goals like these, who cares whether it is “general” in the sense of matching human capabilities overall?
“Artificial intelligence is not like human intelligence. When and if AI ever becomes sentient — when and if it is ever ‘like something’ to be an AI — it will not be comparable to what it is like to be a human. AI is its own distinct, alien, fascinating, rapidly evolving form of cognition.”
What matters is what artificial intelligence can achieve.
Delivering breakthroughs in basic science (like the AlphaFold protein research), tackling species-level challenges like climate change, advancing
human health and longevity, deepening our understanding of how the universe works
— outcomes like these are the true test of AI’s power and sophistication.