Google may have fired software engineer Blake Lemoine in the hope of
drawing a line under the debate about whether its AI is sentient
(Google says it is not). But that's a mistake.
Lemoine should be applauded for opening up a can of
philosophical worms that will frame debates about intelligence, machine
consciousness, language and human-AI interaction in the coming years.
Most thinkers on the topic do not conclude that LaMDA is
conscious in the way Lemoine believes it to be, arguing instead that his
inference rests on motivated anthropomorphic projection. At the same time,
it is also possible that AI models are “intelligent” — and even “conscious” in
some way — depending on how those terms are defined.
“For example, an AI may be genuinely intelligent in some way
but only sentient in the restrictive sense of sensing and acting deliberately
on external information,” Benjamin Bratton, a philosopher of technology and
professor at the University of California, San Diego, and Blaise Agüera y
Arcas, a VP and fellow at Google Research, write in an article for NOĒMA.
“Perhaps the real lesson for philosophy of AI is that
reality has outpaced the available language to parse what is already at hand. A
more precise vocabulary is essential.”
Bratton and Agüera y Arcas argue that we need more specific
and creative language that can cut the knots around terms like “sentience,”
“ethics,” “intelligence,” and even “artificial,” in order to name and measure
what is already here and orient what is to come.
The debate has come to a head because of the advances in artificial
intelligence that LaMDA, Google's conversational AI, embodies. LaMDA does far
more than reproduce pre-scripted responses: it constructs new sentences,
tendencies, and attitudes "on the fly" in response to the flow of
conversation.
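To make that distinction concrete, here is a minimal toy sketch in Python contrasting a scripted chatbot, which can only return canned strings, with a generative one that composes each reply word by word. The lookup table, bigram model, and function names are all invented for illustration; they bear no resemblance to LaMDA's actual transformer architecture.

```python
# Toy contrast: scripted retrieval vs. generative, "on the fly" composition.
# Everything here is a hypothetical illustration, not LaMDA's real mechanism.
import random

# A pre-scripted bot maps known inputs to fixed, canned outputs.
CANNED = {"hello": "Hi there!", "bye": "Goodbye!"}

def scripted_reply(prompt: str) -> str:
    return CANNED.get(prompt.lower(), "Sorry, I don't understand.")

# A (toy) generative model instead samples each next word conditioned on
# what has been said so far, so every reply is newly constructed.
BIGRAMS = {
    "i": ["think", "feel"],
    "think": ["that", "deeply"],
    "feel": ["that", "curious"],
    "that": ["minds", "language"],
}

def generative_reply(seed: str, length: int = 5) -> str:
    words = [seed]
    for _ in range(length):
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(scripted_reply("hello"))  # always the same canned string
print(generative_reply("i"))    # composed word by word, varies per run
```

The first bot can never say anything it wasn't given; the second, however crude, constructs utterances that were never written down anywhere, which is the property the paragraph above is pointing at.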
“For LaMDA to achieve this means it is doing something
pretty tricky: it is mind modelling,” explains Bratton. “It seems to have
enough of a sense of itself — not necessarily as a subjective mind, but as a
construction in the mind of Lemoine — that it can react accordingly and thus
amplify his anthropomorphic projection of personhood.”
Put differently, there may be some kind of real intelligence
here, not in the way Lemoine asserts, but in how the AI models itself according
to how it thinks Lemoine thinks of it.
Some neuroscientists posit that the emergence of
consciousness is the effect of this exact kind of mind modeling. Michael
Graziano, a professor of neuroscience and psychology at Princeton, is one of
them. He suggests that consciousness is the evolutionary result of minds
getting good at empathetically modeling other minds and then, over evolutionary
time, turning that process inward on themselves.
Either way, it is no less interesting that a non-sentient machine can
perform so many feats deeply associated with human sapience, and that has
profound implications for what sapience is and is not.
Here’s a conundrum: Is it anthropomorphism to call what a
light sensor does machine “vision,” or should the definition of vision include
all photoreceptive responses, even photosynthesis?
And another: At what point is calling synthetic language
“language” accurate, as opposed to metaphorical?
The way we talk about and label the world has, well,
real-world implications. You don’t have to have studied your Wittgenstein to
know this.
As Bratton and Agüera y Arcas put it, “In the history of AI
philosophy, from Turing’s Test to Searle’s Chinese Room, the performance of
language has played a central conceptual role in debates as to where sentience
may or may not be in human-AI interaction. It does again today and will
continue to do so. As we see, chatbots and artificially generated text are
becoming more convincing.”
Trying to peel belief and reality apart is always difficult.
Here the question is not whether the person is imagining things in the AI but
whether the AI is imagining things about the world, and whether the human
accepts the AI’s conclusions as insights or dismisses them as noise.
It has been suggested, Bratton and Agüera y Arcas note, that
there should be a clear line prohibiting the construction of AIs that
convincingly mimic humans due to the evident harms and dangers of rampant
impersonation.
“A future filled with deepfakes, evangelical scams,
manipulative psychological projections, etc. is to be avoided at all costs.
These dark possibilities are real, but so are many equally weird and less
unanimously negative sorts of synthetic humanism.
“The path of augmented intelligence, whereby human sapience
and machine cunning collaborate as well as a driver and a car or a surgeon and
her scalpel, will almost certainly result in amalgamations that are not merely
prosthetic, but which fuse categories of self and object, me and it.”
In other words, our definitions of me, myself and I, plus it,
are about to get a whole lot more pixelated.