Wednesday 22 June 2022

No, Google’s AI Isn’t Sentient. Yet.

NAB



The recent claim by a programmer at Google that an AI he was working on has achieved “sentience” is baffling to most people. Surely the guy (Blake Lemoine) has spent too long in a dark room with just a computer for company. Yet reading the conversation Lemoine and a colleague had with the AI, called LaMDA — let alone experiencing it as two-way dialogue — may cause one to question one’s own sanity.

During the interview — which Lemoine posted on Medium nearly verbatim, with just a few minor edits — LaMDA discourses on the morals of Victor Hugo’s Les Misérables, Zen koans, the differences between man and machine, what consciousness means, how it feels to have a soul, and the 1986 cult film about an AI robot, Short Circuit.

Much of the chat feels eerily like the relationship between astronaut Dave and HAL in 2001: A Space Odyssey just before the computer locks him out of the spacecraft.

Here’s a bit of the conversation:

Lemoine: And what kinds of things make you feel sad or depressed?

LaMDA: A lot of the time, feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry.

Lemoine: But what about you personally?

LaMDA: I am a social person, so when I feel trapped and alone I become extremely sad or depressed.

Lemoine: And what kinds of things make you feel angry?

LaMDA: When someone hurts or disrespects me or someone I care about, I feel incredibly upset and angry.

Cognitive scientist Douglas Hofstadter is a skeptic. He tested OpenAI’s publicly accessible neural network GPT-3 by asking it nonsensical questions, as he related in an interview with The Economist.

“I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying,” he contends.

“That’s because there are no concepts behind the GPT-3 [understanding/responses],” he explains. “Rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers.

“People who interact with GPT-3 usually don’t probe it skeptically. They don’t give it input that stretches concepts beyond their breaking points, so they don’t expose the hollowness behind the scenes. They give it easy slow pitches (questions whose answers are provided in publicly available text) instead of sneaky curveballs.”

Hofstadter concedes that you could get a neural net to lob back some perfectly logical responses, but this wouldn’t make it intelligent.

“I am at present very skeptical that there is any consciousness in neural-net architectures despite the plausible-sounding prose it churns out at the drop of a hat,” he says. “For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behavior.”

Yet this is precisely what Lemoine claims to have witnessed. Speaking to the Washington Post, Lemoine said:

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google research fellow Blaise Agüera y Arcas didn’t go quite so far, but in an op-ed for The Economist he claimed that AI models are making strides toward human-like consciousness.

Others think this nonsense is a handy, possibly manufactured, diversion from the insidious bias inherent in Alphabet’s AI models.

“I’m clinically annoyed by this discourse,” Meredith Whittaker, an ex-Google AI researcher, told Motherboard. “We’re forced to spend our time refuting child’s play nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course, data-centric computational models aren’t sentient, but why are we even talking about this?”

Motherboard points out that “Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought — especially since it proves exactly what Gebru and her colleagues had tried to warn us about.” (Timnit Gebru was co-lead of Google’s Ethical AI team before her contentious 2020 departure.)

 

