Monday, 8 May 2023

Why Scientists Are Worried About Superintelligence

NAB

For decades, the most exalted goal of artificial intelligence has been the creation of an artificial general intelligence, or AGI, capable of matching or even outperforming human beings on any intellectual task. The introduction of OpenAI’s large language model GPT-4 has got people questioning whether we’ve already arrived.


It’s tricky to say for sure, since we don’t really have a definition of intelligence, according to AI expert Christof Koch, chief scientist of the MindScope Program at Seattle’s Allen Institute.

Quizzed by Glenn Zorpette at IEEE Spectrum, Koch says that by one definition of human intelligence, ChatGPT is already a match.

“Most people think about AGI in terms of human intelligence, but with infinite memory and with totally rational abilities to think — unlike us,” Koch says. “Where it can take even very smart people, like Albert Einstein, years to complete their insights and finish their work, an AGI may be able to do this in a single second. If that’s the case, the AI may as well be superintelligent.”

Large language models demonstrate “quite clearly” that you do not have to have a human-level type of understanding in order to compose text “that to all appearances was written by somebody who has had a secondary or tertiary education,” he says.

ChatGPT reminds Koch of a widely read, smart undergraduate student who has an answer for everything, “but who’s also overly confident of his answers and, quite often, his answers are wrong. I mean, that’s a thing with ChatGPT. You can’t really trust it.”

But even this weakness — the so-called tendency to hallucinate, or make assertions that sound semantically and syntactically correct but actually aren’t — has until now been considered a distinctly human trait.

“People do this constantly,” Koch says. “They make all sorts of claims and often they’re simply not true. So again, this is not that different from humans. I grant you, for practical applications right now, you can’t depend on it. You always have to check other sources. But that’s going to change.”

The elephant in the room, of course, is consciousness. Does the AI think like a human? Does it reflect on its own existence? Is it self-aware?

Koch says the concepts of consciousness and intelligence are different. “Intelligence ultimately is about behaviors, about acting in the world. If you’re intelligent, you’re going to do certain behaviors and you’re not going to do some other behaviors. Consciousness is more a state of being. You’re happy, you’re sad, you see something, you smell something, you dread something, you dream, you fear, you imagine something. Those are all different conscious states.”

At least in biological creatures, consciousness and intelligence seem to go hand in hand. But for engineered artifacts like computers, that does not at all have to be the case. Just because you can build a machine that simulates the behavior associated with consciousness, including speech, doesn’t mean that it actually feels anything.

But does that matter? Perhaps not, if the goal of the superintelligent machine is simply practical, like predicting the weather or writing code. All the machine needs to do is predict and then, based on that prediction, do certain things.

“It’s not consciousness that we need to be concerned about,” Koch warns. “It’s their motivation and high intelligence that we need to be concerned with.”

He’s not talking about the threat to creative jobs here. Koch is pretty matter-of-fact in talking about the doomsday scenario of AI turning on humankind — or at the very least of states and terrorists using AI to kill.

“We are building creatures that are clearly getting better and better at mimicking one of our unique hallmarks — intelligence. The military, independent state actors, terrorist groups, will want to marry that advanced intelligent machine technology to warfighting capability. It’s going to happen sooner or later. And then you have machines that might be semiautonomous or even fully autonomous and that are very intelligent and also very aggressive. And that’s not something that we want to do without very, very careful thinking about it.”

He isn’t sure of the timeframe for this. It already seems likely that drones, such as those Russia has used to bomb Ukraine, could be fitted with AI-driven GPS for even more precise targeting and evasion.

“But the only thing I can think of that could happen in 2023 is using a large language model for some sort of concerted propaganda campaign or disinformation. I mean, I don’t see it controlling a lethal robot, for example.

“Right now, what could happen? You could get all sorts of nasty deepfakes or people declaring war or an imminent nuclear attack. I mean, whatever your dark fantasy gives rise to. It’s the world we now live in.”

 
