Wednesday, 27 March 2024

Don’t Treat AI Like Pandora’s Box, Warns Jaron Lanier

NAB

If you believe Jaron Lanier, there’s no intelligence in our current AI, but we should be scared nonetheless. The renowned computer scientist and virtual reality pioneer is a humanist who says he speaks his own mind even while on the Microsoft payroll.


“The way I interpret it is there’s no AI there. There’s no entity. From my perspective, the right way to think about LLMs like ChatGPT is as a collaboration between people. You take in what a bunch of people have done and you combine it in a new way, which is very good at finding correlations. What comes out is a collaboration of those people that is in many ways more useful than previous collaborations.”

Lanier was speaking with Brian Greene as part of “The Big Ideas” series, supported in part by the John Templeton Foundation. He argued that treating AI as “intelligent” gives it an agency it technically does not have while absolving us of our own responsibility to manage it.

“There’s no AI, there’s just the people collaborating in this new way,” he reiterated. “When I think about it that way, I find it much easier to come up with useful applications that will really help society.”

He acknowledges that anthropomorphizing AI is natural when confronted with something we can’t quite comprehend.

At present, because large language models seem to work in the same way that biological neurons do, we have assigned machine and human to the same category. Erroneously, in Lanier’s view.

“Perceiving an entity is a matter of faith. If you want to believe your plant is talking to you, you can, you know. I’m not going to go and judge you. But this is similar to that.”

The risk of not treating AI as a human-driven tool is that the dystopian fiction of The Terminator becomes a self-fulfilling prophecy.

“I have to really emphasize that it’s all about the people. It’s all about humans. And the right question to assess is: could humans use this stuff in such a way as to bring about a species-threatening calamity? And I think the clear answer is yes,” he says.

“Now, I should say that I think that’s also true of other technologies, and has been true for a while. The truth is that the better we get with technologies, the more responsible we have to be and the less we are beholden to fate,” he continues.

“The power to support a large population means the power to transform the Earth, which means the power to transform the climate, which means the responsibility to take charge for the climate when we didn’t before.

“And there’s no way out of that chain that [doesn’t] lead to greater responsibility.”

Ultimately, the way to prevent The Matrix from ever happening is to frame AI as a human responsibility.

“The more we hypothesize that we’re creating aliens who will come and invade, the less we’re taking responsibility for our own stuff.”

Lanier adds, “There are plenty of individuals at Microsoft who wouldn’t accept everything I say. So this is just me. But at any rate, what I can say is that Microsoft and OpenAI and the broader community do serious work on guardrails to keep it from being terrible. That’s the reason why nothing terrible has happened so far in the first year and a half of AI.”
