Does the recent chaos at OpenAI shine a spotlight on the only debate about AI that matters: Its existential threat to humanity?
Thom Waite explores this: Depending on who you ask, the future of humanity — in a world populated by extremely intelligent machines — looks very different.
On one hand, you have techno-utopias, where AI has solved all of humanity’s most difficult problems, from the climate crisis to disease to interstellar travel. On the other, you have scenes from a Terminator-style timeline, where Skynet has been built and we are all slaves to the machines.
OpenAI’s Theoretical, Existential Crisis
Could this raging philosophical debate be at the heart of the schism that ripped apart OpenAI a few weeks ago and saw CEO Sam Altman dramatically ousted — and then welcomed back?
Even if the disagreement (albeit a seemingly fundamental one) was about more prosaic matters (stocks and shares, maybe), it is worth pursuing where this leads.
One reading of the summary dismissal points to the OpenAI board’s charter commitment that its “primary fiduciary duty is to humanity.” In other words, if it sees something that might harm humanity, it is empowered to make whatever leadership changes are necessary to keep the threat contained.
Superficially, that’s a bit like Google’s (now Alphabet’s) former corporate motto, “Don’t Be Evil,” which always had more to do with marketing spin than with corporate social policy (witness the siphoning of user data from its platforms and Android OS to sell ads).
Perhaps Altman and his researchers had made some sort of breakthrough that made artificial general intelligence (AGI) likely sooner rather than later. AGI is the type of super-AI that matches human thinking so exactly that there is no discernible difference (ergo, what is the point of humans?). There were rumors of this, but they have died on the vine since Altman went back to the lab.
Nevertheless, as Waite puts it, it is easy to see how the developments have reignited the debate about the future of AI, with “doomers” at one end of the spectrum and believers in “effective accelerationism” at the other, preaching a version of AI utopianism.
Waite sets out what each camp believes.
Doomers vs. Accelerationists
Unsurprisingly, “doomer” is the label for people who believe there is a high probability that AGI will be a bad thing (a high p(doom), in other words). As a result, they often advocate slowing down AI development, or even pausing it until guardrails can be put in place.
Emmett Shear, the former chief executive of livestreaming site Twitch and the short-lived interim CEO of OpenAI when Altman was fired, dubbed himself a “doomer” earlier this year.
According to Waite, members of the board who were instrumental in the recent shake-up have also expressed deep concerns about the future of the technology, which sets the battleground for the supposed conflict.
As their moniker suggests, accelerationists want AI development to go, go, go. Influential entrepreneurs like Marc Andreessen are in this camp, along with other tech evangelists who only last year were motoring on about the greater good of Web3, NFTs and crypto, those on a quest for bodily immortality (like Peter Thiel), and those actually serious about fusing their consciousness with the network in a singularity (which didn’t end well for Johnny Depp’s scientist in Transcendence).
The movement isn’t monolithic, of course. Some believe that it’s important to achieve AGI as soon as possible because it will usher in a post-scarcity society, radically improving people’s living conditions across the globe and, at its core, reducing humanity’s net suffering.
Others argue that it’s not about reducing human suffering at all. Says Waite, “They say that society’s only responsibility is to build superior beings that can take our place and spread their superintelligence throughout the universe. In this scenario, the survival of the human species is irrelevant.”
This type of thinking borders on the cultish and, as Waite points out, doesn’t have many adherents.
The debate may be flippantly presented here, but it matters because the dividing lines are hardening.
“To some extent, this makes sense,” Waite muses. “If you truly believe that AI can right all of humanity’s wrongs, find cures for diseases, save us from climate catastrophe, and bring about an era of abundance — as the most ardent accelerationists do — then it’s basically a moral imperative to make sure it happens as soon as possible.”
The logic mirrors that of fundamentalist anti-abortionists who believe every sperm is sacred: anyone standing in the way would, hypothetically, have millions of deaths on their hands.
At the other extreme are those who have perhaps watched or read too many dystopian sci-fi stories and believe that machines will not only gain intelligence but that, in doing so, they will inevitably spell the end of us and our outmoded muscle-and-bone technology.
“If you believe this, then the critical mission is to stop development, or at least slow it down until we can work out how to do it safely,” Waite notes. “OpenAI reinstating Sam Altman is considered by many to be a failure of this mission, since it appears to override the original aims of the company’s board – to protect humanity from the worst consequences of a rushed AI system.”
Naturally, there’s a middle ground, so there’s hope that the twain may yet meet. This belief stems from a broader reading of human history, one that understands the survival of our species to be inextricably linked to new technology.
(The jump cut from an ape smashing bones in a tantrum to a spaceship orbiting the moon can serve as a useful metaphor: we use tech to advance.)
… And Somewhere in the Middle
It’s also the case that between these two polarized camps lie the vast majority of AI researchers and billionaire funders.
“Hopefully, they can take some of the arguments from both sides and work out how to get the best out of the technology while limiting the damage it might cause, through measures like industry regulation and international safety deals,” says Waite.
Though this, to me, strays a little too far toward the side of the argument that says AI is just a tool, that how we use it, for good or for evil, is what counts, and that good will prevail.
Are you so sure?