Predictions of Hollywood’s extinction at the hands of generative AI pale into insignificance beside warnings about the fate of humanity if evil scientists get their way in building artificial general intelligence (AGI).
AGI is a hypothetical system that can perform intellectual tasks at human or superhuman levels. Some scientists think it is either unachievable or still decades away. Others think it could be here in the 2030s.
One of them is Leopold Aschenbrenner who, despite being behind an AGI startup himself, warns that the lust for profit is driving developers to accelerate AGI at such a pace that guardrails are being ignored.
There are even suggestions that western liberal democracy will fall if China gets its hands on the secret to AI superintelligence. It is nothing less than an arms race and Aschenbrenner is here to save us before it’s too late.
“The AGI race has begun,” he writes in a 50,000-word essay. “We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.”
Aschenbrenner is not part of some doomsaying cult. He stands with those, like Elon Musk, calling for greater “situational awareness” of AI’s potential and for intervention at the state level to curb the power of those wielding it.
He doesn’t name them but his principal target is OpenAI, where he was part of a team exploring safeguards around the firm’s AGI development until he was fired last April, reportedly for questioning the company’s deviation from its society-before-profit mission.
The heads of the governance division where he worked at OpenAI have also quit. Another former employee, Daniel Kokotajlo, who had believed OpenAI could be steered toward the safe deployment of AI, shares Aschenbrenner’s views.
He told Sigal Samuel at Vox, “OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care.”
Kokotajlo said he “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”
Aschenbrenner, a German national who lives in San Francisco, makes clear that his polemic “is based on publicly-available information, my own ideas, general field-knowledge, or SF-gossip.”
The OpenAI whistleblower employs dog-whistle politics in his ‘reds under the beds’ warning about the perils to the US if China gets hold of AGI first.
He thinks US security services will “wake up” from 2027 and start to develop their own AGI to counter bad state actors. “The Free World Must Prevail,” he says, ignoring the real and present non-AI threat to democracy on America’s own doorstep.
Here is a summary of his red-flag predictions:
AGI by 2027 is “strikingly plausible.” GPT-2 to GPT-4 took us from preschooler to smart high-schooler abilities in four years. Tracing trendlines in compute, algorithmic efficiencies, and “unhobbling” gains (from chatbot to agent), “we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.” (A rough sketch of that arithmetic follows this list.)
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into a year. “We would rapidly go from human-level to vastly superhuman AI systems. The power — and the peril — of superintelligence would be dramatic.”
As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade in an “extraordinary techno-capital acceleration.”
The nation’s leading AI labs treat security “as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.”
Reliably controlling AI systems much smarter than we are is an unsolved technical problem, he warns. “While it is a solvable problem, things could easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.”
Superintelligence will give a decisive economic and military advantage, he suggests. “In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?”
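The “strikingly plausible” claim above rests on a simple order-of-magnitude (OOM) extrapolation. Here is a minimal sketch of that arithmetic in Python, assuming the rough growth rates the essay cites (about half an order of magnitude per year each for physical compute and algorithmic efficiency); the figures are illustrative assumptions, not a forecast.

```python
# Illustrative sketch of the OOM (order-of-magnitude) arithmetic behind
# the "AGI by 2027" claim. The growth rates below are rough figures from
# Aschenbrenner's essay; treat them as assumptions, not measured data.

COMPUTE_OOM_PER_YEAR = 0.5   # physical training compute (~3x per year)
ALGO_OOM_PER_YEAR = 0.5      # algorithmic efficiency (~3x per year)

def effective_compute_ooms(years: float) -> float:
    """Orders of magnitude of 'effective compute' gained over `years`."""
    return years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)

# GPT-2 (2019) to GPT-4 (2023): roughly four years, so ~4 OOMs of
# effective compute -- the "preschooler to smart high-schooler" jump.
base_jump = effective_compute_ooms(4)

# GPT-4 (2023) to 2027: the same window, hence a comparable-sized jump,
# before counting any "unhobbling" gains (chatbot -> agent).
next_jump = effective_compute_ooms(2027 - 2023)

print(f"GPT-2 -> GPT-4: ~{base_jump:.0f} OOMs (~{10**base_jump:,.0f}x)")
print(f"GPT-4 -> 2027:  ~{next_jump:.0f} OOMs (~{10**next_jump:,.0f}x)")
```

The argument is essentially one of symmetry: if roughly four orders of magnitude of effective compute produced the last qualitative leap, another four by 2027 should produce a comparable one. Whether the trendlines hold that long is exactly what critics dispute, as the reality check below makes clear.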
In a reality check on these claims, Axios managing editor Scott Rosenberg says that the wider consensus among experts is that AGI won’t happen this fast or in this direction.
“That’s not pessimism: The consensus sees so much value and utility in AI where it is now, and where it’s headed long before it gets to AGI, that AGI isn’t really the point.”