Thursday, 20 July 2023

Ted Chiang: Who Has the Power to Determine AI’s Impact?

NAB 

Esteemed sci-fi author Ted Chiang says we should reframe the debate about AI as one about the ethics of labor exploitation.

article here

Rather than think of AI as some nuclear-level threat to humanity, he proposes we think of it as a management consultancy, albeit a faceless, bureaucratic entity in hock to capital.

On the plus side, this means we do have it within our power to control and shape AI’s impact on society, and on the workforce in particular.

On the debit side, change demands that the executives of already powerful tech companies take responsibility for guiding the ethical future of AI and, by extension, of humanity.

That’s the really scary thought: that we’re reliant on Zuckerberg, Musk and the titans at Microsoft, Apple, Amazon and Google to make decisions that are right, not ones purely designed to rack up profit.

Chiang explains his argument in an essay for The New Yorker, “Will AI Become the New McKinsey?”

“I would like to propose another metaphor for the risks of AI [and] suggest that we think about AI as a management consulting firm, along the lines of McKinsey & Company.

 “Just as AI promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America. Even in its current rudimentary form, AI has become a way for a company to evade responsibility by saying that it’s just doing what ‘the algorithm’ says, even though it was the company that commissioned the algorithm in the first place.”

He says we should ask: How do we prevent AI from assisting corporations in ways that make people’s lives worse?

“It will always be possible to build AI that pursues shareholder value above all else, and most companies will prefer to use that AI instead of one constrained by your principles. Is there a way for AI to do something other than sharpen the knife blade of capitalism?”

When Chiang refers to capitalism, he is specifically criticizing “the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.”

He says, “If we cannot come up with ways for AI to reduce the concentration of wealth, then I’d say it’s hard to argue that AI is a neutral technology, let alone a beneficial one.”

“By building AI to do jobs previously performed by people, AI researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in.”

He says, “The doomsday scenario is not a manufacturing AI transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s AI-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.”

If AI is as powerful a tool as its proponents claim, they should be able to find other uses for it besides intensifying the ruthlessness of capital, he argues.

But the “they” here have a lot of work to do. With great power comes great responsibility, he argues.

“The tendency to think of AI as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires. That hard work will involve things like addressing wealth inequality and taming capitalism.

“For technologists, the hardest work of all — the task that they most want to avoid — will be questioning the assumption that more technology is always better, and the belief that they can continue with business as usual and everything will simply work itself out.”

Interviewed by Alan Berner at Vanity Fair, Chiang says that no software anyone has built is smarter than humans. What we have created, he says, are vast systems of control.

“Our entire economy is this kind of engine that we can’t really stop. It probably is possible to get off, but we have to recognize that we are all on this treadmill of our own making, and then we have to agree that we all want to get off. We are only building more tools that strengthen and reinforce that system.”

