Tuesday, 24 May 2022

AI Ethics Is Vital, So Why Aren’t More of Us Talking About It?

NAB

One thing the pandemic shocked into life was a rush to deploy AI algorithms in our national health systems. You can understand why: states jumped on anything that might get the virus under control, and so we now have AIs that track and trace our health, spawning a new economic sector built on the flow of biodata.

In and of itself that may be no cause for concern. What should be a worry for all of us is whose hand is on the tiller. The lack of progress on AI governance should be setting off alarm bells across society, argue a pair of esteemed academics at the Carnegie Council for Ethics in International Affairs.

Anja Kaspersen and Wendell Wallach, directors of the Carnegie Artificial Intelligence and Equality Initiative (AIEI), say that despite the proliferation of interest in and activity around AI, we humans have been unable to address the fundamental problems of bias and control inherent in the way we have developed and used it. What’s more, it’s getting a little late in the day to do much about it.

In their paper, “Why Are We Failing at the Ethics of AI?” the pair attack the way that “leading technology companies now have effective control of many public services and digital infrastructures through digital procurement or outsourcing schemes.”

They are especially troubled by “the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about [them], either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.”

This “engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation,” Kaspersen and Wallach say.

So, why hasn’t more been done?

They think it’s partly because society tends to notice a problem with AI only late in its development, or once it has already been deployed. Or we focus on some aspects of ethics while ignoring others that are more fundamental and challenging.

“This is the problem known as ‘ethics washing’ — creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.”

Another obstacle to what they would regard as proper AI governance is, quite simply, the lack of any effective action.

Lots of hot air has yet to translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives. The use of AI remains the domain of a few companies and organizations “in small, secretive, and private spaces,” where decisions are concentrated in a few hands, all while inequalities grow at an alarming rate.

Major areas of concern include the power of AI systems to enable surveillance, pollution of public discourse by social media bots, and algorithmic bias.
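
To make that last concern concrete, here is a minimal sketch of one common way algorithmic bias is audited: comparing a model’s selection rates across groups. This is not from the paper; the loan-approval setting, the numbers, and the helper function are all invented for illustration, and the four-fifths threshold is a heuristic borrowed from US employment law rather than anything Kaspersen and Wallach prescribe.

```python
# A minimal sketch, not from the paper: auditors often quantify
# algorithmic bias by comparing a model's selection rates across
# demographic groups. All names and numbers here are invented.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. loans approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs (1 = approve), split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [0, 1, 0, 0, 1, 0, 0, 1]   # 3/8 approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# The "four-fifths rule", a heuristic from US employment law: a ratio
# of selection rates below 0.8 is commonly flagged as potential
# disparate impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f} (ratio {ratio:.2f})")
if ratio < 0.8:
    print("flag: potential disparate impact")
```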

“In a number of sensitive areas, from health care to employment to justice, AI systems are being rolled out that may be brilliant at identifying correlations but do not understand causation or consequences.”

That’s a problem, Kaspersen and Wallach argue, because too often those in charge of embedding and deploying AI systems “do not understand how they work, or what potential they might have to perpetuate existing inequalities and create new ones.”
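
As a toy illustration of correlation without causation (again, not from the paper; it loosely echoes a well-known pneumonia-risk example from the machine learning literature, with invented data):

```python
# A toy sketch, not from the paper: it echoes the well-known
# pneumonia-risk case, where models learned that asthma patients had
# LOWER mortality -- not because asthma protects anyone, but because
# those patients were treated far more aggressively. All records
# below are invented.

# Each record: (has_asthma, got_aggressive_care, died)
history = [
    (True,  True,  False), (True,  True,  False), (True,  True,  False),
    (False, False, True),  (False, False, False), (False, False, True),
    (False, False, False), (False, False, True),
]

def mortality_rate(records, asthma):
    """Observed death rate for patients with/without asthma."""
    outcomes = [died for has_asthma, _, died in records if has_asthma == asthma]
    return sum(outcomes) / len(outcomes)

print("asthma:   ", mortality_rate(history, True))   # 0.0 -> looks "low risk"
print("no asthma:", mortality_rate(history, False))  # 0.6 -> looks "high risk"

# A correlation-driven triage rule ("asthma => low risk, send home")
# would withdraw the very care that produced the pattern. The system
# captured the correlation, never the cause.
```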

There’s another big issue to overcome as well. All of this chatter and concern seems to be taking place in academic spheres or among the liberal elite. Kaspersen and Wallach call it the ivory tower.

The public’s perception of AI is generally of the sci-fi variety, where robots like the Terminator take over the world. Yet the creep of algorithmic bias into our day-to-day lives is more of a dystopian poison.

“The most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives.”

Patronizingly, they say that concepts such as ethics, equality, and governance “can be viewed as lofty and abstract,” and that “non-technical people wrongly assume that AI systems are apolitical,” failing to grasp how structural inequalities arise once AI is let out into the wild.

“There is a critical need to translate these concepts into concrete, relatable explanations of how AI systems impact people today,” they say. “However, we do not have much time to get it right.”

Moreover, the belief that incompetent and immature AI systems, once deployed, can be remedied “is an erroneous and potentially dangerous delusion.”

Their solution to all of this is, as diginomica’s Neil Raden notes in his critique, somewhat wishy-washy.

It goes along the lines of urging everyone — including the likes of Microsoft, Apple, Meta, and Google — to take ethics in AI a lot more seriously and to be more transparent in educating everyone else about its use.

Unfortunately, as Raden observes, the academics’ broadside against the AI community has failed to hit home.

“It hasn’t set off alarm bells,” he writes, “more like a whimper from parties fixated on the word ‘ethics’ without a broader understanding of the complexity of current AI technology.”

 

