One of the age-old themes illuminated in Shakespeare’s Macbeth is that of free will. Does Macbeth possess the agency to commit murder, or is he simply fulfilling the prophecy of the Witches? Or, to muddy the waters further, what role does the Witches’ foretelling of the future play in Macbeth’s destiny? Had he not heard them, would his fate have been any different?
This dilemma (most recently interpreted by Joel Coen in The Tragedy of Macbeth, the latest cinema version of the play) is also one that can be applied to ethical considerations about how computer algorithms increasingly appear to govern our lives.
To what extent do recommendation engines, addressable advertising or personalized political messaging determine what we do, where we go, what we buy and what we think?
Carissa Veliz, associate professor at the Faculty of Philosophy and the Institute for Ethics in AI at Oxford University, is in no doubt. She argues that ceding more of our choices to algorithms threatens to denude mankind of mavericks, leaders, inventors and creators — anyone who thinks outside the box.
“We want a society that allows and stimulates actions that
defy the odds,” she writes at Wired. “Yet the more we use AI to categorize
people, predict their future, and treat them accordingly, the more we narrow
human agency, which will in turn expose us to uncharted risks.”
Predictions are not innocuous, she maintains. The extensive
use of predictive analytics can change the way human beings think about
themselves.
Such ethical issues lead back to one of the oldest debates
in philosophy: If there is an omniscient God, can we be said to be truly free?
If a supreme being already knows all that is going to happen, that means
whatever is going to happen has been predetermined. The implication is that our
feeling of free will is nothing but that: a feeling.
“Part of what it means to treat a person with respect is to
acknowledge their agency and ability to change themselves and their
circumstances,” she contends. “If we decide that we know what someone’s future
will be before it arrives, and treat them accordingly, we are not giving them
the opportunity to act freely and defy the odds of that prediction.”
A second, related ethical problem with predicting human
behavior is that by treating people like things, we are creating
self-fulfilling prophecies. Predictions are rarely neutral. More often than
not, the act of prediction intervenes in the reality it purports to merely
observe.
“For example, when Facebook predicts that a post will go
viral, it maximizes exposure to that post, and lo and behold, the post goes
viral.”
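Her Facebook example is, at bottom, a feedback loop, and it is easy to sketch in code. The toy Python simulation below is purely illustrative (the engagement numbers and the boost factor are invented; this is not Facebook’s actual ranking system): a naive predictor flags the posts with the highest early engagement as likely to go viral, the feed then gives those posts extra exposure each round, and the flagged posts duly finish with the most views, whether or not they were intrinsically more interesting.

import random

def simulate(num_posts=6, boost_predicted=True, rounds=20, seed=42):
    # Toy feed loop illustrating a self-fulfilling prediction.
    # Every post draws its "organic" interest from the same distribution,
    # so without boosting no post is genuinely more viral than any other.
    rng = random.Random(seed)
    posts = [f"post_{i}" for i in range(num_posts)]
    views = {p: rng.randint(1, 10) for p in posts}  # random early engagement
    # Naive "predictor": flag the top half by early views as likely viral.
    ranked = sorted(posts, key=views.get, reverse=True)
    predicted_viral = set(ranked[: num_posts // 2])
    for _ in range(rounds):
        for p in posts:
            organic = rng.randint(0, 5)  # same odds for every post
            boost = 10 if (boost_predicted and p in predicted_viral) else 0
            views[p] += organic + boost  # the prediction drives the outcome
    return views, predicted_viral

boosted, flagged = simulate(boost_predicted=True)
organic_only, _ = simulate(boost_predicted=False)
print("flagged as likely viral:", sorted(flagged))
print("views with boosting:   ", boosted)
print("views without boosting:", organic_only)

Run with the same seed, the flagged posts dominate the totals only in the boosted run: the “prediction” supplied its own confirmation.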
Veliz goes further, arguing that if AI-driven predictive
analytics are partly creating the reality they purport to predict, “then they
are partly responsible for the negative trends we are experiencing in the
digital age, from increasing inequality to polarization, misinformation, and
harm to children and teenagers.”
By contrast, there is immeasurable value to society in believing in free will, she says. After all, society has countered theological fatalism (the idea that everything is already known to God) by creating ways to improve our health and our education, and by punishing those who transgress its norms.
“The more we use predictive analytics on people, the more we
conceptualize human beings as nothing more than the result of their
circumstances, and the more people are likely to experience themselves as
devoid of agency and powerless in the face of hardship.”
In other words, Veliz says we have to choose between treating human beings “as mechanistic machines” whose future can and should be predicted (welcome to The Matrix, folks), and treating each other as independent agents (in which case making people the target of individual predictions is inappropriate).
Unless, of course, our minds have already been made up for
us.