The algorithms that underpin social media need to be revised or liberal democracy itself will die. That’s the message from a pair of researchers who have studied the effect of social media on society and believe that much of our current malaise stems from the deliberate warping of the news agenda to suit corporate greed.
Stephan Lewandowsky, a cognitive scientist at the University
of Bristol in the UK, and Anastasia Kozyreva, a philosopher at the Max Planck
Institute for Human Development in Berlin, aren’t the first to point out that
we have sold our souls (our personal data in exchange for cat videos and online
networking) to the likes of Google and Facebook; in return, those platforms sell our data to advertisers, whose targeted content then surfaces in our feeds.
It’s what Harvard social psychologist Shoshana Zuboff has
called an assault on human autonomy in her book The Age of Surveillance
Capitalism, which explains how platforms are incentivized to align their
interests with advertisers, often at the expense of users’ interests or even
their well-being.
Lewandowsky and Kozyreva agree that the algorithms that
govern the information we receive are now doing fundamental damage to our
collective ability to make decisions in our own interests.
They urge a four-point plan of action in response.
“Protecting citizens from manipulation and misinformation,
and protecting democracy itself, requires a redesign of the current online
‘attention economy’ that has misaligned the interests of platforms and consumers,”
they argue in OpenMind magazine. “Achieving a more transparent and
less manipulative online media may well be the defining political battle of the
21st century.”
Contrast that with the Arab Spring of 2010-11, when countries like Tunisia and Egypt experienced revolts against autocratic regimes. At the time, the internet and social media were seen (in the liberal west) as a force for good, undercutting state propaganda with real-time reportage and an ability to connect and coordinate a groundswell of civil action. Since then, states have co-opted social media into their armory, using the same platforms to disseminate misinformation or simply to dismiss truth as fake news. This has happened in authoritarian countries like Russia, and in Europe and the US, where leading social media platforms have aided and abetted the spread of lies because it is in their financial interest to do so.
For example, the writers say YouTube’s recommendations
amplify increasingly sensational content with the goal of keeping people’s eyes
on the screen. They point to a Mozilla Foundation study, “YouTube Regrets,” which found that
YouTube “not only hosts but actively recommends videos that violate its own
policies concerning political and medical misinformation, hate speech, and
inappropriate content.”
In the same vein, our attention online is more effectively
captured by news that is either predominantly negative or awe-inspiring. Misinformation is particularly likely to provoke outrage, and fake
news headlines are designed to be more negative than real news headlines.
“In pursuit of our attention, digital platforms have become
paved with misinformation, particularly the kind that feeds outrage and anger.
Following recent revelations by a whistle-blower, we now know that Facebook’s
newsfeed curation algorithm gave content eliciting anger five times as much
weight as content evoking happiness.”
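To make that weighting concrete, here is a minimal Python sketch of how a reaction-weighted feed-ranking score could work. Only the five-to-one anger-to-happiness ratio comes from the reporting quoted above; the other weights, field names and scoring scheme are illustrative assumptions, not Facebook’s actual code.

```python
# Illustrative sketch of reaction-weighted feed ranking.
# The 5:1 anger-to-happiness ratio reflects the reporting quoted above;
# all other weights, field names, and the scoring scheme are assumptions.

from dataclasses import dataclass, field

# Hypothetical weights per reaction type (anger weighted 5x happiness).
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.0,
    "haha": 1.0,   # treated here as "happiness"
    "angry": 5.0,  # the 5x weighting at issue
}

@dataclass
class Post:
    post_id: str
    reactions: dict = field(default_factory=dict)  # reaction type -> count

def engagement_score(post: Post) -> float:
    """Weighted sum of reactions; higher scores rank higher in the feed."""
    return sum(REACTION_WEIGHTS.get(r, 0.0) * n for r, n in post.reactions.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

# A post drawing 100 angry reactions outscores one drawing 400 likes
# (100 * 5.0 = 500 vs. 400 * 1.0 = 400), so outrage rises to the top.
feed = rank_feed([
    Post("cheerful", {"like": 400}),
    Post("outrage", {"angry": 100}),
])
print([p.post_id for p in feed])  # ['outrage', 'cheerful']
```

The point of the sketch is the incentive rather than the implementation: any score that rewards high-arousal reactions more than calm ones will steadily tilt the feed toward whatever provokes them.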
Yet Facebook and other social media platforms have responded. Whereas only a few years ago they were arguing that they had no political role and shouldn’t filter what was posted on their platforms, misinformation about controversial topics (read by some as conspiracy theories) like Covid-19, the war in Ukraine and climate change is now being censored by the networks themselves.
This too is problematic, according to Lewandowsky and
Kozyreva. “This kind of content moderation inevitably means that human decision
makers are weighing values. It requires balancing a defense of free speech and
individual rights with safeguarding other interests of society, something
social media companies have neither the mandate nor the competence to achieve.”
Their main remedy is to ensure, via law if necessary, that
we are all better educated about the extent to which our knowledge of the world
is being warped.
Even people who are aware of algorithmic curation tend not
to have an accurate understanding of what that involves. A Pew Research Center report published in 2019 found that 74% of Americans did not know that Facebook maintained data about their interests and traits.
“They are often unaware that the information they consume
and produce is curated by algorithms. And hardly anyone understands that
algorithms will present them with information that is curated to provoke
outrage or anger, attributes that fit hand in glove with political
misinformation.”
So, to shift this balance of power in favor of objective truth, they argue for a redesign or resetting not just of the algorithms themselves but of public knowledge of what platforms do and what they know.
· There must be greater transparency and more individual control of personal data.
· Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. For example: does the material come from a trustworthy place? Who shared this content previously?
· The public should be alerted when political speech circulating on social media is part of an ad campaign.
· The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. That means independent agencies must be able to audit platform data and identify measures to stem the flow of misinformation (a sketch of what this might look like in practice follows below).
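As a thought experiment, the second and fourth points might look something like the sketch below: each feed item carries provenance metadata that a user’s client, or an independent auditor, can inspect. Every field name, trust score and threshold here is a hypothetical illustration, not a description of any platform’s actual API.

```python
# Hypothetical sketch: attaching provenance metadata to feed items so that
# quality signals can be surfaced to users and audited independently.
# All field names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FeedItem:
    url: str
    source: str              # publisher or original poster
    source_trust: float      # 0.0-1.0, e.g., from an independent ratings body
    is_paid_political: bool  # part of a political ad campaign?
    prior_sharers: int       # how widely it circulated before reaching you

def quality_label(item: FeedItem) -> str:
    """Translate provenance metadata into a user-facing risk label."""
    if item.is_paid_political:
        return "Paid political advertising"
    if item.source_trust < 0.4:  # illustrative threshold
        return "Low-credibility source"
    return "No warnings"

item = FeedItem(
    url="https://example.com/story",
    source="example.com",
    source_trust=0.3,
    is_paid_political=False,
    prior_sharers=12000,
)
print(quality_label(item))  # Low-credibility source
```

An auditor with access to such metadata could then measure, for example, what fraction of widely shared items in a feed come from low-credibility sources.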
There are laws in progress in the US and Europe intended to tighten data privacy, but the writers suggest that there is considerable public and political skepticism about regulation in general and about governments stepping in to regulate social media content in particular.
The best solution, they say, lies in shifting control of
social media “from unaccountable corporations to democratic agencies that
operate openly, under public oversight.”
But how likely is that?
“There’s no shortage of proposals for how this might work.
For example, complaints from the public could be investigated. Settings could
preserve user privacy instead of waiving it as the default.”
Another idea is to develop a digital literacy toolkit
called Boosting that aims to increase users’ awareness and competence
in navigating the challenges of online environments.
The problem, as I see it, is this: outside of academia, outside of the liberal intelligentsia and outside of intellectual paternalism, who actually cares enough to ditch “The Circle” and lose their crutch of connection to the outside world?