What happens if your avatar is groped online? Or your virtual items are stolen? Or the photoreal 3D representation of you is bullied? These are not abstract or niche concerns but growing issues with real-world consequences. If the metaverse is to become the open, democratic fresh start for connected society that some envisage, then how it is policed needs as much attention as the technical standards for building it.
https://amplify.nabshow.com/articles/who-polices-the-metaverse/
Virtual groping is real. In 2016, on the blogging platform Medium, Jordan Belamire reported a sexual assault while playing the VR game QuiVr. Her avatar was molested by other players making “rubbing, grabbing, and pinching gestures,” forcing her to quit.
Researchers at Melbourne University’s School of Computing and Information Systems studied the response and found a clear lack of consensus about what counts as harmful behavior in virtual spaces. Yet the laws of the real world are not well placed to address the wrongs that occur in digital environments.
That’s why researchers and designers of virtual worlds are
turning to technology-based tools to proactively manage VR communities.
“Multiplayer digital gaming — which has a long history of
managing large and sometimes toxic communities — offers a wealth of ideas that
are key to understanding what it means to cultivate responsible and thriving VR
spaces,” writes Melbourne University PhD researcher Lucy Sparrow in Wired.
“By showing us how we can harness the power of virtual communities and
implement inclusive design practices, multiplayer games help pave the way for a
better future in VR.”
However, Sparrow’s own research on ethics and multiplayer games, “Apathetic Villagers and the Trolls Who Love Them,” published in the Proceedings of the 31st Australian Conference on Human-Computer Interaction, revealed that players can be resistant to “outside interference” in virtual affairs. And there are practical problems, too: in fluid, globalized online communities, it is difficult to adequately identify suspects and determine jurisdiction.
As it stands, one of the most common forms of governance in virtual worlds is a “reactive and punitive” form of moderation based on reporting users, who may then be warned, suspended, or banned. Given the sheer size of virtual communities, these processes are often automated: an AI might process reports and remove users or content, or removals may be triggered once a certain number of reports against a particular user have been received.
“Because they are reactive, they do little to prevent
problematic behaviors or support and empower marginalized users,” Sparrow
suggests. “Automation is helpful in managing huge amounts of users and material,
but it also leads to false positives and negatives, all while raising further
concerns surrounding bias, privacy, and surveillance.”
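The reactive, report-threshold pattern described above is straightforward to sketch in code. The following Python example is purely illustrative, assuming made-up thresholds and user names rather than any real platform’s rules, but it shows why such a system can only act after harm has already been reported.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ReportThresholdModerator:
    """Reactive moderation: act only once reports accumulate against a user."""
    warn_threshold: int = 3        # illustrative values, not any real platform's
    suspend_threshold: int = 10
    reports: dict = field(default_factory=lambda: defaultdict(int))

    def file_report(self, reported_user: str) -> str:
        """Record one report against a user and return the action taken, if any."""
        self.reports[reported_user] += 1
        count = self.reports[reported_user]
        if count >= self.suspend_threshold:
            return f"suspend {reported_user} (reports={count})"
        if count >= self.warn_threshold:
            return f"warn {reported_user} (reports={count})"
        return "no action"

# Nothing happens until harm has already been reported, and a coordinated
# false-report campaign would trip the same thresholds: two of the weaknesses
# Sparrow highlights.
moderator = ReportThresholdModerator()
for _ in range(3):
    print(moderator.file_report("player_42"))  # no action, no action, warn
```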
As an alternative, some multiplayer games have experimented with democratic self-governance. Riot Games implemented a short-lived Tribunal system in the multiplayer game League of Legends that allowed players to review reports against other players and vote on their punishments. Similar systems operate in Valve’s CS:GO and Dota 2.
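As a rough illustration of how such a community-review system might tally verdicts, here is a minimal Python sketch; the quorum and majority figures are assumptions chosen for the example, not the actual parameters of Riot’s Tribunal or Valve’s review systems.

```python
from collections import Counter

def tribunal_verdict(votes, quorum=5, punish_ratio=0.66):
    """Tally community reviewers' votes on a reported case.

    `votes` is a list of "punish" / "pardon" strings; `quorum` and
    `punish_ratio` are illustrative assumptions, not real platform settings.
    """
    if len(votes) < quorum:
        return "pending"  # not enough reviewers have weighed in yet
    tally = Counter(votes)
    if tally["punish"] / len(votes) >= punish_ratio:
        return "punish"
    return "pardon"

print(tribunal_verdict(["punish"] * 4 + ["pardon"]))  # punish (4/5 majority)
print(tribunal_verdict(["punish", "pardon"]))         # pending (below quorum)
```

Even in this toy form, the open questions Sparrow raises are visible: the reviewers are doing unpaid moderation labor, and nothing in the tally itself prevents a coordinated group from voting as a bloc.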
Forms of self-governance in VR are also on Facebook’s radar:
A 2019 paper, “Harassment in Social Virtual Reality: Challenges for Platform
Governance,” suggests that the company is interested in promoting
community-driven moderation initiatives across individual VR applications as a
“potential remedy” to the challenges of top-down governance.
“These kinds of systems are valuable because they allow
virtual citizens to play a role in the governance of their own societies,”
Sparrow writes. “However, co-opting members of the community to do difficult,
time-consuming, and emotionally laborious moderation work for free is not
exactly an ethical business model. And if — or when — toxic hate groups flourish,
it is difficult to pinpoint who should be responsible for dealing with them.”
One way of addressing these obstacles is to hire community managers, or CMs. “Commonly employed by gaming and social VR companies to manage virtual communities, CMs are visible people who can help facilitate more proactive and democratic decision-making processes while keeping both users and developers of VR accountable,” Sparrow writes.
CMs can remind players of codes of conduct and can sometimes
warn, suspend, or ban users; they can also bring player concerns back to the
development team. CMs may have a place in the metaverse too, but only if we
figure out how to treat them properly.
As Facebook (itself no paragon of virtue) leads the drive to move our work and social interactions into VR, the importance of dealing with harmful behaviors online is being brought sharply into focus.
It connects to a wider concern about the privacy and security of our data as more of our digital selves are ported to the metaverse.
“Spending 20 minutes in a VR simulation leaves just under two million unique recordings of body language,” points out digital marketing agency Media.Monks. “This directly leads to concern for privacy using XR technology or indeed data privacy across the metaverse.”
It adds, “Pervasive illegality using emerging technologies will pose challenges in how they are regulated around the world. Platforms already wield great power in dictating who can use their platforms and to what end. Who will be the police force of the metaverse?”
Who polices the police?