Creators and tech entrepreneurs want to build products that
will change the world, and often they do, but not in the way they imagined. Is
it time for them to implement, upgrade, or completely rethink their business
models and structural mechanisms to reduce the negative impact on us all?
“The failure to predict the unintended consequences of
technology is deeply problematic and raises thorny questions,” writes Rachel Botsman
in Wired. “Should entrepreneurs be held responsible for the harmful
consequences of their innovations? And is there a way to prevent these
unintended consequences?”
Botsman is a self-described “trust expert” and TED speaker
who says she has advised “successful entrepreneurs,” and she clearly has the
trust of one of them: Aza Raskin, the interface designer who devised “infinite scroll,”
the feature on our phones that keeps us endlessly scrolling through content with
the simple swipe of a finger.
Raskin tells Botsman that his intention was to create
something that could focus our attention and control our tempo when visiting
websites and apps. He says he did not foresee how tech giants would exploit
this design principle, creating apps and algorithms that automatically serve more
and more content without our asking for it, or necessarily letting us opt
out.
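For readers who have never built one, the pattern itself is mechanically simple. Below is a minimal, illustrative sketch in TypeScript of how an infinite-scroll feed is typically wired up in a browser. The fetchMoreItems helper and the /api/feed endpoint are assumptions made for this example, not anything from Raskin’s original design.

// Minimal infinite-scroll sketch (illustrative only).
// A hidden "sentinel" element sits at the bottom of the feed; whenever
// it scrolls into view, another page of content is fetched and appended.

// Hypothetical endpoint wrapper, assumed for the example.
async function fetchMoreItems(page: number): Promise<string[]> {
  const res = await fetch(`/api/feed?page=${page}`);
  return res.json();
}

const feed = document.querySelector("#feed") as HTMLElement;
const sentinel = document.querySelector("#sentinel") as HTMLElement;
let page = 0;

// When the sentinel becomes visible, fetch and append the next page.
// Note what is absent: no "last page" check and no stopping point;
// the feed refills for as long as the user keeps scrolling.
const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  for (const text of await fetchMoreItems(page++)) {
    const item = document.createElement("article");
    item.textContent = text;
    feed.appendChild(item);
  }
});

observer.observe(sentinel);

The pull of the design is exactly that absence of a stopping point: the next page arrives before the user ever has to decide to ask for it.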
“The thing I regret most is not packaging the inventions
with the philosophy or paradigm in which they’re supposed to be used,” says
Raskin. “There was a kind of naive optimism about thinking that my inventions
would live in a vacuum, and not be controlled by market forces.”
Raskin may not be alone. Larry Page and Sergey Brin probably
meant it when they adopted “Don’t be evil” as Google’s motto. It was a start-up
challenging the establishment, determined to do things differently. Now
Alphabet is perceived as a tech dinosaur monopolizing our data for pure
corporate greed.
Botsman mentions Airbnb’s founders, who didn’t foresee the
negative impacts of short-term rentals on local communities. “When Justin
Rosenstein invented the Like button, he didn’t imagine the effect that
receiving hearts and likes — or not — would have on young teens’ self-esteem.
I’m not a fan of Facebook, but Mark Zuckerberg arguably didn’t start the social
media giant as a tool for political interference.”
The defense from tech entrepreneurs who have seen their
original innovations spin out of control is that they couldn’t possibly have imagined
the negative effects their ideas would have at scale; that there was no way even
they could predict the future.
Botsman, with Raskin’s help, debunks this argument.
Factors that hamper clear-eyed decision-making about the longer-term
consequences of any invention include ignorance, short-termism, and speed.
“Speed is the enemy of trust,” she writes. “To make informed
decisions about which products, services, people, and information deserve our
trust, we need a bit of friction to slow us down. When the time frame of
consumer adoption is compressed from decades to months, it’s easy for
entrepreneurs to ignore the deeper and often subtle behavioral changes those
innovations are introducing at an accelerated rate.”
As Raskin points out, “an inability to envision the impact
at scale is actually a really good argument as to why one shouldn’t be able to
deploy tech at scale. If you can’t determine the impacts of the technology
you’re about to unleash, it’s a sign you shouldn’t do it.”
Even if unintended consequences can’t be eliminated, we can get
better at anticipating and mitigating them.
Take social media. Right now, the original inventors of
platforms — Zuckerberg (Facebook), Jack Dorsey (Twitter), Chad Hurley (YouTube)
— can’t be held responsible for the content that users choose to post. “But
they should be liable for any content that algorithms they write and employ
spread and promote,” Botsman suggests. “Regulation can’t force people to use a
product or service in a responsible way. But entrepreneurs should be held
responsible for structural and design decisions they make that either protect
or violate the best interests of users, and society overall.”
But even before regulation comes into play, the process of
thinking through ‘unintended consequences’ should be baked into the design
philosophy of a product.
Raskin has been doing groundwork on the matter alongside Tristan Harris (a startup founder and former employee of both Google and Apple); the two are co-founders of the Center for Humane Technology.
Firstly, they would like to see a new open source license
introduced that comes with a Hippocratic oath. It would contain a “bill of
rights and a bill of wrongs,” outlining specific situations or usages of the
tech that would cause the license to be revoked. The idea would help prevent a
creator’s technology from being misused with impunity.
Raskin’s second practical solution for holding entrepreneurs
responsible is to tie the scale of liability to the scale of power.
“If your product or service is being used by less than
10,000 people you should be bound by different regulations than if your user
base is bigger than a nation state,” says Raskin.
Botsman calls this a “permission at scale” license,
explaining that every time an invention hits an adoption milestone — 100,000
users, a million users, a billion users and so on — an entrepreneur would need
to reapply for their license based on the positive and negative impacts of
their invention. A progressive scale of liability would mean plenty of
innovation at the small scale, but as soon as a product has the surface area to
create harm, its creator carries the responsibility that pairs with it.
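Neither Botsman nor Raskin specifies how such a license would be encoded, but mechanically it reduces to a threshold lookup. The sketch below is purely illustrative: the tier names and review rules are invented for the example, and only the user-count milestones echo those named in the article.

// Illustrative "permission at scale" lookup. Tier names and review
// rules are assumptions; the milestones (100,000, a million, a billion)
// come from Botsman's description.
interface LicenseTier {
  maxUsers: number;        // the tier covers products up to this many users
  name: string;
  reviewRequired: boolean; // must the creator reapply at this scale?
}

const TIERS: LicenseTier[] = [
  { maxUsers: 10_000,        name: "sandbox",   reviewRequired: false },
  { maxUsers: 100_000,       name: "community", reviewRequired: true },
  { maxUsers: 1_000_000,     name: "regional",  reviewRequired: true },
  { maxUsers: 1_000_000_000, name: "global",    reviewRequired: true },
];

// Return the first tier whose ceiling the product has not yet crossed;
// past the last ceiling, the product stays in the top tier.
function tierFor(userCount: number): LicenseTier {
  return TIERS.find((t) => userCount <= t.maxUsers) ?? TIERS[TIERS.length - 1];
}

console.log(tierFor(9_500).name);   // "sandbox": below every review threshold
console.log(tierFor(250_000).name); // "regional": crossed 100,000, must reapply

The design choice the proposal hinges on is that crossing a milestone is an event that triggers review, rather than a number a regulator notices after the harm is done.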
Lastly, they recommend that startups build a “red team”
independent of the board or investors. Raskin sees its role as naming all
the ways the tech could be used and abused, for good and for ill. He has even set up his
own “Doubt Club,” a forum where a group of entrepreneurs working on
noncompeting ideas share doubts about their product, company mission, or
metrics. The goal is to reduce ignorance and to encourage what Raskin calls
“epistemic humility”: a willingness to say those three magic words, “I
don’t know.”
Botsman breaks these ideas down further, saying that
entrepreneurs and investors need to take responsibility for asking “What happens
when…” questions:
What happens when people are left behind by my invention?
What happens when my system becomes susceptible to bias?
What happens when the interests of my business model don’t
align with the best interests of customers?
She says, “Identifying and reducing unintended consequences
calls for greater humility and acceptance of doubt; it requires us to take the
time to explore what we don’t know and actively seek alternative
possibilities.”