Originally published by IBC.
News organisations face an escalating arms race with AI companies to protect the authenticity of the content reaching audiences through their channels. AI-powered deepfake content risks polluting the entire information ecosystem, says ITN’s Tami Hoffman: “What happens when people just don’t believe what they see?”
Research reported by the Alan Turing Institute identified just 16 viral cases of AI-enabled disinformation or deepfakes during the UK general election last July, and only 11 viral cases in this year’s EU and French elections combined, volumes far lower than many people had feared.
“With the explosion of generative AI we were braced for
deepfakes around the UK election,” says Tami Hoffman, ITN’s Director of News
Distribution and Commercial Innovation. “We were almost surprised that there
wasn’t more of a problem around deepfakes.”
The few that arose tended to be audio deepfakes, perhaps because fake audio offers fewer signals on which to raise red flags than a video of a person speaking. Wes Streeting was the victim of one such fake.
Such fakes were quickly spotted. “It was quite easy to go back to
sources and ask them what they said or didn’t say,” Hoffman says. “So to some
extent journalists are already quite well prepared for dealing with deepfakes
because we’ve always had to deal with misinformation, with propaganda or people
saying untrue things. It’s in a journalist’s DNA to double check a source if
things look suspicious.
“That said, the level of sophistication is increasing
exponentially. We used to talk about a ‘news antenna’ being able to detect if
something feels a little bit off; that’s obviously far harder now when we’re in
a kind of arms race with AI companies. No sooner do we establish ‘tells’ for
spotting anomalies in a manipulated image than the AI tools get better and bad
actors will use that knowledge to produce something that looks pretty slick.”
Detecting deepfakes
ITN has looked into software tools to help detect deepfakes, but none is foolproof, which makes news teams wary of relying on the technology alone, Hoffman says.
“Most tools come up with confidence percentages rather than
‘yes’ or ‘no’,” she says. “We want our journalists to continue using
traditional journalistic practices. There is a role for technology in flagging things or, when your news antenna has been raised, in running content through that software for a second opinion. But technology is not
going to be the sole solution.”
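In practice that means the detector’s score feeds a triage step rather than delivering a verdict. The sketch below illustrates the workflow Hoffman describes; detector_confidence stands in for whatever third-party deepfake-detection tool a newsroom licenses, and the names and threshold are illustrative, not any vendor’s API.

# A minimal triage sketch: a detection tool returns a confidence score,
# not a yes/no, so the score only routes content toward human review.
from dataclasses import dataclass

@dataclass
class TriageResult:
    clip_id: str
    confidence: float  # detector's "likely synthetic" score, 0.0 to 1.0
    action: str

def triage(clip_id: str, detector_confidence: float,
           review_threshold: float = 0.3) -> TriageResult:
    # Never auto-publish or auto-reject; the tool is a second opinion
    # alongside traditional verification, not a replacement for it.
    if detector_confidence >= review_threshold:
        action = "escalate to human verification desk"
    else:
        action = "routine editorial checks still apply"
    return TriageResult(clip_id, detector_confidence, action)

print(triage("ugc-0421", detector_confidence=0.72))

The deliberately low threshold reflects the asymmetry Hoffman points to: a missed deepfake costs far more than an unnecessary manual check.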
The majority of footage ITN uses is either self-generated or acquired through affiliate network partners or agencies such as Reuters and AP. User-generated content (UGC) makes up a relatively small proportion, which is why manual checking remains routine and does not need to be scaled, although, as Hoffman points out, those manual checks are now more important.
Hoffman highlights a potentially bigger issue surrounding
the distribution of deepfakes. “We are understandably suspicious when things
don’t come through the conventional channels. That puts quite a lot of onus on
news agencies to be on top of the UGC they are syndicating.
“It’s definitely putting a lot of responsibility on news
organisations to make sure that we don’t become conduits for legitimising
deepfakes. Were we to let a deepfake go out on a reputable news channel, that
would give [the deepfake content] so much additional credibility.”
Reputational damage
Arguably the greatest threat to newsrooms is reputational. On-screen journalists and presenters have been used in deepfake videos, either to sell commercial products or to defame them, sometimes out of pure malice.
Channel 4 News presenter Cathy Newman was the subject of a deepfake porn video; ITV News presenter Mary Nightingale and Dua Lipa had footage of themselves manipulated into promoting an investment app; ITV News Political Editor Robert Peston has had several fake stories appear on Facebook. Celebrities including Chris Tarrant, Zoe Ball and Jeremy Clarkson have also appeared in fake interviews built on a BBC News template and published on Facebook.
Although these have been flagged to social media companies and eventually taken down, they can quite easily resurface.
“Having our audience-facing staff used in deepfakes is
deeply concerning to us from both a personal and reputational point of view as
an organisation. We are urging social media companies to put processes in place
to spot these and take them down much earlier.
“It wouldn’t take you very long to work out that that
presenter would never have done something like that but if you’re glancing at
it on a small phone screen, people can be fooled.”
Social media companies argue that they are platforms, not publishers, and therefore tend to abdicate responsibility for what is distributed over their networks.
“We would say that they have far more responsibility,”
Hoffman says. “I wouldn’t say that all social media companies are the same.
Some take their responsibilities more seriously than others. Having human
beings working at these companies in safety roles is paramount. [Elon Musk’s] X
has dismantled many of its safety teams, and if you can’t get hold of human
beings there’s no recourse, no process. YouTube puts the onus on creators to
self-certify the content they publish.
“The legal wheels move very, very slowly. We would much
prefer if the platforms as a first step took responsibility themselves and
realised the huge power they have in being the gateway for pumping out this
information.”
Content credentials
The BBC is leading development of a ‘content credentials’
feature, which confirms where an image or video has come from and how its
authenticity has been verified. It also uses new technology to embed this
information within the image or video itself, helping to counter disinformation
when the content is shared outside the BBC.
ITN is monitoring C2PA (the Coalition for Content Provenance and Authenticity, the open standard behind content credentials) but has yet to join; BBC News has not implemented it either.
“It’s an ongoing piece of work and it’s not perfect,”
Hoffman says. “It requires the good faith of actors right the way through the
process. It would need a lot of upfront resource in terms of hardware, as well
as additional resource to implement it throughout an entire newsroom’s
workflow. This is holding a lot of newsrooms back.”
There are questions about its efficacy too. “Someone could screen-grab an image from a video (or record a video on their own computer) and the metadata chain is broken. They could then repost that media online. That’s
a loophole that needs closing.
“We absolutely applaud efforts to try and find a universal
solution but we don’t think that it’s going to be an easy thing to actually
roll out.”
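To make the mechanism, and the loophole, concrete, here is a toy sketch of how content credentials bind provenance metadata to the exact bytes of a piece of media. It is not the C2PA implementation: real content credentials embed a cryptographically signed manifest in the file itself, whereas this stand-in uses a standard-library HMAC with a demo key purely to show the binding, and why a re-encoded or screen-grabbed copy falls outside it.

# Toy provenance binding: a manifest tied to a hash of the media bytes,
# "signed" with an HMAC as a stand-in for a publisher's real signature.
import hashlib, hmac, json

PUBLISHER_KEY = b"demo-key, not a real signing credential"

def attach_credentials(media: bytes, source: str) -> dict:
    # Bind the provenance claim to a hash of the exact media bytes.
    manifest = {"source": source, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest["signature"], expected)
            and claims["sha256"] == hashlib.sha256(media).hexdigest())

video = b"\x00\x01 raw media bytes"
manifest = attach_credentials(video, source="Example News")
print(verify(video, manifest))         # True: bytes match the signed claim
print(verify(video + b"x", manifest))  # False: any re-encoding changes the bytes
# A screen grab is worse still: the copy carries no manifest at all, so a
# verifier has nothing to check; that is the loophole Hoffman describes.

Real C2PA manifests can also record legitimate edits so that approved transformations stay verifiable, but a copy made outside the chain, as in the screen-grab case, is exactly the gap the quote above identifies.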
The existential threat is to the public’s trust in a news brand like ITN. If that trust is tainted by deepfakes, it may be impossible to recover.
Hoffman says, “Deepfakes are so dangerous because they risk
polluting the entire information ecosystem and eroding people’s notion of
trust. Then we move into a world where facts aren’t believed, nothing is to be
trusted and where objectivity can be knocked down.
“AI has made plausible deniability much easier. As Donald
Trump would say, ‘fake news’. That is probably the biggest problem that we as a
news industry are going to have to deal with. What happens when people just
don’t believe what they see?”