By 2023, 20% of all account takeover attacks will make use of deepfake technology, consultancy Gartner predicts in a new report. It’s time organizations recognized this threat and raised employee awareness because synthetic media is here to stay and will certainly become more realistic and widespread.
“While deepfakes may have started out as a harmless form of entertainment, cybercriminals are using this technology to carry out phishing attacks, identity theft, financial fraud, information manipulation, and political unrest,” warns Stu Sjouwerman, founder and CEO of security awareness trainer KnowBe4.
According to the Security Forum, criminals can easily manipulate videos, swap faces, change expressions or synthesize speech to defraud and misinform individuals and companies.
“What’s more, people are being bombarded with information and it’s becoming increasingly difficult to distinguish between what’s real and what’s fake,” it warns.
All the elements necessary for the widespread and malicious use of deepfake technology are readily available in underground markets and forums — the source code is public.
“Advanced editing technology, once the exclusive domain of the movie industry, is now available to the average internet Joe,” says the Security Forum. “Anyone can download a mobile phone app, pose as a celebrity, de-age themselves, or add realistic visual effects to spruce up their online avatars and virtual identities.”
Sjouwerman reports that in online forums, criminal organizations routinely discuss how they can use deepfakes to increase the effectiveness of their malicious social engineering campaigns.
No one is immune. Even Elon Musk fell prey to a deepfake video of himself promoting a crypto scam; the clip went viral on social media in an attempt to manipulate the stock market.
In 2020, fraudsters used AI voice-cloning technology to scam a bank manager into initiating wire transfers worth $35 million.

Deepfakes can also serve as a strategic tool for spreading disinformation, manipulating public opinion, stirring civil unrest and deepening political polarization. In one recent example, a deepfake video of Ukrainian president Volodymyr Zelensky urging Ukrainians to lay down their arms was broadcast on Ukrainian TV.

Deepfake evidence can even be planted in a court of law: in a custody battle in the UK, doctored audio files and footage were submitted to the court as evidence.
So how can organizations protect themselves against such attacks? Sjouwerman lays out some advice: the key to mitigating deepfake risk, he says, is to nurture and sharpen employees' cybersecurity instincts and to strengthen the organization's overall cybersecurity culture.
Perhaps the best advice, then, is to run security awareness training sessions so that employees understand their responsibility and accountability for cybersecurity.
Employees can be trained to watch for visual cues such as distortions and inconsistencies in images and video, strange head or torso movements, and lip-sync issues between mouth movements and the accompanying audio.
Another simple trick can help during video conferencing: ask the participant to wave a hand in front of their face or to turn and show their side profile to the camera. If the feed is a deepfake, this will expose quality issues in the superimposition.
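Checks like these can also be partly automated. The sketch below is purely illustrative (the function names, thresholds, and scores are assumptions, not anything described in the article): it flags frames where a face detector's confidence suddenly collapses, the kind of quality drop that the wave-and-turn trick tends to provoke in a superimposed face.

```python
# Illustrative heuristic only: flag footage as suspicious when per-frame
# face-detector confidence fluctuates sharply, since deepfake superimposition
# often degrades during fast motion (hand waves, profile turns). The scores
# would come from any face-detection model; only the flagging logic is shown.

def flag_suspicious_frames(confidences, drop_threshold=0.3):
    """Return indices of frames whose detector confidence drops sharply
    relative to the previous frame."""
    flagged = []
    for i in range(1, len(confidences)):
        if confidences[i - 1] - confidences[i] > drop_threshold:
            flagged.append(i)
    return flagged

def looks_like_deepfake(confidences, max_flagged_ratio=0.1):
    """Crude verdict: many sudden confidence drops is a warning sign."""
    flagged = flag_suspicious_frames(confidences)
    return len(flagged) / max(len(confidences), 1) > max_flagged_ratio

# Stable footage vs. footage with repeated quality collapses:
stable = [0.95, 0.94, 0.96, 0.95, 0.93, 0.95]
shaky = [0.95, 0.50, 0.92, 0.45, 0.90, 0.40]
print(looks_like_deepfake(stable))  # False
print(looks_like_deepfake(shaky))   # True
```

A heuristic this crude would never be deployed on its own, but it captures the intuition behind the training advice: deepfakes break down under motion that real faces handle effortlessly.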
Organizations can also deploy technologies such as phishing-resistant multi-factor authentication (MFA) and a zero-trust architecture to reduce the risk of identity fraud.
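What makes MFA "phishing-resistant" is origin binding: the authenticator signs the site's origin into each assertion, so credentials captured on a look-alike domain fail verification. The toy sketch below (not a real WebAuthn implementation; the HMAC key and function names are stand-ins invented for illustration) shows that idea in miniature.

```python
# Toy illustration of origin-bound MFA. Real phishing-resistant MFA (WebAuthn/
# FIDO2) uses public-key signatures over origin-bound client data; a keyed
# HMAC stands in for the signature here to keep the example self-contained.

import hashlib
import hmac

SECRET = b"device-private-key-stand-in"  # stand-in for the authenticator's key

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # The origin seen by the client is mixed into what gets signed.
    return hmac.new(SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, assertion: bytes) -> bool:
    # The relying party recomputes over the origin it expects.
    expected = sign_assertion(challenge, expected_origin)
    return hmac.compare_digest(expected, assertion)

challenge = b"nonce-123"
genuine = sign_assertion(challenge, "https://bank.example")
phished = sign_assertion(challenge, "https://bank-example.evil")  # fake site

print(verify(challenge, "https://bank.example", genuine))  # True
print(verify(challenge, "https://bank.example", phished))  # False
```

The point of the sketch: even a deepfake-assisted phishing page that captures a victim's assertion captures one bound to the wrong origin, so it cannot be replayed against the real service.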