Kaarel Kotkas, CEO of Veriff, explains the growing threat of deepfakes and urges organizations to fortify their cybersecurity measures and explore reliable AI use cases for combating fraud.
If 2023 had a theme, it would undoubtedly be artificial intelligence (AI). It’s no surprise that more and more businesses are seeking ways to integrate AI into their daily workflows. As they do, leaders must identify not only how AI can help them but also how AI applications can expose the company to fraud. One such threat is deepfakes. As a society, we’ve grown so accustomed to online scams that we assume everything is fake or untrustworthy unless verified by a reliable source. As organizations adopt AI, leaders must evaluate how the technology can strengthen their cybersecurity measures, such as identity verification, to counter new attack techniques and provide a more reliable use case for AI in the workplace.
The Rapid Expansion of Deepfakes
Deepfakes are not necessarily new technology, but the quality and realistic nature of these fake and/or altered images have improved dramatically in recent years, coinciding with and fueled by the growing sophistication of AI.
A report found that 66% of security professionals had seen deepfakes used as part of a cyberattack, a 13% year-over-year increase. As deepfake capabilities become easier to access and attacks become cheaper to mount, this number is only expected to grow in the coming year. Fraud is getting more sophisticated, and AI-fueled tools are putting it within reach of even less sophisticated bad actors. AI-generated deepfakes make it easy for anyone to create impersonations or synthetic identities, whether of celebrities or of someone they know.
There are several common deepfake techniques we see used today:
- Face swaps: Superimposing one person’s face onto a photo or video of another person, making it appear that the first person is in the image.
- Lip-syncs: Modified videos in which the subject’s mouth movements are altered to match the words in an alternate audio recording.
- Puppets: Digital puppetry animates a target person to mimic the facial expressions, eye movements, and head movements of a performer who does not appear in the video.
- GANs and autoencoders: Generative Adversarial Networks (GANs) and autoencoders are deep learning models that can generate deepfake videos from a single image or fabricate imagery entirely. Some tools can create images of realistic-looking people who do not exist, which is often used to create fraudulent IDs.
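To make the autoencoder idea concrete, here is a toy sketch of the encode-then-decode structure that face-swap tools train on images of faces, shrunk to a tiny linear model on random data. The dimensions, data, and learning rate are all illustrative, not taken from any real deepfake tool:

```python
import numpy as np

# Toy linear autoencoder: compress inputs to a small latent code,
# then reconstruct them. Face-swap tools train a similar (much larger,
# nonlinear) structure so the decoder can re-render a different face.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # stand-in for flattened face images

d_in, d_lat = 16, 4                     # 16-dim input squeezed into 4 dims
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))

lr = 0.05
losses = []
for _ in range(500):
    Z = X @ W_enc                       # encode to latent space
    X_hat = Z @ W_dec                   # decode back to input space
    err = X_hat - X                     # reconstruction error
    # Gradient descent on mean-squared reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)
    losses.append((err ** 2).mean())

print(losses[0], losses[-1])            # reconstruction error drops as it trains
```

The point of the sketch is only the architecture: once a shared encoder is trained with two decoders (one per face), encoding person A and decoding with person B’s decoder produces the swap.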
Deepfakes and the Workplace
Today, deepfake technology is incredibly convincing, which means businesses and their employees need to be educated on how to recognize a deepfake and how to defend against it by heightening existing security. The hybrid work arrangements many organizations now prefer give bad actors even more opportunities to infiltrate companies. For businesses that operate online and need to confirm the identity of their customers, deepfakes significantly increase the threat of fraud, money laundering, and account manipulation.
Deepfakes have also been used to impersonate senior leadership and C-suite members to take advantage of employees. For example, in 2019, a voice cloning scam of a CEO resulted in a theft of $243,000 when the impersonator requested an urgent fund transfer, making an employee an unwitting party to a crime.
In another case, in Hong Kong, scammers used a deepfake voice to impersonate a bank director. The branch manager believed he was speaking with his actual boss and transferred approximately $35 million to the bad actors.
Deepfakes are especially effective when used against an enterprise with disjointed, inconsistent identity management processes and poor cybersecurity. They are typically used in two ways:
- Fraudulent accounts: Synthetic identities are increasingly used to pass biometric checks during the onboarding process’s Know Your Customer (KYC) stage to open fraudulent accounts.
- Account takeover (ATO): Sophisticated deepfakes are combined with other hacking techniques to gain entry into existing accounts and commit theft from inside a trusted, established account.
In one case, an employee pranked his boss by creating a deepfake Microsoft Teams video message that put the boss in a face-to-face video call with himself. While no fraud occurred in this instance, it demonstrates the power and accessibility of the technology.
Deepfake fraud has already cost U.S. businesses billions of dollars, and leaders need to be even more vigilant in educating and equipping their organizations to combat these scams.
Protecting Data in the Age of AI
As AI technology continues to evolve, so will the AI threat landscape. Teams need technology that can be adjusted quickly and easily and that can incorporate new techniques as they emerge. An effective fraud defense must be strong, dynamic, and multi-faceted.
This starts with awareness that the threat exists. Simply by understanding the potential damage deepfakes can do, organizations can educate their employees and partners on what to look out for and how to protect themselves.
While there is no one-size-fits-all solution to combating identity fraud, business leaders must improve their approach and technology to protect their employees, customers, and proprietary information. One of the best ways this can be done is to implement an AI solution that complements and augments existing security measures.
For cybersecurity and deepfakes specifically, organizations can enable AI and identity verification (IDV) to work together. Not only will this provide a more reliable use case for AI in an organization looking to kick off this new technology, but it can also encourage better cybersecurity practices while advancing the capabilities of existing IDV solutions.
Addressing deepfakes needs to be a strategic discussion within an organization, with clear goals and guidelines in place. Six steps every organization should consider when combating deepfakes include:
- Conduct robust and comprehensive checks on asserted identity documents
- Conduct a biometric analysis of supplied photographic and video images
- Examine key device attributes
- Deploy counter-AI to identify manipulation of incoming images
- Treat the absence of data as a risk factor (in certain circumstances)
- Actively look for patterns across multiple sessions and customers
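The layered checklist above can be sketched as a simple risk-scoring pipeline: each check contributes a weighted signal, and the combined score drives the decision. All of the signal names, weights, and thresholds below are hypothetical, chosen only to illustrate the pattern; a production IDV system would tune these against real fraud data:

```python
# Hypothetical weights for the risk signals from the six steps above.
WEIGHTS = {
    "document_check_failed": 0.35,   # robust checks on identity documents
    "biometric_mismatch": 0.30,      # biometric analysis of photos/video
    "device_anomaly": 0.10,          # key device attributes
    "image_manipulation": 0.20,      # counter-AI detection of altered images
    "missing_data": 0.05,            # absence of data treated as a risk factor
}

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a single 0..1 score."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def decide(signals: dict, review_at=0.2, reject_at=0.5) -> str:
    """Map the combined score to an onboarding decision."""
    score = risk_score(signals)
    if score >= reject_at:
        return "reject"
    if score >= review_at:
        return "manual_review"
    return "approve"

# A biometric mismatch plus a device anomaly lands in the review band.
print(decide({"biometric_mismatch": True, "device_anomaly": True}))  # manual_review
```

The sixth step, pattern-matching across sessions and customers, would sit on top of this: the same scores aggregated over time to surface repeated fraud attempts.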
This list should be revisited and updated regularly to ensure technology and IDV solutions remain current and reliable. Fraudsters are constantly innovating, so businesses must be ready to adapt as new deepfake techniques emerge, staying several steps ahead and beating the bad actors at their own game.
What Leaders Can Do Next
Deepfakes will continue to become more sophisticated, so the ways we protect against them must, too. When it comes to improving cybersecurity while countering new AI-driven attacks, companies should be bold and invest in technology, even if it adds a bit of friction. By educating employees, partners, board members, and others, the most crucial members of an organization will be equipped to understand deepfake scams and work together to combat them.
Robust, trustworthy fraud protection is not a one-and-done solution; it’s a constant battle. Fraud teams must regularly assess threats to recognize and stay ahead of fraud patterns. Business leaders who continuously review data from their deployed solutions to identify new fraud patterns and evolve their strategies will be the most prepared, and the most effective leaders will know how to use AI to their advantage against deepfakes and other AI-enabled bad actors.