How to Prepare Your Workforce for the Deepfake Era
July 22, 2024 | securityboulevard.com

As the development of AI accelerates, the cyberthreats posed by this technology are becoming more alarming. From large language models (LLMs) that can generate highly targeted, convincing phishing messages at scale to AI-powered data collection and surveillance, the list of malicious AI applications continues to grow. Of all these AI-powered weapons, the one your employees may be least equipped to resist is deepfake technology.

Deepfakes are AI-produced synthetic media capable of imitating physical appearances, voices and other human characteristics to convince viewers that they’re watching or listening to real people. Deepfakes are already being used by cybercriminals to spread misinformation, steal money and sensitive data, and even undermine elections — and the machine learning technology that deepfakes rely on is becoming more powerful. Just as other AI tools have eliminated barriers to entry for cybercriminals around the world, deepfakes are in the process of dramatically expanding the cyberthreat landscape.

The emergence of deepfakes requires companies to revamp their approach to cybersecurity awareness training. Employees have to be prepared for cyberattacks that are far more sophisticated and convincing than ever before, and that can be carried out across multiple attack vectors to increase the likelihood of infiltration. Security teams need to understand how deepfakes are being used in the real world, and they must be capable of conveying this knowledge to employees in an engaging and effective way.

The Democratization of Deepfake Technology

AI has democratized access to the most advanced tools for launching cyberattacks. While many cybercriminals were once limited by technical or language skills, AI resources like LLMs and deepfakes have changed that. Just as a young hacker can use AI to craft a polished phishing message in a language other than their own, cybercriminals now have access to deepfake technology that would have been available only to movie studios just a few years ago.

It’s no surprise that the use of deepfakes is exploding. From 2022 to 2023, the number of deepfakes detected globally increased tenfold. A 2022 survey found that two-thirds of cybersecurity professionals had experienced a security incident involving a deepfake in the preceding year. At least 72% of consumers say they worry about being fooled by a deepfake, while a majority of Americans aren’t confident in their ability to identify one. These are compelling reasons for cybercriminals to rely on deepfakes even more in the coming years, as the technology underpinning these attacks improves and becomes cheaper.

CISOs and other security leaders have to ensure that deepfake awareness is a key component of their security training programs. This means thoroughly explaining how deepfakes are used, the forms of psychological manipulation they leverage and the red flags to watch for, such as demands for sensitive information or a manufactured sense of urgency.

Deepfake Cyberattacks Are Already Here

Deepfakes are especially well-suited to certain categories of cyberattacks. According to IBM, phishing is the most common initial attack vector, and deepfakes can turn a generic phishing campaign into a far more effective spear phishing attack. For example, deepfakes can facilitate multi-level phishing attacks by enabling cybercriminals to follow up with realistic verification phone calls and other communications. Deepfakes can also be used for impersonation fraud, such as scam calls that appear to come from the IRS or the Social Security Administration.

The FTC reports that fraud incidents increased by 14% from 2022 to 2023, and the agency warns that deepfake technology “threatens to turbocharge this scourge.” One especially pervasive form of fraud involves impersonating IRS agents or other law enforcement officials to coerce victims into disclosing personal information or sending money. Earlier this year, an employee at the architecture and design firm Arup transferred more than $25 million to cybercriminals after being fooled by deepfakes of the company’s CFO and other staff members. Security leaders need to understand the psychological dimensions of deepfake cyberattacks like these. For example, cybercriminals know they can exploit victims’ fear and obedience by presenting themselves as authority figures who have the power to punish them.

There are other ways bad actors are causing chaos with deepfakes: They’re undermining the integrity of elections with deepfaked robocalls, producing industrial-scale misinformation with deepfake “news” reporting, and stealing identities. Cybercriminals are constantly developing new ways to produce synthetic videos, images, and audio with deepfakes, and this process is gaining momentum as AI continues to improve.

How Your Security Team Should Respond

The rise of deepfakes and other AI-powered cyberattacks should lead to a fundamental transformation of companies’ approach to cybersecurity. CISOs and other security leaders can’t afford to waste any time: Deepfakes are already being deployed, and employees need to know how to spot and resist them.

While deepfake detection technology is improving, employee training remains indispensable, and implementing that training correctly is crucial. First, just as companies are adopting zero-trust security infrastructure, employees should base their communications on a similar principle: “Verify before you trust” needs to be standard operating procedure, especially for high-stakes conversations and anything involving financial operations. Employees must also be made aware that standard verification methods, such as confirmation phone calls, can themselves be compromised; they may end up speaking with an AI-generated imposter rather than the legitimate party. Verification through multiple encrypted channels, security questions and other forms of personal identification, and a greater reliance on in-person conversations are all strategies to consider.
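To make “verify before you trust” concrete, here is a minimal Python sketch of an out-of-band check for high-stakes requests such as wire transfers. It is illustrative only: the pre-registered contact directory and the confirm_via_phone callback are assumptions, not part of any specific product or procedure described above.

```python
# A minimal sketch of an out-of-band "verify before you trust" check.
# All names here are hypothetical and for illustration only.
import hmac
import secrets
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    # Callback number taken from the corporate directory --
    # never from the suspicious request itself.
    directory_phone: str


def issue_challenge() -> str:
    """Generate a one-time code to be read back over a second channel."""
    return secrets.token_hex(4)  # e.g., '9f3a1c2e'


def verify_response(expected: str, received: str) -> bool:
    """Constant-time comparison to confirm the code matches."""
    return hmac.compare_digest(expected, received)


def approve_request(contact: Contact, confirm_via_phone) -> bool:
    """Approve only if the requester repeats the challenge code over an
    independently sourced channel (a call placed to the directory number)."""
    code = issue_challenge()
    received = confirm_via_phone(contact.directory_phone, code)
    return verify_response(code, received)


# Example usage (confirm_via_phone would be a human- or system-backed step):
# approved = approve_request(Contact("CFO", "+1-555-0100"), my_phone_confirm)
```

The key design point is that the callback channel is chosen independently of the incoming request, so a deepfaked caller cannot simply supply their own “verification” number.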

Second, employees must understand how deepfakes exploit psychological vulnerabilities like fear, obedience, greed, opportunity, sociality, urgency and curiosity. This can be done through personalized awareness training that builds a unique behavioral profile for each employee and accounts for different levels of knowledge and learning styles. Personalized training also keeps employees engaged by ensuring that they’re only learning what’s most relevant to them, and it ensures accountability by tracking individual progress, as in the sketch below. Deepfakes allow hackers to personalize attacks, which is why one-size-fits-all training is no longer sufficient.
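As a rough illustration of how per-employee tracking could be modeled, here is a small Python sketch. The TrainingProfile structure, its field names and the scoring scheme are all assumptions for illustration, not a description of any real training platform.

```python
# A minimal sketch of a per-employee training profile; names are illustrative.
from dataclasses import dataclass, field


@dataclass
class TrainingProfile:
    employee_id: str
    role: str
    # Topics mapped to a 0.0-1.0 proficiency score from assessments.
    proficiency: dict[str, float] = field(default_factory=dict)

    def next_topics(self, threshold: float = 0.8, limit: int = 3) -> list[str]:
        """Surface only the weakest topics so training stays relevant."""
        weakest_first = sorted(self.proficiency.items(), key=lambda kv: kv[1])
        return [topic for topic, score in weakest_first if score < threshold][:limit]


# Usage: an employee weak on voice-deepfake red flags gets that module first.
profile = TrainingProfile("e123", "finance", {
    "voice-deepfake red flags": 0.4,
    "payment verification procedure": 0.9,
    "urgency manipulation": 0.6,
})
print(profile.next_topics())  # ['voice-deepfake red flags', 'urgency manipulation']
```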

Finally, security leaders must provide concrete examples of how deepfakes are used to deceive and manipulate victims in the real world: misinformation campaigns, tax scams, fake phone calls and multi-level phishing attacks. When employees can see the tactics behind deepfake attacks, they will be better able to identify and prevent them. Cybersecurity awareness training also empowers employees to take more proactive security measures. AI-powered cyberattacks like deepfakes are understandably intimidating, and the knowledge that employees are capable of thwarting these attacks is critical to developing a culture of cybersecurity.

Source: https://securityboulevard.com/2024/07/how-to-prepare-your-workforce-for-the-deepfake-era/