‘Tis the Season for Artificial Intelligence-Generated Fraud Messages
2024-12-10 | Source: www.trustwave.com

3 Minute Read

The FBI issued an advisory on December 3rd warning the public of how threat actors use generative AI to more quickly and efficiently create messaging to defraud their victims, echoing earlier warnings issued by Trustwave SpiderLabs.

The FBI noted that publicly available tools assist criminals with content creation and can correct human errors that might otherwise serve as warning signs of fraud. This effectively removes one of the easiest ways to spot a phishing email: poor sentence structure, grammar, and spelling.

Threat actors can use AI to create several types of deceptive messages, including text, video, and audio, all of which fall under the umbrella of deepfakes.

Trustwave SpiderLabs Senior Consultant Jose Luis Riveros, who researched and wrote about the creation process for these items, noted that threat actors can find a wealth of freeware for producing video deepfakes, or turn to more advanced software, and that the resulting fakes can be deployed in a variety of attacks.

Riveros’ conclusions were supported by Ed Williams, VP, SpiderLabs at Trustwave, in his recent 2025 Predictions blog. Williams highlighted how AI-enhanced phishing and social engineering capabilities will allow cybercriminals to craft highly convincing phishing emails, social media posts, and even deepfake content, making it increasingly difficult to discern between legitimate and malicious communications. With AI-driven social engineering, the stakes for user awareness training will be higher than ever.

The FBI agreed, saying “criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing, and financial fraud schemes such as romance, investment, and other confidence schemes or to overcome common indicators of fraud schemes.”

To make their fake personas appear as "real" as possible, criminals use generative AI to create large numbers of fictitious social media profiles designed to trick victims into sending money.

Criminals can leverage AI to expand their reach by quickly generating messages that resonate with a larger audience. This allows them to create believable content more efficiently. Additionally, AI helps overcome language barriers that may hinder their ability to target individuals in various regions around the globe. They utilize AI to produce content for fraudulent websites, particularly for schemes involving cryptocurrency investments and other financial scams.

AI-Generated Images

Criminals use AI-generated images to create believable social media profile photos, identification documents, and other images supporting their fraud schemes. This tactic is particularly dangerous because one common way to verify that a person is real is to check their social media profiles, which often include multiple images.

Criminals thus create realistic images for fictitious social media profiles used in social engineering, spear phishing, romance schemes, confidence fraud, and investment fraud. They also use generative AI to produce photos shared in private communications to convince victims they are speaking with a real person, a tactic that is particularly effective in romance schemes.

Fraudsters can even take this a step further and generate fake identification documents, such as driver's licenses or credentials (law enforcement, government, or banking) for identity fraud and impersonation schemes.

Malicious actors also play to people's emotions by creating false images of natural disasters and conflicts to elicit donations to fraudulent charities.

AI-Generated Audio

Criminals can use AI-generated audio to impersonate well-known public figures or personal relations to elicit payments. They can generate short audio clips of a loved one's voice to impersonate a close relative in a supposed crisis, asking for immediate financial assistance or demanding a ransom. Criminals can also gain access to bank accounts by using AI-generated audio clips to impersonate account holders.

AI-Generated Videos

The last category the FBI covered was video. Criminals use AI-generated videos to create believable depictions of public figures to bolster their fraud schemes.

The FBI advisory noted that criminals generate videos for real-time video chats with alleged company executives, law enforcement, or other authority figures. These videos can also be used as part of a larger attack to "prove" to a victim the online contact is a "real person."

Tips to Protect Yourself

  • Create a secret word or phrase with your family to verify their identity.
  • Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic teeth or eyes, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, lag time, voice matching, and unrealistic movements.
  • Listen closely to the tone and word choice to distinguish between a legitimate phone call from a loved one and an AI-generated vocal clone.
  • If possible, limit online content of your image or voice, make social media accounts private, and limit followers to people you know to minimize fraudsters' capabilities to use generative AI software to create fraudulent identities for social engineering.
  • Verify the identity of the person calling you by hanging up the phone, looking up the official contact information for the bank or organization purporting to call you, and calling that number directly.
  • Never share sensitive information with people you have met only online or over the phone.
  • Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone.


Article source: https://www.trustwave.com/en-us/resources/blogs/trustwave-blog/tis-the-season-for-artificial-intelligence-generated-fraud-messages/