The Trump Administration is scrambling to alert staffers in embassies and consulates around the world about the threat of imposters using AI to pose as government officials in the wake of someone impersonating Secretary of State Marco Rubio and contacting foreign and U.S. politicians.
The incident last month highlights ongoing concerns about the use of deepfakes – AI technology used to create convincing images, video, or audio of a person – in scams for everything from stealing money and data to spreading disinformation. Its use in politics has grown in recent years – though there’s debate about how effective it actually is in sowing discord through disinformation – and it took center stage during the presidential campaigns when a telephone message to New Hampshire voters, made to sound like Joe Biden, discouraged them from voting in the primary.
More recently, deepfakes have targeted high-profile members of the Trump Administration. In May, the FBI launched an investigation into someone impersonating Susie Wiles, Trump’s chief of staff, and contacting governors, members of Congress, and business executives via phone calls and text messages.
Around the same time, the FBI issued an alert about an “ongoing malicious text and voice messaging campaign” that started in April and involved people using AI to impersonate “senior U.S. officials” to contact others, including many former senior federal or state government officials.
“The malicious actors have sent text messages and AI-generated voice messages … that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts,” the FBI wrote in the alert. “One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform.”
The agency added that access to the personal or official accounts of U.S. officials could be used to target other government officials and their associates and contacts, using the trusted contact information obtained.
Last month, Rubio himself was targeted. The State Department reportedly sent a cable July 3 warning embassies and consulates about attempts to use AI tools to impersonate Rubio and possibly other federal officials. The Washington Post and the Associated Press, which have seen the July 3 cable, reported that the imposter tried to contact at least three foreign ministers, a U.S. senator, and a governor through messages sent by text, voice mail, and the encrypted messaging app Signal.
During a press briefing this week, State Department spokesperson Tammy Bruce gave few details about the incident involving Rubio, saying only that the department “is aware of this incident and is currently monitoring and addressing the matter. The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department’s cyber security posture to prevent future incidents. For security reasons, we do not have any further details to provide at this time.”
Adam Marrè, CISO for cybersecurity firm Arctic Wolf and a former FBI agent, said the incident involving Rubio – who as a U.S. senator from Florida in 2018 warned that deepfake technology was a national security threat – is “highly concerning” and the latest sign that we are in “a world where we don’t know what the truth is when we scroll online.”
That said, it shouldn’t be unexpected.
“The real surprise here is that we seem to be surprised by this,” Marrè said. “We’ve seen numerous examples in the US and globally of AI being used to impersonate elected officials, business leaders and celebrities. This is proof [that], once again, we need to always be on guard.”
The situation will only get more challenging given a heated global political environment populated by state actors and hacktivists with agendas, and the rapid proliferation of AI tools that make disinformation easy to generate and disseminate, he said, adding that governments are “alarmingly unprepared” to address the challenge.
“It isn’t only consumers who need to be concerned, though,” Marrè said. “As outlined in this news, officials around the world received this deepfake, potentially altering their view of Rubio and of the United States. Without the ability to quickly determine if content is legitimate, we face a world in which massive political decisions can be made based on falsified videos.”
There have been federal and state government efforts to address the problem of deepfakes, including warnings issued by agencies like the Department of Homeland Security and the National Security Agency, a National Institute of Standards and Technology (NIST) program on generative AI that included plans for building hardware and software to detect AI-generated deepfakes, and statements by members of Congress urging action.
More needs to be done, according to Arctic Wolf’s Marrè. More processes are needed to verify all communications – not only to detect AI deepfakes but also to reliably determine whether a message actually came from the person it claims to be from. In addition, there has to be more accountability for platforms that host and elevate content, he said, adding that “their inaction makes them complicit in spreading falsehoods that can disrupt elections, undermine trust, and destabilize societies.”
“The common instinct to rely on one’s own judgment to detect deepfakes is no longer a viable defense,” Marrè said. “If you think you can ‘just tell,’ you’ve already lost. … It’s time to treat AI-driven disinformation as a clear societal risk requiring stronger defenses, smarter policy, and real consequences for those enabling its spread.”