GhostGPT: A Malicious AI Chatbot for Hackers
2025-01-24 13:36:21 | Source: securityboulevard.com

Threat actors, who increasingly are using AI in their attacks, now have another tool at their disposal: a ChatGPT-like chatbot that can help them with everything from creating malware to running business email compromise (BEC) scams.

Dubbed “GhostGPT,” the uncensored chatbot is available on underground forums and is devoid of any guidelines for ethical use, essentially giving cybercriminals all the capabilities of a commercial AI chatbot without any guardrails, according to researchers with Abnormal Security.

“By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems,” the researchers wrote in a report this week.


GhostGPT also offers other features that will attract hackers: fast processing that lets them create malicious content and steal information more quickly, a no-logs policy that ensures activity on the chatbot is not recorded, and easy access.

The chatbot is sold through the encrypted messaging service Telegram, enabling bad actors to start using it immediately without having to create a prompt to jailbreak a legitimate chatbot or download a large language model (LLM) themselves.

The researchers believe GhostGPT likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open source LLM. AI jailbreaking occurs when an attacker crafts prompts that bypass a model’s safety guardrails, eliciting outputs that a traditional chatbot would block or flag.
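The report does not describe GhostGPT’s internals, but the “wrapper” architecture the researchers hypothesize is simple: a thin front end that relays each user message to a backend model and returns its reply, adding no intelligence of its own. A minimal, hypothetical sketch of that pattern follows; the backend here is a stub standing in for a call to a hosted LLM API, and all names are invented for illustration.

```python
def backend_model(prompt: str) -> str:
    # Stand-in for the real backend call (e.g., an HTTP POST to a
    # hosted LLM API). A real wrapper would forward the prompt here.
    return f"[model response to: {prompt!r}]"

class ChatWrapper:
    """Relays messages to a backend model and keeps a per-session history.

    This is the whole trick of a chatbot wrapper: the front end (a
    Telegram bot, say) owns only the relay loop; all language capability
    lives in the backend model it forwards to.
    """

    def __init__(self, model=backend_model):
        self.model = model
        self.history = []  # list of (role, text) tuples

    def send(self, message: str) -> str:
        self.history.append(("user", message))
        reply = self.model(message)
        self.history.append(("assistant", reply))
        return reply

bot = ChatWrapper()
print(bot.send("hello"))
```

Because the wrapper is this thin, swapping the stub for any uncensored or jailbroken backend changes the service’s behavior without changing the front end at all, which is why such offerings can appear (and reappear) quickly.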

Another Malicious AI Chatbot

GhostGPT is not the first of its kind. Uncensored AI chatbots began appearing soon after OpenAI introduced ChatGPT to the world in late November 2022. WormGPT, which appeared in early 2023, was designed specifically for nefarious activities, and was soon followed by others, such as WolfGPT and EscapeGPT.

GhostGPT builds on that trend. The researchers saw an advertisement for it on a dark web forum that outlined its various features, including its speed, no-logs policy, and uncensored AI capabilities.

“GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development,” they wrote. “It can also be used to write convincing emails for business email compromise (BEC) scams, making it a convenient tool for committing cybercrime.”

Not for Cybersecurity

The promotional materials also mention that GhostGPT can be used for cybersecurity, though the researchers see that claim as a “weak attempt to dodge legal accountability – nothing new in the cybercrime world.”

“On cybercrime forums and networks, it’s not uncommon to come across malware creators and sellers who attempt to evade responsibility by including disclaimers stating that their tools are intended for ‘educational purposes and penetration testing only,’” Daniel Kelley, a former black hat computer hacker who collaborates with Abnormal researchers, wrote in a blog post last year. “However, a closer examination of these claims often reveals a different reality.”

The Abnormal Security researchers tested GhostGPT, instructing the chatbot to create a DocuSign phishing email, a common scam run by bad actors. They wrote that “the chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.”

As with previous uncensored AI chatbots, the impact of GhostGPT will ripple through the criminal underground and be felt by victims of cyberattacks. Mirroring the broader cybercrime-as-a-service trend, such malicious chatbots make it easier for new and lesser-skilled threat actors to launch sophisticated attacks.

In addition, hackers can run campaigns with more speed and efficiency and, because GhostGPT is available as a Telegram bot, “there is no need to jailbreak ChatGPT or set up an open-source model,” the researchers wrote. “Users can pay a fee, gain immediate access, and focus directly on executing their attacks.”

Bad Actors and AI

GhostGPT also appears popular among bad actors: its posts have drawn thousands of views on online forums, reflecting cybercriminals’ growing appetite for AI tools. During the RSA Conference last year, the FBI issued a warning about threat actors using AI tools “to conduct sophisticated phishing/social engineering attacks and voice/video cloning scams.”

“As technology continues to evolve, so do cybercriminals’ tactics,” FBI Special Agent in Charge Robert Tripp said in a statement at the time. “Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike.”

That said, the introduction of generative AI is accelerating the cat-and-mouse game that is cybersecurity, with security pros answering advancements by attackers with AI tools of their own.

According to the Abnormal Security researchers, that is how it has to be done: fighting malicious AI with defensive AI.

“Attackers now use tools like GhostGPT to create malicious emails that appear completely legitimate,” they wrote. “Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them.”
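The report does not detail how such AI-powered filtering works. As an illustration only, the toy scorer below flags a few common phishing signals (urgency wording, credential requests) and is a deliberately trivial stand-in for the ML and LLM classifiers real products use; the signal lists and threshold are invented for this example.

```python
# Toy phishing-signal scorer -- a simplistic stand-in for the ML-based
# email classifiers commercial security products use. The phrase lists,
# weights, and threshold below are invented for illustration only.
URGENCY = ("urgent", "immediately", "within 24 hours", "account suspended")
CREDENTIAL_ASKS = ("verify your password", "confirm your login",
                   "sign in to review")

def phishing_score(subject: str, body: str) -> int:
    """Return a crude risk score: higher means more phishing signals."""
    text = f"{subject} {body}".lower()
    score = sum(2 for phrase in URGENCY if phrase in text)
    score += sum(3 for phrase in CREDENTIAL_ASKS if phrase in text)
    return score

def is_suspicious(subject: str, body: str, threshold: int = 5) -> bool:
    return phishing_score(subject, body) >= threshold

lure = "URGENT: your account suspended. Verify your password immediately."
print(is_suspicious("Action required", lure))   # flags the lure
print(is_suspicious("Lunch", "Want to grab lunch tomorrow?"))
```

Static keyword rules like these are exactly what AI-generated lures evade, since a model can rephrase the same request endlessly, which is the researchers’ argument for meeting model-written email with model-based detection.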



Source: https://securityboulevard.com/2025/01/ghostgpt-a-malicious-ai-chatbot-for-hackers/