The security threat driven by advances in AI is hardly news: whether the targets are billions of Gmail users, bank customers, or individuals hit by smartphone calls and messages (a threat the FBI has been concerned enough about to issue a warning), AI is a real and present danger when employed by bad-faith actors. Unfortunately, according to a new report, one AI-powered chatbot is being marketed to and used by exactly such cybercriminals and hackers. Here’s what you need to know about GhostGPT.
GhostGPT Is A New Uncensored AI Chatbot Catering To Cybercriminals, Researchers Said
Cybercriminals are using GhostGPT, a newly discovered and completely uncensored AI chatbot, for malware creation, phishing scams and more, according to a Jan. 23 report published by researchers from Abnormal Security.
Unlike traditional AI models that are constrained by guidelines to ensure safe and responsible interactions, uncensored AI chatbots operate without such guardrails, raising serious concerns about their potential misuse. Most recently, Abnormal Security researchers uncovered GhostGPT, a new uncensored chatbot that further pushes the boundaries of ethical AI use.
GhostGPT isn’t like the generative AI chatbots you are used to, the ones with ethical guardrails in place to prevent misuse and abuse. You know, the type of thing that stops you from asking them to create malware or compose targeted phishing scams for you. GhostGPT has no such guardrails; it is, Abnormal said, “a chatbot specifically designed to cater to cyber criminals.” So, while it is most likely based on a jailbroken version of an open-source large language model, it throws a wrapper around that core and effectively removes any and all ethical safeguards. “By eliminating the ethical and safety restrictions typically built into AI models,” the Abnormal researchers warned, “GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems.”
Buy Now, Gain Immediate Access To AI-Driven Cybercrime
The Abnormal report revealed that GhostGPT is readily available to cybercriminals through the Telegram messaging service, where it can be accessed as a Telegram bot once a fee is paid. “It lowers the barrier to entry for new cybercriminals, allowing them to buy access via Telegram without needing specialized skills or extensive training,” the researchers explained, which “makes it simpler for less skilled attackers to engage in cybercrime.”
What kind of cybercrime? The report explained how the marketing materials for GhostGPT (yes, criminal tools need advertising as well) reveal it is explicitly targeted at a number of malicious activities, including coding, malware creation and exploit development. “It can also be used to write convincing emails for business email compromise scams,” the report said, “making it a convenient tool for committing cybercrime.” Some of the features pushed as helpful in this regard included:
- Fast processing—GhostGPT has quick response times that help hackers create malicious content efficiently.
- No logs policy—GhostGPT claims that no user activity is recorded, which is obviously important to those trying to conceal illegal activities.
- Easy access—as already mentioned, GhostGPT is available for purchase on Telegram and allows immediate usage “without the need to use a jailbreak prompt or download an LLM,” making it available to less experienced cybercriminals and those without advanced technical skills.
It should be noted that those promotional materials also mention use for cybersecurity rather than cyberattack. However, Abnormal Security warned that “this claim is hard to believe, given its availability on cybercrime forums and its focus on BEC scams.”