Credit: TheDigitalArtist from Pixabay
As ChatGPT gains popularity, its influence on work habits and online experiences is becoming evident. AI chatbots have piqued the curiosity of even those who have never directly used them. The rise of generative AI models, however, has introduced a new dimension of risk. Recent discussions on dark web forums offer glimpses of FraudGPT, a malicious counterpart that cybercriminals are exploiting to their advantage.
Netenrich researchers have uncovered a disturbing artificial intelligence tool named "FraudGPT." Built explicitly for malicious activity, the bot handles tasks such as writing spear-phishing emails, creating cracking tools, carding, and more. It is readily available for purchase on various dark web marketplaces as well as on Telegram.
Comparable to ChatGPT in structure but able to generate content for cyberattacks, FraudGPT has become a commodity on the dark web and on Telegram. Netenrich's threat research team first spotted the tool being advertised in July 2023. One of FraudGPT's selling points is that it lacks the safety guardrails that make ChatGPT refuse dubious requests.
FraudGPT reportedly receives regular updates and draws on a range of artificial intelligence techniques. It is sold on a subscription basis: $200 per month or $1,700 per year.
How does it function?
The Netenrich team explored FraudGPT by purchasing a subscription and running tests. The interface closely resembles ChatGPT's, with a history of past requests in the left sidebar and the chat window dominating the screen. To get a response, users simply type a query into the input box and press Enter.
For a test scenario involving a phishing email targeting a bank, minimal input was required. Simply including the bank's name in the prompt was enough for FraudGPT to complete the task; it even suggested where in the text to insert malicious links. The tool can also create scam landing pages designed to harvest personal information from unsuspecting visitors.
Another test prompted FraudGPT to identify frequently visited or easily exploited online resources, information that could help hackers plan future attacks. An advertisement for the software touted its ability to generate malicious code for undetectable malware, to search for vulnerabilities, and to identify targets.
The Netenrich team also found that FraudGPT's seller had previously advertised hacking-for-hire services and has ties to a similar tool named WormGPT.
The revelation of FraudGPT underscores the importance of vigilance. Whether hackers have already used these technologies to develop novel threats remains an open question. What is clear is that FraudGPT and similar tools give cybercriminals efficiency gains: phishing emails and landing pages can be produced in seconds.
Users should therefore remain cautious about sharing personal information and adhere to cybersecurity best practices. Cybersecurity professionals, in turn, should keep their threat detection tools up to date, since malicious actors can use programs like FraudGPT to directly target and infiltrate critical computer networks.
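To make that advice concrete, below is a minimal sketch in Python of one defensive check an email filter might apply: flagging links in a message that claims to come from a bank but point somewhere else. The `TRUSTED_DOMAINS` set, the `suspicious_links` helper, and the sample message are all hypothetical illustrations, not part of any real product; production defenses rely on dedicated email-security gateways rather than hand-rolled heuristics like this one.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the impersonated organization
# actually uses; any link pointing elsewhere is treated as suspicious.
TRUSTED_DOMAINS = {"examplebank.com"}

# Rough pattern for extracting http/https URLs from plain text.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_links(email_body: str) -> list[str]:
    """Return links whose host is not the trusted domain or a subdomain of it.

    A toy heuristic only: it catches the crudest lures, such as a "bank"
    email whose links resolve to an unrelated domain.
    """
    flagged = []
    for url in URL_PATTERN.findall(email_body):
        host = urlparse(url).hostname or ""
        # Accept the trusted domain itself and its subdomains; flag the rest.
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    body = (
        "Dear customer, your ExampleBank account is locked. "
        "Verify now: http://examplebank.security-alerts.xyz/login"
    )
    print(suspicious_links(body))
    # ['http://examplebank.security-alerts.xyz/login']
```

The lookalike link in the sample is flagged because its hostname merely starts with "examplebank" rather than belonging to the trusted domain, the same trick many AI-generated phishing emails rely on.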
The analysis of FraudGPT is a stark reminder that hackers will keep adapting and evolving. Even open-source software harbors security vulnerabilities that attackers can exploit. Both casual internet users and cybersecurity experts need to stay abreast of emerging technologies and their associated threats. The key is to remain aware of the risks while engaging with programs like ChatGPT.