
Image: Sigmund (Unsplash)
Cybersecurity researchers have identified a new AI tool dubbed “FraudGPT” that has been circulating on dark web marketplaces and private chat channels since July 22, 2023.
FraudGPT is marketed as an all-in-one toolkit for online criminals. Its advertised features include crafting highly targeted phishing emails, generating undetectable malware, building scam web pages, identifying vulnerable sites, and even supplying tutorials on hacking techniques.
According to John Bambenek, principal threat hunter at Netenrich, generative AI tools give attackers the same core benefit they give technology professionals: the ability to operate faster and at greater scale. Malicious actors can quickly spin up fraud campaigns and launch multiple attacks at once.
Netenrich’s threat research team has been closely tracking the activity surrounding FraudGPT and its anonymous creator. According to an advisory the company published on Tuesday, the threat actor was previously an established vendor on several dark web marketplaces.
In a calculated move to sidestep the exit scams that often accompany those marketplaces’ shutdowns, the actor set up shop on a private chat channel, a more resilient venue for selling their malicious services.
FraudGPT subscriptions range from $200 per month to $1,700 per year, and the seller claims more than 3,000 confirmed sales and reviews.
In response to the growing threat, experts have stressed the need for continuous innovation in cyber defenses. Pyry Åvist, co-founder and chief technology officer at Hoxhunt, offered his perspective.
OpenAI has been waging a constant battle against jailbreaks, but it is an endless game: rules are established, then broken; new ones are introduced, and those are broken in turn. Still, in light of the emergence of malicious GPT models, it is worth noting that effective security awareness, phishing, and behavior-change training works, Åvist said.
According to Åvist, users with more training and tenure in a security awareness and behavior-change program showed markedly greater resistance to phishing attacks, whether written by humans or by AI. Failure rates among less-trained users exceeded 14%, he added, while rates among more experienced users sat between 2% and 4%.
Netenrich’s FraudGPT advisory came just two weeks after SlashNext’s discovery of WormGPT on July 13.
According to Patrick Harr, CEO of SlashNext, FraudGPT’s debut so soon after WormGPT marks the start of a wave of tools that will exploit generative AI. Security teams, he said, must adopt AI-powered tools of their own to gain the speed, accuracy, and automation needed to keep these threats from turning into breaches.