Cheap, accessible malicious AI

On dark web forums, tools like WormGPT, FraudGPT, and EvilCoder AI are no longer a secret: they are openly advertised as regular subscription services. What is worrying is that for just a few hundred dollars a year, anyone can own a dangerous "AI assistant" capable of carrying out sophisticated cyberattacks.


Domestic and international security experts have recently recorded a rapid increase in attacks originating from these malicious AI tools. FraudGPT is typical: for about 200 USD per month, buyers get a tool that immediately generates batches of personalized phishing emails with realistic content, or writes complete exploit code for security vulnerabilities.

WormGPT, a cheaper alternative, costs just 60 EUR per month and offers the ability to produce polymorphic malware that continuously mutates to bypass most traditional antivirus systems. This poses a new challenge for corporate security teams, because attackers no longer need advanced programming skills; they just need to know how to use these ready-made tools.

Malicious AI models like WormGPT and FraudGPT are built on language-model architectures similar to ChatGPT's, but have been tweaked to remove their ethical guardrails. These tools can:

Automatically generate phishing emails with perfectly personalized content, based on information collected from social media platforms or public data sources.

Write new types of malware completely automatically, continuously changing signatures to avoid detection by traditional hash-based antivirus solutions.

Automatically develop exploit code that targets known vulnerabilities (CVEs), making it easier for attackers to penetrate vulnerable systems.

Dark web forums also provide support and detailed instructions, allowing anyone, even users with no technical background, to launch an attack.

In the first quarter of 2025, many international businesses recorded serious attacks originating from these AI tools. Several companies in the US and Europe reported that they faced unprecedented AI phishing and malware campaigns, rendering traditional security systems almost completely useless.

Deploying proactive AI defenses immediately

Recognizing this reality, CMC Telecom's security experts have quickly deployed dedicated defense solutions to counter these cheap, easily accessible AI attack tools:

Deception Technology: Actively creates a simulated environment within the system, causing AI-driven attackers to waste time on decoy targets and helping businesses identify and analyze attack techniques early.

AI-powered Sandboxing: All suspicious emails and files are thoroughly examined in a virtual isolation environment to detect signs of polymorphic malware.


SOAR (Security Orchestration, Automation and Response): When signs of attack from AI-driven malware are detected, the system automatically executes rapid response steps, minimizing both damage and response time.

Advanced Threat Intelligence: Analyzes new attack data daily and continuously updates indicators of compromise (IOCs), helping businesses stay one step ahead of attackers.
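To illustrate the threat-intelligence step above, a minimal sketch of hash-based IOC matching follows. The feed contents and function names here are illustrative assumptions for explanation, not CMC Telecom's actual implementation; in practice the IOC set would be refreshed daily from a threat-intelligence source.

```python
import hashlib

# Illustrative IOC set (assumed, made up for this sketch): in a real
# deployment these SHA-256 hashes would come from a daily threat feed.
IOC_HASHES = {
    # SHA-256 of an empty file, used here only as a stand-in entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_ioc(data: bytes, iocs: set[str]) -> bool:
    """Check whether an artifact's hash appears in the current IOC set."""
    return sha256_of(data) in iocs

# An empty payload matches the stand-in IOC; other content does not.
print(matches_ioc(b"", IOC_HASHES))
```

Simple hash matching like this is precisely what polymorphic malware evades by mutating its signature, which is why the article pairs threat intelligence with behavioral analysis in a sandbox.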

These solutions are operated by CMC Telecom's team of security experts, who have repeatedly been honored in Apple's Hall of Fame and hold international certifications such as OSCP, CREST, and CHFI.

According to Mr. Luu The Hien, Deputy Director of CMC Telecom's Information Security Service Center, tools like WormGPT and FraudGPT are no longer a thing of the future but have become a real and present threat. Business leaders need to understand clearly that they cannot afford to wait or delay; they must adopt appropriate defense technologies immediately.

CMC Telecom's team of experts believes that only by proactively applying AI to security can businesses counter sophisticated, low-cost AI-driven cyberattacks, especially as those attacks now occur at a faster rate than ever.

Thuy Nga

Source: https://vietnamnet.vn/chu-dong-phong-ve-truoc-cac-cong-cu-tan-cong-ai-2415360.html