
Dark Web Markets Offer Latest Hacking AI Tool “FraudGPT”, Promising To Automate Scams

Researchers at Netenrich have uncovered a dangerous AI tool known as “FraudGPT” being sold on the dark web. The tool is designed to generate harmful content, including malicious code, phishing pages, scam emails, and more.

FraudGPT was launched on July 23rd and is available for $200 per month or $1,700 per year. The alarming aspect is that malicious actors can replicate this kind of technology without ethical safeguards, allowing cybercriminals to exploit AI without restrictions.

Although Google initially developed transformers for internal purposes, OpenAI’s success with ChatGPT has attracted widespread interest, including from malicious actors.

Recently, the creator of FraudGPT began promoting it on hacking forums, claiming it would revolutionize online fraud. With this tool, attackers can easily generate convincing text to deceive victims and potentially carry out a range of other malicious activities.

Unlike ethical AI platforms, FraudGPT is capable of generating malicious code and trafficking stolen information. It can even scan websites for vulnerabilities to identify potential infiltration targets. As a centrally operated, subscription-based service, it raises concerns about the scale of its impact.

Furthermore, at $200 per month, FraudGPT costs considerably more than WormGPT, which was priced at $60 per month. Its creator boasts over 3,000 sales, making it a significant threat.

To avoid falling victim to scams created with FraudGPT, users should remain cautious even of well-crafted messages, since the broken English that typically gives away scams may be absent.

Moreover, the emergence of dangerous AI tools like FraudGPT underscores the pressing need for more robust safeguards against AI misuse in the future.