How generative AI is reshaping the digital threat landscape and what organizations must do to defend against evolving cyber risks.
As generative AI continues to revolutionize industries, its impact on cybersecurity remains a topic of intense discussion. While AI-powered tools enhance security measures, they also provide cybercriminals with unprecedented capabilities to launch sophisticated attacks. In an exclusive interview with vpnMentor, cybersecurity expert Jeremiah Fowler shares insights into how generative AI is reshaping the digital threat landscape and what organizations must do to defend against evolving cyber risks.
The Growing Role of AI in Cybersecurity
Artificial intelligence (AI) has become an integral part of cybersecurity, both for defense and offense. According to Fowler, the global AI-based cybersecurity market was valued at $15 billion in 2021 and is expected to grow to nearly $135 billion by 2030. This projected ninefold increase underscores the industry's growing reliance on AI-driven security solutions.
“Generative AI plays a crucial role in identifying vulnerabilities, crafting defensive strategies, and enhancing incident response,” Fowler explained. “However, the same technology is being exploited by cybercriminals, enabling them to develop more sophisticated and evasive attack methods.”
AI has enabled security teams to detect threats faster and automate responses. However, it has also lowered the barriers for cybercriminals, allowing even those with limited technical expertise to launch complex attacks.
The Dark Side: AI-Enabled Cyber Threats
Fowler emphasized that generative AI is already being used to develop new attack vectors. Criminals leverage AI to automate phishing campaigns, create deepfake content, generate malware, and identify system vulnerabilities at an unprecedented scale.
“Not long ago, phishing attempts were easier to spot due to poor grammar or awkward phrasing,” he noted. “Now, AI can generate flawless, personalized phishing messages, making them almost indistinguishable from legitimate communication.”
AI-powered social engineering attacks have become more refined. Cybercriminals use generative AI to create realistic fake identities, deepfake audio and video, and sophisticated scam operations. This has led to an alarming increase in AI-driven fraud cases.
According to Deep Instinct's Voice of SecOps report, 75% of security professionals witnessed an increase in cyberattacks in 2023, with 85% attributing the rise to attackers' use of generative AI. Malware, traditionally reliant on static signatures, can now adapt in real time, evading detection by modifying its behavior and source code.
The Rise of AI-Powered Malware and Cybercrime Tools
Fowler highlighted the growing availability of malicious AI tools on the dark web. Two examples—FraudGPT and WormGPT—demonstrate how AI is being weaponized for cybercrime.
- FraudGPT specializes in generating deceptive content for phishing attacks and scams.
- WormGPT focuses on creating malware and automating hacking attempts.
“These tools pose a significant threat because they allow cybercriminals with little to no technical expertise to conduct highly sophisticated attacks,” Fowler warned. “With just a few command prompts, attackers can launch large-scale cybercrimes with minimal effort.”
AI-powered malware, such as BlackMamba, further exemplifies the growing risk. Developed as a proof-of-concept by HYAS Labs, BlackMamba constantly rewrites its code to evade detection, bypassing even the most advanced Endpoint Detection and Response (EDR) systems.
State-Sponsored AI Cyber Threats
Fowler also discussed the involvement of state-sponsored actors in AI-driven cyber threats. The 2023 Microsoft Digital Defense Report revealed that state actors from Russia, China, Iran, and North Korea have attempted to exploit AI for espionage and cyber warfare.
“These groups use AI to conduct spear-phishing and hacking campaigns, and to research technologies such as satellite and radar systems,” Fowler explained. “In some cases, AI is being deployed to manipulate social media narratives and influence political events.”
Hybrid disinformation campaigns—where AI-generated content is combined with human manipulation—have been particularly effective. The Russian Internet Research Agency (IRA) has used AI-powered bots to create fake social media profiles and spread disinformation, influencing global political landscapes.
The Most Common Generative AI Cyber Threats
Fowler identified several key cyber threats fueled by generative AI:
- Advanced Phishing Attacks: AI can generate highly personalized phishing emails, increasing their effectiveness.
- Deepfake-Based Social Engineering: AI-generated videos and audio can impersonate executives or family members, facilitating financial fraud.
- AI-Generated Malware and Ransomware: AI-created malware can mutate, evading detection by cybersecurity systems.
- Automated Vulnerability Discovery: AI can rapidly scan and exploit system vulnerabilities, accelerating cyberattacks.
Defending Against AI-Powered Cyber Threats
Organizations must adopt multi-layered cybersecurity strategies to counteract AI-enhanced threats. Fowler outlined several key defense measures:
- AI-Powered Threat Detection: Implement AI-driven security tools capable of identifying AI-generated threats.
- Anomaly Detection & Behavioral Monitoring: Track network behavior to detect suspicious activity.
- Zero-Trust Architecture: Trust no user or system by default; enforce strict authentication for every access request.
- Network Segmentation: Isolating sections of a network can limit the spread of malware.
- Human Oversight: While AI enhances security, human analysts remain essential for contextual decision-making.
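Anomaly detection and behavioral monitoring, the second measure above, can start from something very simple: flagging time buckets whose request volume deviates sharply from the baseline. The sketch below uses a median/MAD score rather than mean/standard deviation so a single large spike cannot mask itself by inflating the baseline; the threshold and bucket data are illustrative assumptions, not a production tuning.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return indices of time buckets whose request count is anomalous.

    Uses the modified z-score (median absolute deviation), which is
    robust to the very outliers it is trying to detect. `threshold`
    is the conventional 3.5 cutoff; tune it for your own traffic.
    """
    med = median(counts)
    abs_dev = [abs(c - med) for c in counts]
    mad = median(abs_dev)
    if mad == 0:
        # Perfectly flat baseline: any deviation at all is suspicious.
        return [i for i, c in enumerate(counts) if c != med]
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Illustrative per-minute request counts with one suspicious spike:
print(flag_anomalies([100, 103, 98, 101, 99, 102, 900, 100]))  # → [6]
```

Real deployments would feed this kind of scoring with per-user or per-host features (logins, bytes transferred, failed authentications) rather than a single counter, but the principle — model the baseline, alert on deviation — is the same.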
“The key to combating AI-powered cyber threats is continuous monitoring and rapid response,” Fowler emphasized. “Education is also crucial—employees and users must be aware of common cyber threats and how to recognize them.”
Real-World Example: AI in Cyberattacks
One of the most notable AI-driven cyberattacks occurred in Hong Kong in early 2024. Cybercriminals used deepfake video and audio to impersonate a company’s CFO and other senior staff on a conference call, convincing a finance employee to transfer approximately $25 million. The realistic impersonation raised no suspicion, and the fraud succeeded.
“AI is making it harder to distinguish between real and fake content,” Fowler said. “Without strict verification protocols, organizations remain vulnerable to these attacks.”
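One way to operationalize the verification protocols Fowler describes is dual-control approval with out-of-band confirmation: a large transfer proceeds only when two independent approvers confirm it over a channel different from the one the request arrived on. The sketch below is a minimal illustration of that policy check; the field names, threshold, and channel labels are hypothetical.

```python
def approve_transfer(request, approvals, threshold_usd=10_000):
    """Dual-control check for payment requests.

    Transfers at or above `threshold_usd` require at least two distinct
    approvers, each confirming over a channel different from the one the
    request came in on (e.g. a phone call for a video-call request) --
    a deepfaked video call alone can never authorize the payment.
    """
    if request["amount_usd"] < threshold_usd:
        return True
    out_of_band = [a for a in approvals
                   if a["channel"] != request["channel"]]
    approvers = {a["approver"] for a in out_of_band}
    return len(approvers) >= 2

# A request arriving over video chat cannot self-approve:
req = {"amount_usd": 1_000_000, "channel": "video_call"}
print(approve_transfer(req, []))  # → False
confirmations = [{"approver": "alice", "channel": "phone"},
                 {"approver": "bob", "channel": "phone"}]
print(approve_transfer(req, confirmations))  # → True
```

The point of the design is that the attacker must now compromise two people on two independent channels, which deepfaking a single video call does not achieve.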
The Role of Generative AI in Incident Response
Despite the risks, AI also offers immense benefits for cybersecurity defenses. Generative AI can enhance incident response by:
- Detecting Threats in Real Time: AI can identify security breaches within seconds, minimizing damage.
- Automating Response Actions: AI-driven security tools can isolate compromised systems and contain threats automatically.
- Simulating Attack Scenarios: AI can generate realistic cyberattack simulations, helping organizations test and improve their defenses.
- AI-Powered Honeypots: Organizations can use AI to lure attackers into fake environments, studying their tactics and improving cybersecurity strategies.
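At their core, the honeypots mentioned above are decoy services that exist only to be attacked: they accept connections, present a plausible banner, and record everything the intruder sends. The sketch below is a deliberately minimal, assumption-laden illustration (the port, banner string, and log format are all invented for the example); real honeypots add AI-generated responses, sandboxing, and careful isolation from production systems.

```python
import socket
from datetime import datetime, timezone

def log_entry(addr, data):
    """Format one honeypot observation as a timestamped log line."""
    ts = datetime.now(timezone.utc).isoformat()
    return f"{ts} {addr[0]}:{addr[1]} sent {len(data)} bytes: {data[:64]!r}"

def run_honeypot(host="0.0.0.0", port=2222,
                 banner=b"SSH-2.0-OpenSSH_8.9\r\n"):
    """Listen on a decoy port, present a fake service banner, and record
    whatever each attacker sends. Touches no real service or data."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(banner)          # bait: look like a real SSH daemon
            data = conn.recv(1024)        # capture the attacker's first move
            print(log_entry(addr, data))  # feed logs to analysis tooling
```

Everything a honeypot records is attacker activity by definition, which makes its logs unusually high-signal input for the AI-driven detection tools discussed earlier.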
The Future of AI in Cybersecurity
As AI technology continues to evolve, so too will its applications in cybersecurity—both for defense and attack. Fowler believes that AI-driven security solutions will eventually become more effective at countering AI-enhanced cyber threats, but organizations must act now to stay ahead.
“We are entering a new era where AI plays a central role in cybersecurity,” Fowler concluded. “Organizations must adapt, implement proactive defenses, and ensure that AI is used as a tool for protection rather than exploitation.”
Final Thoughts
The impact of generative AI on cybersecurity is undeniable. While it offers powerful tools for threat detection and response, it also presents significant risks if misused. Organizations must balance the benefits and challenges of AI, investing in advanced security measures to combat evolving cyber threats.
By leveraging AI responsibly and staying vigilant, businesses and individuals can safeguard their digital environments against the growing threat of AI-powered cyberattacks.