Artwork
iconShare
 
Manage episode 493184340 series 2948336
Content provided by EM360Tech. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by EM360Tech or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

Artificial intelligence (AI) is on everyone’s mind, and its impact doesn't escape the cybersecurity industry. The industry experts acknowledge not just the benefits but also the cybersecurity threats of AI integrations.

As Pascal Geenens, Director of Threat Intelligence at Radware, puts it, "It's AI, so everything is changing weekly. What I talked about two weeks ago has already changed again."

The constant change means that malicious actors are not just adopting AI, they're leveraging it to create new threats at a striking pace.

In this episode of The Security Strategist Podcast, Richard Stiennon, an industry Analyst, Author and Chief Research Analyst at IT-Harvest, speaks with Geenens.

They discuss how cybersecurity threats are enhanced by AI. This includes how attackers are using AI tools, the implications of new technologies like agentic AI, and the challenges posed by AI advancements.

The conversation also touches on the role of nation-states in utilising AI for cyber operations, the concept of vibe hacking, and the future of interconnected AI agents.

AI-Driven Attacks Fuelled by Prompt Injection

Malicious hackers evidently first used AI in 2023, specifically through prompt injection attacks on large language models (LLMs) such as ChatGPT.

Attackers would find "evasion techniques" to bypass ethical guardrails, asking questions indirectly to generate malicious scripts or gather information for attacks.

Geenens says, "If you would ask the direct question, how can I commit a murder and get away with it? He would say, no, no, no, that goes against my ethical principles. But there are ways around it."

The game changed with the emergence of offline models and specialised services like WormGPT and FraudGPT. These models, distilled from larger ones and enhanced with hacking-specific information from underground forums, lowered the barrier to entry for aspiring cyber criminals.

"They created their own model and sold it as a service underground. And that model was geared towards helping anyone with questions to interact with a prompt and to make their malware better, increase the effectiveness of their malware," explained Geenens.

This accessibility meant that "more actors would actually move from script kiddie level to a more sophisticated level." Teenagers, in particular, took advantage of these AI assistants. They provided a friendly, non-toxic environment to learn and develop hacking tools, unlike the often unwelcoming underground forums.

The Rise of Agentic AI & Automated Exploits

In 2024, the focus shifted to AI agents, which provide attackers with automated workflows. Unlike LLMs, agents can interact with their environment, gather updated information, execute tools, and even spawn new agents that communicate with each other.

"You can have a manager agent that says, okay, I need to develop something. It’s a big problem here. I need to develop a tool. I have an agent that does the development. I have an agent that does the QA testing,” depicts Geenens. “And then I have another agent who's a problem solver who will help the other tools do their job. And they interact with each other.”

This agentic capability majorly hastens the exploitation of vulnerabilities. Research showed that AI agents can quickly rebuild proof-of-concept exploits for published CVEs.

Geenens stressed the dramatic reduction in time: "Earlier, when the CVE was published, it took 24 hours-48 hours and a security researcher who typically posted a proof of concept online in Python before the actual attacks in the wild would start.”

“But now with those agents and those workflows has been proven as much easier to get access and to get a proof of concept. That window might now reduce from 48 hours to a couple of minutes,” he added.

Takeaways

  • AI is changing the landscape of cybersecurity threats.
  • Attackers are using AI to enhance their hacking capabilities.
  • Guardrails in AI are not foolproof against malicious intent.
  • Teenagers are increasingly entering the cybercrime space due to AI tools.
  • Agentic AI allows for automated workflows in attacks.
  • The sophistication of attacks has not drastically changed, but entry barriers have lowered.
  • Nation-states are using AI for disinformation and phishing.
  • Vibe hacking represents a new frontier in automated vulnerability discovery.
  • MCP and agent-to-agent protocols will shape the future of AI interactions.
  • AI will be necessary to combat AI-driven threats.

Chapters

00:00 AI in Cybersecurity: The New Frontier

06:51 The Evolution of Hacking: From Script Kiddies to AI-Enhanced Threats

12:48 Nation States and AI: The New Age of Cyber Warfare

15:12 Vibe Hacking: The Future of Coding and Security

19:14 The Internet of Agents: A New Era in Cybersecurity

25:11 Emerging Threats: Indirect Prompt Injection Attacks

  continue reading

162 episodes