Artwork

Inhoud geleverd door Tessl. Alle podcastinhoud, inclusief afleveringen, afbeeldingen en podcastbeschrijvingen, wordt rechtstreeks geüpload en geleverd door Tessl of hun podcastplatformpartner. Als u denkt dat iemand uw auteursrechtelijk beschermde werk zonder uw toestemming gebruikt, kunt u het hier beschreven proces https://nl.player.fm/legal volgen.
Player FM - Podcast-app
Ga offline met de app Player FM !

How Attackers Trick AI: Lessons from Gandalf’s Creator

54:35
 
Delen
 

Manage episode 472050790 series 3585084
Inhoud geleverd door Tessl. Alle podcastinhoud, inclusief afleveringen, afbeeldingen en podcastbeschrijvingen, wordt rechtstreeks geüpload en geleverd door Tessl of hun podcastplatformpartner. Als u denkt dat iemand uw auteursrechtelijk beschermde werk zonder uw toestemming gebruikt, kunt u het hier beschreven proces https://nl.player.fm/legal volgen.

🔒 How Secure is AI? Gandalf’s Creator Exposes the Risks 🔥
AI security is under attack, and hackers are finding new ways to manipulate AI systems. In this episode, Guy Podjarny sits down with Mateo Rojas-Carulla, co-founder of Lakera and creator of Gandalf, to break down the biggest threats facing AI today—from prompt injections and jailbreaks to data poisoning and agent manipulation.
What You’ll Learn:
- How attackers exploit AI vulnerabilities in real-world applications
- Why AI models struggle to separate instructions from external data
- How Gandalf’s 60M+ attack attempts revealed shocking insights
- What the Dynamic Security Utility Framework (DSEC) means for AI safety
- Why red teaming is critical for preventing AI disasters
Whether you’re a developer, security expert, or just curious about AI risks, this episode is packed with must-know insights on keeping AI safe in an evolving landscape.
💡 Can AI truly be secured? Or will attackers always find a way? Drop your thoughts in the comments! 👇
Watch the episode on YouTube: https://youtu.be/RKCvlJT_r4s

Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh
Ask us questions: podcast@tessl.io

  continue reading

Hoofdstukken

1. How Attackers Trick AI: Lessons from Gandalf’s Creator (00:00:00)

2. Over-Permission in AI Systems (00:02:00)

3. Nebulous AI Functionality (00:07:00)

4. Jailbreaks and Prompt Injections Attacks (00:10:00)

5. Introducing the Dynamic Security Utility Framework (00:18:34)

6. Security in Agentic Systems (00:23:34)

7. Red Teaming for Ai Security Testing (00:28:34)

8. The Future of Agentic Systems (00:35:34)

9. LangChain and Real-World Vulnerabilities (00:42:34)

10. Proactive Security Strategies (00:48:34)

50 afleveringen

Artwork
iconDelen
 
Manage episode 472050790 series 3585084
Inhoud geleverd door Tessl. Alle podcastinhoud, inclusief afleveringen, afbeeldingen en podcastbeschrijvingen, wordt rechtstreeks geüpload en geleverd door Tessl of hun podcastplatformpartner. Als u denkt dat iemand uw auteursrechtelijk beschermde werk zonder uw toestemming gebruikt, kunt u het hier beschreven proces https://nl.player.fm/legal volgen.

🔒 How Secure is AI? Gandalf’s Creator Exposes the Risks 🔥
AI security is under attack, and hackers are finding new ways to manipulate AI systems. In this episode, Guy Podjarny sits down with Mateo Rojas-Carulla, co-founder of Lakera and creator of Gandalf, to break down the biggest threats facing AI today—from prompt injections and jailbreaks to data poisoning and agent manipulation.
What You’ll Learn:
- How attackers exploit AI vulnerabilities in real-world applications
- Why AI models struggle to separate instructions from external data
- How Gandalf’s 60M+ attack attempts revealed shocking insights
- What the Dynamic Security Utility Framework (DSEC) means for AI safety
- Why red teaming is critical for preventing AI disasters
Whether you’re a developer, security expert, or just curious about AI risks, this episode is packed with must-know insights on keeping AI safe in an evolving landscape.
💡 Can AI truly be secured? Or will attackers always find a way? Drop your thoughts in the comments! 👇
Watch the episode on YouTube: https://youtu.be/RKCvlJT_r4s

Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh
Ask us questions: podcast@tessl.io

  continue reading

Hoofdstukken

1. How Attackers Trick AI: Lessons from Gandalf’s Creator (00:00:00)

2. Over-Permission in AI Systems (00:02:00)

3. Nebulous AI Functionality (00:07:00)

4. Jailbreaks and Prompt Injections Attacks (00:10:00)

5. Introducing the Dynamic Security Utility Framework (00:18:34)

6. Security in Agentic Systems (00:23:34)

7. Red Teaming for Ai Security Testing (00:28:34)

8. The Future of Agentic Systems (00:35:34)

9. LangChain and Real-World Vulnerabilities (00:42:34)

10. Proactive Security Strategies (00:48:34)

50 afleveringen

Alle afleveringen

×
 
Loading …

Welkom op Player FM!

Player FM scant het web op podcasts van hoge kwaliteit waarvan u nu kunt genieten. Het is de beste podcast-app en werkt op Android, iPhone en internet. Aanmelden om abonnementen op verschillende apparaten te synchroniseren.

 

Korte handleiding

Luister naar deze show terwijl je op verkenning gaat
Spelen