Content provided by SquareX. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SquareX or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://nl.player.fm/legal.

Using LLMs for Offensive Cybersecurity | Michael Kouremetis | Be Fearless Podcast EP 11

9:46

In this DEF CON 32 special, Michael Kouremetis, Principal Adversary Emulation Engineer at MITRE, discusses the Caldera project and his research on LLMs and their implications for cybersecurity. If you’re interested in the intersection of AI and cybersecurity, this is one episode you don’t want to miss!
0:00 Introduction and the story behind Caldera
2:40 Challenges of testing LLMs for cyberattacks
5:05 What are indicators of LLMs’ offensive capabilities?
7:46 How open-source LLMs are a double-edged sword
🔔 Follow Michael and Shourya on:
https://www.linkedin.com/in/michael-kouremetis-78685931/
https://www.linkedin.com/in/shouryaps/
📖 Episode Summary:
In this episode, Michael Kouremetis from MITRE’s Cyber Lab division shares his insights into the intersection of AI and cybersecurity. Michael discusses his work on the MITRE Caldera project, an open-source adversary emulation platform designed to help organizations run red team operations and simulate real-world cyber threats. He also explores the potential risks of large language models (LLMs) in offensive cybersecurity, offering a glimpse into the research he presented at Black Hat on how AI might be used to carry out cyberattacks.
Michael dives into the challenges of testing LLMs for offensive cyber capabilities, emphasizing the need for real-world, operator-specific tests to better understand their potential. He also discusses the importance of community collaboration to raise awareness and create standardized tests for these models.
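For listeners who want to explore Caldera before the episode, here is a minimal sketch of querying a locally running Caldera server from Python. It assumes a default development install (server at http://localhost:8888, the stock red-team API key, and the v2 REST endpoints); the key value and endpoint paths reflect that assumed default configuration and may differ on your deployment.

# Minimal sketch: list agents and operations on a locally running
# MITRE Caldera server via its REST API.
# Assumptions: default dev install at http://localhost:8888 and the
# default red-team API key; adjust both for a real deployment.
import requests

CALDERA_URL = "http://localhost:8888"  # assumed default server address
API_KEY = "ADMIN123"                   # assumed default key; check conf/local.yml

HEADERS = {"KEY": API_KEY, "Content-Type": "application/json"}

def list_agents():
    """Return the agents currently checked in to the server."""
    resp = requests.get(f"{CALDERA_URL}/api/v2/agents", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def list_operations():
    """Return existing adversary emulation operations."""
    resp = requests.get(f"{CALDERA_URL}/api/v2/operations", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for agent in list_agents():
        print(agent.get("paw"), agent.get("platform"), agent.get("host"))
    for op in list_operations():
        print(op.get("name"), op.get("state"))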

🔥 Powered by SquareX
SquareX helps organizations detect, mitigate, and threat-hunt web attacks targeting their users in real time. Find out more about SquareX at https://sqrx.com/


Chapters

1. Introduction and the story behind Caldera (00:00:00)

2. Challenges of testing LLMs for cyberattacks (00:02:40)

3. What are indicators of LLMs’ offensive capabilities? (00:05:05)

4. How open-source LLMs are a double-edged sword (00:07:46)

38 episodes

