AI and ChatGPT - Security, Privacy and Ethical Ramifications - Ep 62

27:13
Content provided by Reimagining Cyber. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Reimagining Cyber or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.

This episode features “the expert in ChatGPT”, Stephan Jou, CTO of Security Analytics at OpenText Cybersecurity.
“The techniques that we are developing are becoming so sophisticated and scalable that it's really become the only viable method to detect increasingly sophisticated and subtle attacks when the data volumes and velocity are so huge. So think about nation state attacks where you have very advanced adversaries that are using uncommon tools that won't be on any sort of blacklist.”
“In the past five years or so, I've become increasingly interested in the ethical and responsible application of AI. Pure AI is kind of like pure math. It's neutral. It doesn't have an angle to it, but applied AI is a different story. So all of a sudden you have to think about the implications of your AI product, the data that you're using, and whether your AI product can be weaponized or misled.”

“You call me the expert in ChatGPT. I sort of both love it and hate it. I love it because people like me are starting to get so much attention, and I hate it because it's sort of highlighted some areas of potential risk associated with AI that people are only now starting to realize.”
“I'm very much looking forward to using technologies that can understand code and code patterns and how code gets assembled together and built into a product in a human-like way to be able to sort of detect software vulnerabilities. That's a fascinating area of development and research that's going on right now in our labs.”
“[on AI poisoning] The good news is, this is very difficult to do in practice. A lot of the papers that we see on AI poisoning, they're much more theoretical than they are practical.”
Follow or subscribe to the show on your preferred podcast platform.
Share the show with others in the cybersecurity world.
Get in touch via reimaginingcyber@gmail.com
