S25 Ep5: Boosting Business Success: Unleashing the potential of human and AI collaboration

Duration: 22:48
 
Today, Steve and producer Tavia Gilbert discuss the impact artificial intelligence is having on the threat landscape and how businesses can leverage this new technology and collaborate with it successfully.
Key Takeaways:
1. AI risk is best presented in business-friendly terms when seeking to engage executives at the board level.
2. Steve Durbin takes the position that AI will not replace leadership roles, as human strengths like emotional intelligence and complex decision making are still essential.
3. AI risk management must be aligned with business objectives, and ethical considerations must be integrated into AI development.
4. Since AI regulation will be patchy, effective mitigation and security strategies must be built in from the start.
Tune in to hear more about:
1. AI’s impact on cybersecurity, including industrialized high-impact attacks and manipulation of data (0:00)
2. AI collaboration with humans, focusing on benefits and risks (4:12)
3. AI adoption in organizations, cybersecurity risks, and board involvement (11:09)
4. AI governance, risk management, and ethics (15:42)
Standout Quotes:

1. Cyber leaders have to present security issues in terms that board-level executives can understand and act on, and that's certainly the case when it comes to AI. So that means reporting AI risk in financial, economic, operational terms, not just in technical terms. If you report in technical terms, you will lose the room exceptionally quickly. It also involves aligning AI risk management with business needs by, you know, identifying how AI risk management and resilience are going to help to meet business objectives. And if you can do that, as opposed to losing the room, you will certainly win the room. -Steve Durbin
2. AI, of course, does provide some solution to that, in that if you can provide it with enough examples of what good looks like and what bad looks like in terms of data integrity, then the systems can, to an extent, differentiate between what is correct and what is incorrect. But the fact remains that data manipulation, changing data, whether that be in software code, whether it be in information that we're storing, all of those things remain a major concern. -Steve Durbin
3. We can’t turn the clock back. So at the ISF, you know, our goal is to try to help organizations figure out how to use this technology wisely. So we're going to be talking about ways humans and AI complement each other, such as collaboration, automation, problem solving, monitoring, oversight, all of those sorts of areas. And I think for these to work, and for us to work effectively with AI, we need to start by recognizing the strengths both we as people and also AI models can bring to the table. -Steve Durbin
4. I also think that boards really need to think through the impact of what they're doing with AI on the workforce, and indeed, on other stakeholders. And last, but certainly not least, what the governance implications of the use of AI might look like. And so therefore, what new policies and controls need to be implemented. -Steve Durbin
5. We need to be paying specific attention to things like ethical risk assessment, working to detect and mitigate bias, ensure that there is, of course, informed consent when somebody interacts with AI. And we do need, I think, to be particularly mindful about bias, you know? Bias detection, bias mitigation. Those are fundamental, because we could end up making all sorts of decisions or having the machines make decisions that we didn't really want. So there's always going to be in that area, I think, in particular, a role for human oversight of AI activities. -Steve Durbin
Mentioned in this episode:
Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.