“Fields that I reference when thinking about AI takeover prevention” by Buck

20:01
 
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://nl.player.fm/legal.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post.
Is AI takeover like a nuclear meltdown? A coup? A plane crash?
My day job is thinking about safety measures that aim to reduce catastrophic risks from AI (especially risks from egregious misalignment). The two main themes of this work are the design of such measures (what's the space of techniques we might expect to be affordable and effective) and their evaluation (how do we decide which safety measures to implement, and whether a set of measures is sufficiently robust). I focus especially on AI control, where we assume our models are trying to subvert our safety measures and aspire to find measures that are robust anyway.
Like other AI safety researchers, I often draw inspiration from other fields that contain potential analogies. Here are some of those fields, my opinions on their [...]
---
Outline:
(01:04) Robustness to insider threats
(07:16) Computer security
(09:58) Adversarial risk analysis
(11:58) Safety engineering
(13:34) Physical security
(18:06) How human power structures arise and are preserved
The original text contained 1 image which was described by AI.
---
First published: August 13th, 2024
Source: https://www.lesswrong.com/posts/xXXXkGGKorTNmcYdb/fields-that-i-reference-when-thinking-about-ai-takeover
---
Narrated by TYPE III AUDIO.
