Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.

“Frontier Models are Capable of In-context Scheming” by Marius Hobbhahn, AlexMeinke, Bronson Schoen

14:46
This is a brief summary of what we believe to be the most important takeaways from our new paper and from our findings shown in the o1 system card. We also specifically clarify what we think we did NOT show.
Paper: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
Twitter about paper: https://x.com/apolloaisafety/status/1864735819207995716
Twitter about o1 system card: https://x.com/apolloaisafety/status/1864737158226928124
What we think the most important findings are
Models are now capable enough to do in-context scheming reasoning
We say an AI system is “scheming” if it covertly pursues misaligned goals, hiding its true capabilities and objectives. We think that in order to scheme, models likely need to be goal-directed, situationally aware, and capable enough to reason about scheming as a strategy. In principle, models might acquire situational awareness and stable long-term goals during training, and then scheme in pursuit of those goals. We only study the simpler case of “in-context scheming,” where [...]
---
Outline:
(00:28) What we think the most important findings are
(00:33) Models are now capable enough to do in-context scheming reasoning
(03:53) The scheming actions can sometimes be quite subtle
(05:05) Models sometimes double down on their deception
(06:35) Models sometimes scheme without any goal nudging
(07:34) Sonnet-3.5 and Opus-3 can sandbag in pursuit of helpfulness
(08:22) Sonnet-3.5 can sandbag without relying on CoT reasoning
(09:28) What we are not claiming:
(10:33) Takeaways
(10:36) Chain-of-thought monitoring
(11:42) Chain-of-thought access
(13:27) Inability safety cases now have to be more specific
The original text contained 7 images which were described by AI.
---
First published:
December 5th, 2024
Source:
https://www.lesswrong.com/posts/8gy7c8GAPkuu6wTiX/frontier-models-are-capable-of-in-context-scheming
---
Narrated by TYPE III AUDIO.
---
