Content provided by Soroush Pour. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Soroush Pour or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

1:37:19
 
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, for a "speed run" overview of all the major technical research directions in AI alignment — a great way to quickly get a broad picture of the field of technical AI alignment.
In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to help listeners quickly understand the technical alignment research landscape.
We talk to Thomas about a huge breadth of technical alignment areas including:
* Prosaic alignment
* Scalable oversight (e.g. RLHF, debate, IDA)
* Interpretability
* Heuristic arguments, from ARC
* Model evaluations
* Agent foundations
* Other areas, more briefly:
  * Model splintering
  * Out-of-distribution (OOD) detection
  * Low impact measures
  * Threat modelling
  * Scaling laws
  * Brain-like AI safety
  * Inverse reinforcement learning (IRL)
  * Cooperative AI
  * Adversarial training
  * Truthful AI
  * Brain-machine interfaces (Neuralink)
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Thomas --
Thomas studied Computer Science & Mathematics at U. Michigan, where he first did ML research in the field of computer vision. After graduating, he completed the MATS AI safety research scholar program and then did a stint at MIRI as a Technical AI Safety Researcher. Earlier this year, he moved into AI policy, co-founding the Center for AI Policy, a nonprofit, nonpartisan organisation focused on getting the US government to adopt policies that would mitigate national security risks from AI. The Center for AI Policy is not connected to foreign governments or commercial AI developers and is committed to the public interest.
* Center for AI Policy - https://www.aipolicy.us
* LinkedIn - https://www.linkedin.com/in/thomas-larsen/
* LessWrong - https://www.lesswrong.com/users/thomas-larsen
-- Further resources --
* Thomas' post, "What Everyone in Technical Alignment is Doing and Why" https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is
* Please note this post is from Aug 2022. The podcast should be more up-to-date, but this post is still a valuable and relevant resource.

