Content provided by Foresight Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Foresight Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.

Jan Leike | Superintelligent Alignment

9:57
 
Manage episode 382006351 series 2943147

Jan Leike is a leading voice in AI alignment, formerly a Research Scientist at Google DeepMind, with affiliations at the Future of Humanity Institute and the Machine Intelligence Research Institute. At OpenAI, he co-leads the Superalignment Team and has contributed to advances such as InstructGPT and ChatGPT. He holds a PhD from the Australian National University, and his work focuses on ensuring AI alignment.


Key Highlights

  • The launch of OpenAI's Superalignment team, targeting the alignment of superintelligence in four years.
  • The aim to automate alignment research, currently backed by 20% of OpenAI's computational power.
  • How traditional reinforcement learning from human feedback may fall short in scaling language model alignment.
  • Why the focus is on scalable oversight, generalization, automated interpretability, and adversarial testing to ensure alignment reliability.
  • Experimentation with intentionally misaligned models to evaluate alignment strategies.

Dive deeper into the session: Full Summary


About Foresight Institute

Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.


Allison Duettmann

The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".


Get Involved with Foresight:

Follow Us: Twitter | Facebook | LinkedIn


Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.



Hosted on Acast. See acast.com/privacy for more information.


134 episodes

