Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://nl.player.fm/legal

Sasha Luccioni: Connecting the Dots Between AI's Environmental and Social Impacts

1:03:07
 
 


In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.

Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to make AI systems more sustainable. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), she is passionate about catalyzing impactful change, organizing events, and mentoring under-represented minorities in the AI community.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:43) Sasha’s background

* (01:52) How Sasha became interested in sociotechnical work

* (03:08) Larger models and theory of change for AI/climate work

* (07:18) Quantifying emissions for ML systems

* (09:40) Aggregate inference vs. training costs

* (10:22) Hardware and data center locations

* (15:10) More efficient hardware vs. bigger models — Jevons paradox

* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports

* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs

* (28:22) General vs. task-specific models

* (31:20) Architectures and efficiency

* (33:45) Sequence-to-sequence architectures vs. decoder-only

* (36:35) Hardware efficiency/utilization

* (37:52) Estimating the carbon footprint of Bloom and lifecycle assessment

* (40:50) Stable Bias

* (46:45) Understanding model biases and representations

* (52:07) Future work

* (53:45) Metaethical perspectives on benchmarking for AI ethics

* (54:30) “Moral benchmarks”

* (56:50) Reflecting on “ethicality” of systems

* (59:00) Transparency and ethics

* (1:00:05) Advice for picking research directions

* (1:02:58) Outro

Links:

* Sasha’s homepage and Twitter

* Papers read/discussed

* Climate Change / Carbon Emissions of AI Models

* Quantifying the Carbon Emissions of Machine Learning

* Power Hungry Processing: Watts Driving the Cost of AI Deployment?

* Tackling Climate Change with Machine Learning

* CodeCarbon

* Responsible AI

* Stable Bias: Analyzing Societal Representations in Diffusion Models

* Metaethical Perspectives on ‘Benchmarking’ AI Ethics

* Measuring Data

* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice


Get full access to The Gradient at thegradientpub.substack.com/subscribe

127 episodes

