
Testing LLMs for trust and safety

21:07
Content provided by david@georgian.io (Georgian). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by david@georgian.io (Georgian) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.

We all get a few chuckles when autocorrect gets something wrong, but autocorrect still delivers a lot of time-saving and face-saving value. Do we trust it? Yes, even with its errors. Maybe you use ChatGPT to improve your productivity: ask it a question and you might get a decent answer. That's fine; after all, it's just between you and ChatGPT. But what if you're a software company leveraging these technologies? You could be putting generative AI output in front of your users.

On this episode of the Georgian Impact Podcast, it is time to talk about GenAI and trust. Angeline Yasodhara, an Applied Research Scientist at Georgian, is here to discuss the new world of GenAI.

You'll Hear About:

  • Differences between closed and open-source large language models (LLMs), and the advantages and disadvantages of each.
  • Limitations and biases inherent in LLMs due to their training on Internet data.
  • Treating LLMs as untrusted users and restricting the data they can access to minimize potential risks (see the sketch after this list).
  • The continuous learning process of LLMs through reinforcement learning from human feedback.
  • Ethical issues and biases associated with LLMs, and the challenges of fostering creativity while avoiding misinformation.
  • Collaboration between AI and security teams to identify and mitigate potential risks associated with LLM applications.
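
A concrete way to read the "untrusted user" point above: the application, not the model, decides what can touch the data. The minimal Python sketch below assumes a SQL-backed app and lets the LLM request only pre-approved, parameterized, read-only queries; the names (ALLOWED_QUERIES, run_llm_request) are illustrative, not anything discussed in the episode.

    import sqlite3

    # Allow-list of parameterized, read-only queries the LLM may request.
    # The model never supplies raw SQL, only a query name and a parameter.
    ALLOWED_QUERIES = {
        "order_status": "SELECT status FROM orders WHERE order_id = ?",
        "product_price": "SELECT price FROM products WHERE sku = ?",
    }

    def run_llm_request(conn: sqlite3.Connection, query_name: str, param: str) -> list:
        """Execute an LLM-requested query only if it is on the allow-list."""
        if query_name not in ALLOWED_QUERIES:
            raise PermissionError(f"query {query_name!r} is not allowed for the LLM")
        # Parameter binding keeps model output out of the SQL text entirely.
        cursor = conn.execute(ALLOWED_QUERIES[query_name], (param,))
        return cursor.fetchall()

    # Example: the model asked for "order_status" on order "A123".
    # conn = sqlite3.connect("shop.db")
    # print(run_llm_request(conn, "order_status", "A123"))

The same shape generalizes beyond SQL: whatever the model asks for, validate the request against an explicit allow-list and scope its credentials to read-only access.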

Who is Angeline Yasodhara?

Angeline Yasodhara is an Applied Research Scientist at Georgian, where she collaborates with companies to help accelerate their AI products. With expertise in the ethical and security implications of LLMs, she provides valuable insights into the advantages and challenges of closed vs. open-source LLMs.


101 episodes
