LLM Observability: The Breakdown
The New Stack Podcast
25:51

LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, founder and publisher of The New Stack, and Janakiram MSV, principal of Janakiram & Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses components such as LLMs, vector databases, embedding models, retrieval systems, reranking models, and more. The objective of LLM observability is to ensure that users can effectively extract the outcomes they want from this complex ecosystem.
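
To make that stack concrete, here is a minimal sketch of how those components typically fit together in a retrieval-augmented pipeline. Everything in it is illustrative: the names (embed, vector_store, rerank, llm_generate, top_k) are hypothetical placeholders, not tooling named in the episode.

```python
# Hypothetical retrieval-augmented generation (RAG) flow showing how the
# stack components fit together. Every callable here is a placeholder for
# a real provider: an embedding model, a vector database, a reranking
# model, and an LLM inference engine.

def answer(question: str, embed, vector_store, rerank, llm_generate) -> str:
    query_vec = embed(question)                            # embedding model
    candidates = vector_store.search(query_vec, top_k=20)  # vector database
    context = rerank(question, candidates)[:5]             # reranking model
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    return llm_generate(prompt)                            # LLM serving/inference
```

Each stage is a separately operated component that can fail or degrade on its own, which is why the observability discussed next has to span the whole stack.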

Similar to infrastructure observability in DevOps and SRE practices, LLM observability aims to provide insight into the LLM stack's performance. This includes monitoring metrics and signals specific to LLMs, such as GPU/CPU usage, storage, model serving, chains and agents in applications, hallucinations, span traces, relevance, retrieval models, latency, and user feedback. MSV emphasizes the importance of monitoring resource usage, model catalog synchronization with external providers like Hugging Face, vector database availability, and the inference engine's functionality.
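
As a concrete illustration of the span-trace, latency, and user-feedback signals mentioned above, the sketch below records an LLM call as an OpenTelemetry span. Only the opentelemetry-api calls are real; the span name, attribute keys, and the call_llm client are assumptions for illustration, not a standard schema.

```python
# Minimal sketch: wrap an LLM call in an OpenTelemetry span so that latency,
# prompt/completion sizes, and user feedback become queryable signals in any
# OpenTelemetry-compatible backend.
# Assumptions: `call_llm` is a hypothetical client function, and the
# attribute names below are illustrative, not a standard convention.
import time

from opentelemetry import trace

tracer = trace.get_tracer("llm-observability-sketch")

def generate_with_tracing(call_llm, prompt: str) -> str:
    with tracer.start_as_current_span("llm.generate") as span:
        span.set_attribute("llm.prompt_chars", len(prompt))
        start = time.monotonic()
        completion = call_llm(prompt)  # hypothetical LLM client call
        span.set_attribute("llm.latency_ms", (time.monotonic() - start) * 1000.0)
        span.set_attribute("llm.completion_chars", len(completion))
        return completion

def record_feedback(rating: int) -> None:
    # Attach user feedback (e.g. thumbs up/down mapped to an int) to the
    # current trace so it can be correlated with the generation span.
    span = trace.get_current_span()
    span.set_attribute("llm.user_feedback", rating)
```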

He also mentions vendors in the LLM observability space, including Datadog, New Relic, SigNoz, Dynatrace, LangChain (LangSmith), Arize AI (Phoenix), and TruEra, hinting at a deeper exploration in a future episode of The New Stack Makers.

Learn more from The New Stack about LLMs and observability

Observability in 2024: More OpenTelemetry, Less Confusion

How AI Can Supercharge Observability

Next-Gen Observability: Monitoring and Analytics in Platform Engineering

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
