This episode is sponsored by The Chief I/O.
The Chief I/O is the IT leaders' source for insights about DevOps, Cloud-Native, and other related topics. It’s also a place where companies can share their stories and experience with the community. Visit www.thechief.io to read insightful stories from cloud-native companies or to submit yours.
It's 2018 at KubeCon North America: a loud echo in the microphone, and then Ben Sigelman takes the stage.
Conventional wisdom says that observing microservices is hard. Google and Facebook solved this problem, right? They solved it in a way that allowed Observability to scale across multiple orders of magnitude to suit their use cases.
The prevailing assumption that we need to sacrifice features in order to scale is wrong. In other words, the notion that solving scalability problems requires trading away a powerful feature set is incorrect.
People assume that once you have the three pillars of Observability, metrics, logging, and tracing, everything is suddenly solved. More often than not, this is not the case.
I'm Kassandra Russel, and today we are going to discuss Observability and why it is a critical day-2 operation in Kubernetes. We will then discuss the problems with Observability, leverage its three pillars, and dive deep into concepts like service level objectives, service level indicators, and finally, service level agreements.
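To make the relationship between these terms concrete before we dig in: an SLI is a measurement of service behavior, an SLO is the internal target for that measurement, and an SLA is the external contract built on top of SLOs. A minimal sketch (all function names and numbers here are hypothetical, purely for illustration):

```python
def availability_sli(successful_requests, total_requests):
    """Service level indicator: the fraction of requests that succeeded."""
    return successful_requests / total_requests

def meets_slo(sli, slo_target=0.999):
    """Service level objective: does the measured SLI meet the target (99.9%)?"""
    return sli >= slo_target

# Hypothetical traffic numbers for one measurement window.
sli = availability_sli(successful_requests=999_543, total_requests=1_000_000)
print(f"SLI: {sli:.4%}, meets 99.9% SLO: {meets_slo(sli)}")
```

An SLA would then be the business agreement that specifies consequences (credits, penalties) when the published SLO is missed.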
Welcome to episode 6!
Moving from a monolithic world to a microservices world solved a lot of problems. This is true for the scalability of machines, but also for the scalability of the teams working on them. Kubernetes largely empowered us to migrate monolithic applications to microservices. However, it made our applications distributed in nature.
The distributed nature of these systems added complexity to how microservices interact: each service carries multiple dependencies, which raises the overhead of monitoring.
Observability became more critical in this context.
According to some, Observability is another soundbite without much meaning. However, not everyone thought this way. Charity Majors, a proponent of Observability, defines it as the power to answer any questions about what’s happening on the inside of the system just by observing the outside of the system, without having to ship new code to answer new questions. It’s truly what we need our tools to deliver now that system complexity is outpacing our ability to predict what’s going to break.
According to Charity, you need Observability because it lets you "completely own" your system: you can make changes based on data observed from it. This makes Observability a powerful tool in highly complex systems like microservices and distributed architectures.
Imagine you are sleeping one night and suddenly your phone rings.