All right, here we go. So now that we've gotten past our slide problems: hi everyone, my name is Eli Goldberg, and I'm the director of platform engineering at Salt Security. A bit about me: these are my homegrown animals, Alice the cat and Mango the dog, also known by his office name, Mongo.

About Salt Security: Salt is an API security company. We protect APIs by looking at our customers' traffic, building a model of their APIs, and identifying potential attackers. Doing so, we handle billions of daily requests, and we love Linkerd, and I'm happy to share why.

So around two years ago we were running about 20 microservices on Kubernetes, mostly Scala, and they communicated with each other mostly via HTTP and some other proprietary protocols. As the teams were getting bigger, one of the challenges we had was preventing APIs from breaking internally. We saw programmers introduce new programming languages, and it was becoming harder and harder to solve those problems from the code itself, so we started thinking outside the box and looking for tools that would make it easier.

Just to illustrate the level of complexity, this is what we had around two years ago, and over time we started seeing a more complicated picture: more programming languages, more inputs. gRPC was a great solution for us, because it allowed us to create a single centralized schema, to actually see our entire APIs in a single repository. It was much easier, because you could just call a function in a generated source-code library instead of having to hand-roll your own serialization and deserialization of JSON requests. And protobuf is much more efficient, with much smaller payloads.

The problem is that when you introduce gRPC into your stack, you lose one capability, which is load balancing. Unlike HTTP/1, gRPC is based on HTTP/2, which reuses the same connection across different requests, so Kubernetes' default connection-level load balancing keeps sending every request on that long-lived connection to the same pod, and you have to figure out another way to keep your load-balancing capabilities in Kubernetes.
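To illustrate the centralized-schema idea, here is a minimal sketch of what one such protobuf definition might look like. The service and message names are hypothetical, invented for illustration, not Salt Security's actual schema; the point is that files like this live in one repository and every language generates its client and server code from them.

```protobuf
syntax = "proto3";

package example.v1;

// Hypothetical internal service definition. Keeping .proto files like
// this in a single repository gives every team one source of truth,
// and the generated stubs replace hand-written JSON (de)serialization.
service TrafficAnalyzer {
  // Callers invoke this like a local function via the generated stub.
  rpc AnalyzeRequest (ApiRequest) returns (AnalysisResult);
}

message ApiRequest {
  string endpoint = 1;
  bytes payload = 2;
}

message AnalysisResult {
  bool suspicious = 1;
  string reason = 2;
}
```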
But there are multiple ways to solve this. There is, of course, client-side load balancing, but since we wanted to solve it in a language-agnostic way, we preferred to solve it at the network level. And since our use case was focused, we wanted to solve the gRPC load-balancing issue, our primary candidate was Linkerd.

Going to staging, it took us a few minutes to deploy to our staging cluster. Running a few load-testing scenarios showed a 250% performance increase just by introducing gRPC and Linkerd, and we thought we'd let it run for a couple of days before we went full production, just, you know, to gain some confidence. Production end-to-end work was about five days. It was mainly about switching Linkerd to run as a highly available solution, and we wanted to deploy it in a more GitOps approach, so we used the Helm charts wrapped with Terraform.

We set out to solve a simple, single problem we had, which is gRPC load balancing, but we actually gained so much more. Our connections were now mTLS-encrypted end-to-end between our pods. We started seeing a whole network: all the communication between the services was now in front of our eyes. It also opened the door for us to a few more interesting features that Linkerd has to offer, which we've already implemented, such as gRPC retries and per-team monitoring alerts, so that as a platform team we can actually tell teams: "Hey, you have a problem here.
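As a sketch of what "Helm charts wrapped with Terraform" can look like, here is a hedged example using the HashiCorp Helm provider's `helm_release` resource and the public Linkerd charts. Chart and repository names follow recent Linkerd versions; the values file and certificate paths are placeholders you would supply yourself (Linkerd's Helm install requires you to provide identity certificates), and this is an illustration of the approach, not our exact configuration.

```hcl
# Sketch: GitOps-style Linkerd install, Helm charts driven from Terraform.
resource "helm_release" "linkerd_crds" {
  name             = "linkerd-crds"
  repository       = "https://helm.linkerd.io/stable"
  chart            = "linkerd-crds"
  namespace        = "linkerd"
  create_namespace = true
}

resource "helm_release" "linkerd_control_plane" {
  name       = "linkerd-control-plane"
  repository = "https://helm.linkerd.io/stable"
  chart      = "linkerd-control-plane"
  namespace  = "linkerd"
  depends_on = [helm_release.linkerd_crds]

  # Placeholder values file enabling high-availability settings.
  values = [file("linkerd-ha-values.yaml")]

  set {
    name  = "identityTrustAnchorsPEM"
    value = file("ca.crt") # placeholder: your trust anchor certificate
  }
}
```

One nice property of this shape is that the mesh version and its HA settings are reviewed and rolled out through the same pipeline as the rest of the infrastructure.

Linkerd's gRPC retries are configured per route. Below is a hedged sketch of a ServiceProfile; the service name, namespace, and route match the hypothetical schema above rather than a real service, while the field names (`isRetryable`, `retryBudget`) are Linkerd's.

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Hypothetical service; profiles are named <svc>.<ns>.svc.cluster.local.
  name: traffic-analyzer.prod.svc.cluster.local
  namespace: prod
spec:
  routes:
  - name: POST /example.v1.TrafficAnalyzer/AnalyzeRequest
    condition:
      method: POST
      pathRegex: /example\.v1\.TrafficAnalyzer/AnalyzeRequest
    # Mark this route as safe to retry on failure.
    isRetryable: true
  # Cap the extra load that retries may add to the service.
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
```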
Go check it out," without them even needing to know which service or which call is responsible. So if that was the complexity we were talking about, now we're starting to see all the connectivity between the services, which ones were failing, and a fairly easy way of pointing the finger at where a problem persists in the cluster.

Aside from production, we started seeing teams utilize Linkerd for development environments as well, to actually verify the correct behavior of their services and their deployments before they reach production, which is pretty cool to see. We're super excited about the upcoming features: there's circuit breaking, which was just released in Linkerd 2.13, and things like multi-cluster, multi-cloud, Linkerd at the edge, and canary deployments as well.

So the fact that we're not Linkerd experts, but we were able to do all that, means that Linkerd is a super simple but powerful tool, and it has an incredibly welcoming and supportive community, and we're grateful for that. If you have any questions, please, this is my Twitter handle, or find me via email. Thank you.

Thank you. Once again, I'm gonna ask LA to