I'm Alan, and today I'm presenting this lightning talk, Unifying Hybrid Clouds: our journey through a multi-control-plane service mesh. A little bit about myself: I know I'm younger in this photo. I'm a Site Reliability Engineer at Booking.

Just to put everybody on the same page with the terms I'll be using, a little bit about service mesh. We're all in this space, so everybody knows this, but a service mesh is a thin layer whose primary objective is enabling easy service-to-service communication. We have the data plane, where this service-to-service communication happens, and the control plane, which configures the data plane. The control plane uses information from a service-discovery database, which is also our configuration storage. So, no new information for anybody here.

Where we were at Booking: our service mesh was deployed in 2017, so we've been using it for seven years now, with Envoy as the proxy. It covers Kubernetes, bare metal, and other cloud providers like GCP and AWS, with more than 100,000 sidecars deployed. These sidecars run in 98% of the workloads that we have, more than that, actually. Our data plane availability is four nines; in reality it's higher than that, but it's very hard to measure. Our control plane was built in-house and tailored to our needs, of course, seven years ago, so now we're running into all the caveats. The limitations we have today: integrating with some cloud providers is hard, and implementing new features is costly, because we need to build everything from scratch. So we started evaluating Istio back at version 0.2, and when it reached version 1.3 we evaluated it again, and this time it covered our needs.

So now, how do we replace the control plane, the magic that controls the data plane layer? We have this complex setup, a bunch of services talking to each other, very chaotic, and there's an operator turning the knobs and making this communication happen. So how do we replace that operator with a shiny new technology?

Here's how we did it. To let services migrate from one mesh to the other without changes, we cloned our data plane configuration from the control plane we have today to Istio, so both handle traffic in the same way. Then we swapped Envoy for istio-proxy, simple as that: no change in the service, you just replace the sidecar and start using istio-proxy instead of Envoy. The idea is simple; the implementation is sometimes complicated.

So today we have something like this: part of our fleet running istio-proxy and part of the fleet connected to the old control plane. The services don't need any change, it's transparent; there's just one line of configuration change for the service owners. How do we keep these two meshes in sync? We have a service that translates configuration from the old control plane into Istio resources, and Istio connects to this service using the xDS protocol. That's how we register the services and the extra configs on Istio.

Challenges we're facing now, and have faced in the past (we're still working on them): the xDS implementation uses state-of-the-world, which is not a good fit for dynamic environments, because for every update you need to send everything. We overcame this limitation by sharding. NACK responses are not fully implemented, so configuration rejections happen later in the stack, on Istio.
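To make the translation idea concrete, here is a minimal sketch (not our actual implementation) of how a record from a legacy service-discovery database could be mapped to an Istio ServiceEntry. The LegacyService type and its field names are hypothetical; a real bridge would serve the resulting resources to Istio over xDS rather than printing them.

```go
// Conceptual sketch: translate one legacy registry record into an
// Istio ServiceEntry-shaped document. Types and values are illustrative only.
package main

import (
	"encoding/json"
	"fmt"
)

// LegacyService stands in for a row in an in-house service-discovery database.
type LegacyService struct {
	Name      string
	Namespace string
	Hosts     []string
	Port      int
}

// toServiceEntry builds a minimal ServiceEntry-like object for the service.
func toServiceEntry(s LegacyService) map[string]any {
	return map[string]any{
		"apiVersion": "networking.istio.io/v1beta1",
		"kind":       "ServiceEntry",
		"metadata": map[string]any{
			"name":      s.Name,
			"namespace": s.Namespace,
		},
		"spec": map[string]any{
			"hosts":      s.Hosts,
			"location":   "MESH_INTERNAL",
			"resolution": "DNS",
			"ports": []map[string]any{
				{"number": s.Port, "name": "http", "protocol": "HTTP"},
			},
		},
	}
}

func main() {
	svc := LegacyService{
		Name:      "payments",
		Namespace: "default",
		Hosts:     []string{"payments.internal.example.com"},
		Port:      8080,
	}
	out, _ := json.MarshalIndent(toServiceEntry(svc), "", "  ")
	fmt.Println(string(out))
}
```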
So there is limited visibility into errors with this approach, and it's very hard to deal with duplicated configuration. We need to ensure that the configuration is unique, otherwise Istio goes crazy and sends the wrong configuration to the sidecars. So yeah, that's it. I hope you liked it. I'll be available if you want to talk about it. Thank you.

He asked how many services we're running now. We have more than 2,000 services. I'm not able to give the real number, but it's more than 2,000 services right now. Yes. Right now we're running 20% on Istio and 80% on the old control plane, but we also have some services that are mid-migration, so they have some workloads running istio-proxy and some workloads running plain Envoy.

I have a question. What would you like to see happen when an Envoy NACKs config within Istio? How would you like that to bubble up to the user?

Right now, we just get a log message that something happened. It's hard to track down the resource, also because we have sharding: all the configurations are pushed at the same time, so you need to track the shard, and then you need to find the configuration. For some it's easy, like typos or something like that, but others are more complicated to find, and it takes time. Yeah, that's it.

All right, one more question? Nope, you answered them all. Thank you. Thank you.