OK, hi everyone. Welcome. I'm going to be talking about Istio, and Spring, and MicroProfile, and Erin's waving her arms at me from the back. Hi Erin. And my name is Ozzy Osbourne. Yes, that's actually my name, just for anybody else who wants to ask me today. We're getting on quite well. So let's go back to that one. Apparently, we've all been asked to add this one to our foils, so I have to read it out. Please note the locations of the surrounding emergency exits and locate the nearest exit sign to you. In the event of a fire alarm or other emergency, please calmly exit to the public concourse area. Emergency exit stairwells leading to the outside of this facility are located along the public concourse. For your safety in an emergency, please follow the directions of the public safety staff. OK, now we can get back to the real stuff. So, show of hands: how many people have heard of Istio? OK, keep your hand up, or put your hand up again, if you've used it. OK, how many people have heard of Spring? Same deal: how many people have used Spring? Thought so. How many people have heard of MicroProfile? And how many people have used it? OK, that gives me a rough idea of where we're headed. So I should explain a little bit about my motivation behind giving this talk. We've been looking at the way that applications have been evolving over time, and at the way that the business logic has been becoming clearer and clearer, until it's just about what the business is about. If you think about the way that applications used to be put together, we used to have vast swathes of code. All of it would be custom, and your application would run if you were lucky. But when you went on to your next application, parts of it would work, parts of it wouldn't, and you'd have to figure out how it came together. And then libraries came along, and we decided to pull out all the common stuff. 
And then you end up with basically little libraries that you can transfer between projects. And then we realized that enough of these libraries were common that we were going to stick them all together and call them a server. So you had Java EE servers, and suddenly you had applications that you could run on all these different servers, and it didn't matter where you transferred your app to, it would still work. And then people started looking at Java EE and saying, well, that's not enough, I need to add more to this. So we have things like Spring come along as a framework and start providing additional APIs over the top. And then people start writing their applications to those APIs because they look good. And then lastly, we've got this concept of a platform. A platform seems very similar to a framework, but it's essentially providing capabilities to your application that you no longer need to take care of. All of these have in common that they're moving responsibilities that you previously took care of out of your application and placing them somewhere else. This leads to your business logic becoming clearer. But importantly, it has impacts on how you design your application. And it's that kind of stuff that I wanted to look at and be able to show you, by comparing Istio against Spring and against MicroProfile. But to be able to do that, we've got to understand the capabilities of these platforms. And judging by the show of hands, not many of us have got to grips with Istio yet, which is why the first half of this presentation is going to be showing you a little bit about how it works and how the architecture hangs together. So, Istio describes itself as an open platform to connect, manage and secure microservices, which is great. It's got a web page; you can go read the same statement there. And it will mean just the same if I say it 20 times; it's not going to help much. It's a service mesh. So you've got a whole load of services. 
They all talk together, and Istio provides some magic that helps glue this together and provides some facilities that make it easier to do this stuff. Some of that stuff includes intelligent routing and load balancing. So you can have things in Istio where your request comes in and you decide where it's going to go based on the content of the request, or based on pretty much any attribute you can imagine. If you want to route it to somewhere that's lightly loaded, you can go that way. If you want to route it based on which IP address it comes from, you can do that kind of thing. Load balancing: Istio can decide which instance of the service you'll route to based on more algorithms than just round robin, because it's got knowledge of how long the last one took to respond. And fault tolerance: Istio is capable of sitting there and saying, well, actually, the service you're trying to talk to hasn't responded for a while; I don't think we should be trying to talk to that one anymore. You get into things like policy enforcement, which is: should this service even be allowed to talk to that other service? And all of this comes via the platform. Then we get onto stuff like metrics and logging and observability and visualization, which all come for free from the fact that, by this point, Istio is deeply embedded in your mesh. It can see all of the stuff happening, so why not pull this stuff out and make use of it? And lastly, I'm going to go into some of the stuff about service identity and security. Because Istio is that far down in your stack, it's got a concept of the identity of your services, and it can provide security based around the identity of those services. 
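To give that content-based routing a concrete flavour, here's roughly what a routing rule looks like in Istio's networking API. This is an illustrative sketch, not something from the talk: the service and user names are invented, and the exact schema has changed between Istio releases.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews              # the service this rule applies to
  http:
  - match:
    - headers:
        end-user:
          exact: erin    # requests carrying this header go to v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:               # everybody else lands on v1
    - destination:
        host: reviews
        subset: v1
```

The point is that this lives in the control plane as typed configuration, not in any of the services' code.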
So instead of having security that says this user is allowed to do this thing, you can have the concept that this service is allowed to do this with this other service. So now we start getting a little bit deeper. This is the Istio architecture diagram. It's present on their website. It explains pretty much everything you need to know about Istio in one little diagram. But because it's not much use without some more words, we should probably start looking at it. They divide their stuff loosely into a control plane and a data plane. The data plane you're probably all familiar with. The data plane is basically a way of saying: here's a bunch of services, which you can see as service A and service B at the bottom, and these services talk to each other. They're exchanging data; they're getting on with the usual business in the life of a service. But the control plane is something you might not have heard of. The control plane is one of those things that I've seen best described recently, in a blog, as: the control plane in most cases today is you, the human. You're the person who's responsible for defining the configuration that you push out to all of these services, so that they know how to talk to other services, what to do if a service doesn't answer, and how long you're supposed to wait for a service to answer before you give up. All that configuration information today is what you, the human being, enter into various config files and scatter to the four winds when you deploy your services, in the vague hope that you maintain synchronization between all of these when they're actually in production. Istio decided that basically this was not a good plan, and wanted to come up with a way that this stuff could actually be controlled from a central location and managed via strongly typed configuration. So you can't accidentally put an integer where you're supposed to have a URL for the service you talk to next. 
So we get on to the next box, which is this Envoy sidecar. You heard a little bit about Envoy from Erin earlier today if you were in one of her talks. Istio works because it's capable of intercepting all the traffic in and out of all the services. Each service individually has its own little Envoy stuck next to it, and it's delivered into this service mesh by means of a sidecar mechanism. The sidecar basically means that the container that you're pushing out with your app gains another one. That Envoy sits next to it, takes over all the traffic from your service, and then listens for configuration information that comes in from Mixer, and that's what tells it what it's allowed to do. And there's the Pilot communication stuff, which says how it does service discovery and how it should talk to other services. And these are extensible, pluggable mechanisms. Istio was designed to be platform-neutral, but the first implementation of it that everybody's familiar with is the one that runs on Kubernetes. That's the one I'll be showing in more detail. But I know that CF is looking at integrating this stuff into their stack, and I believe that CF haven't just got one version of some of these pieces; they've had to create a few more different pieces along the way. But Envoy's very important because, sitting in front of all of the traffic that comes in and out of your service, it's capable of implementing a lot of the stuff that you may previously have done inside the application itself. So, back to the first part where I was explaining how this leads to cleaner business logic: it means your application no longer needs to explicitly take care of things like load balancing or circuit breaking or health checks or fault injection, which is a fun one. Fault injection is one that's kind of unique to Istio; we'll cover it a little bit more. 
The idea is basically that you can deliberately fail something between some services to test how the rest of your application copes, before a failure actually occurs at some point in production. So, talking about sidecars: over in Kubernetes, if you're not familiar with this stuff, the concept of a service is associated very strongly with the concept of a thing called a pod. A pod is basically your unit of function that you've deployed. And if you were to horizontally scale that service so you have multiple instances of it present, you'll end up with multiple pods present, which is why, in the little diagram that I've got here, there are these little squares stacked going backwards. Now, when you've played with this stuff in the past, you would have a container that would sit in your pod, and that container would be your application. And if you've played with Kubernetes a little bit, you'll know that usually you have this one-to-one relationship between your pod and your application container running in the pod. That changes quite quickly when you start looking at using Istio, because Istio is using Kubernetes' ability to run more than one container in the same pod. So when you deploy your application container into a pod, you end up with an Envoy container deployed alongside it. They're still part of the same pod; they share the same lifecycle. But in deploying the Envoy container into the same pod, it has taken over all of the network traffic for the whole pod, which is now routed through Envoy. So Envoy has full visibility over everything, and manages to achieve interesting things via that. So we get on to talking a little bit about Mixer and Pilot. Mixer's the part that has the interesting role of basically pulling all the different fragments of config together, blending them, and then sending them down to the Envoys. 
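Before we go deeper into Mixer, here's what that sidecar arrangement looks like in a pod spec, trimmed right down. This is an illustrative sketch: the app name and image tags are invented, and in practice the injection tooling adds the second container (plus init containers) for you rather than you writing it by hand.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fred
  labels:
    app: fred
spec:
  containers:
  - name: fred                 # your application container, unchanged
    image: example/fred:1.0
    ports:
    - containerPort: 8080
  - name: istio-proxy          # the injected Envoy sidecar
    image: istio/proxyv2
    # init containers (not shown) rewrite the pod's iptables rules so
    # that all traffic in and out of the pod flows through this proxy
```

Same pod, same lifecycle, but every byte in and out now passes through Envoy.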
So Mixer's role is pretty much to interpret what you've asked for your services to be allowed to do or not allowed to do. Say you want a service to only be allowed to invoke other services on a whitelist; that would be an example of a policy that you would give to Mixer. And Mixer takes all of these different policies that you slap together from various places, aggregates them into what's required for each instance of Envoy, propagates them out to the Envoys, and keeps all those Envoys up to date. Oh, and I should cover telemetry: because Envoy is sitting there, it's also aware of all the traffic flowing in and out all the time, so it can collect that across your entire service mesh and stream it out to a Prometheus endpoint. So you can collect and aggregate that data and visualize it and do useful things with it. Pilot, on the other hand, has a similar role in that it's another extension point. Pilot's job is to interpret that configuration slightly and be able to implement service discovery in an abstract way. It's basically there to try and hide the fact that Istio will run across multiple platforms. For me, it's very easy to imagine that Istio is basically just a chunk of Kubernetes. But when you start looking at how Istio is going to be implemented over the top of CF, they're talking about having a CF version of Pilot. And I think at the moment they're talking about having a Copilot as well, because they've decided that Pilot's lifecycle wasn't quite what they were after in every case. So Pilot takes care of service discovery and traffic management, so A/B tests, where you can basically route a certain percentage of your traffic to one service and another percentage out to another. And you can keep an eye on it and see whether you think it's any good or not. 
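That percentage-based split is expressed as route weights in the same routing API. Again this is a sketch with invented names, not a rule from the talk: something like this would send 90% of traffic to v1 and trickle 10% to a candidate v2.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fred
spec:
  hosts:
  - fred
  http:
  - route:
    - destination:
        host: fred
        subset: v1
      weight: 90      # the version you trust
    - destination:
        host: fred
        subset: v2
      weight: 10      # the version you're keeping an eye on
```

Nudging the weights towards 100/0 or 0/100 is then just a config change, with no redeploy of either service.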
You can do canary deployments, where you can throw things out and see whether they're actually any good, or keep an eye on them and see if they're still alive. It handles resiliency. So Pilot can automatically configure an Envoy so that you can have services that, if they don't respond within, say, 15 seconds, will time out then, rather than waiting for the full two-minute TCP socket timeout. Retries: if the service doesn't answer, it can retry under the covers for you. And again, with all of this, your application is not aware of what's going on. It placed a request; the request will either come back with a valid result or it'll come back with an error eventually. Pilot's job could be summarized quite loosely as being there to look after all of those little Envoy instances: to collect the configs, push them out there, and keep them all in sync. That leaves the last part of our architecture diagram, which is this Istio-Auth. And that part is basically responsible for being able to secure the services. It's capable of creating mutual TLS connections between all of the services transparently. All the data is, of course, flowing out through Envoy, so of course Envoy can set up these connections between places. But if you've tried configuring even just HTTPS between a nest of microservices at any point, you'll quickly have found that certificates become your enemy. You've got to set up probably your own custom CA, or you've got to buy certificates for every service, and then you've got to make sure that you've got the right certificates in the right places. And then what happens when you need to do certificate rotation? What happens when a key expires? What happens if you need to revoke a certificate because it ended up somewhere it shouldn't, like GitHub? Then you have to think through how you handle all this. And Istio makes it quite simple. You just send HTTP from your service; it wraps it up into a TLS connection. 
It manages all of the certificates for you with its own CA, and all of the connections are then secured, not just with one-way TLS, but with mutual TLS: client-certificate-based HTTPS. And if you've ever tried setting that one up, it's a world of pain even greater than just going for normal HTTPS. But this way it's free. It happens for you; your application doesn't care. You can revoke certificates. You can even shut down which routes you want by using Mixer policies. So, a lot of power. Hopefully at this point, when this diagram comes up, and when you see this kind of thing floating around on their website, you'll have an idea of roughly what each of these different parts is there for and what it's doing. So that's pretty much it for where I wanted to get to with the overview of Istio. Now we start looking, for each of the different capabilities that's there, at how this stuff compares between Istio and Spring and MicroProfile. We're going to look at a couple of different things. The first one's going to be service discovery. We're going to look a little bit at how we can blend different approaches, so you can mix together Spring and Kubernetes, and what that means, and what happens if you try to throw Istio into that mix as well. And we're going to look at fault tolerance, and then security, and then tracing, logging and metrics. So, starting off with service discovery. Service discovery within Spring usually centers around Eureka. Eureka's borrowed from the Netflix stack, and it's a service registry. It requires that each of your services, as they boot up, contacts the service registry and says: hi, I'm service Fred, I'm over here. The service registry remembers that. And then, via Ribbon and other client plugins, or via lookups using DiscoveryClient, your app code turns around and says: I need to talk to service Fred. And Eureka turns around and says: yeah, I know three instances of Fred. Which one do you want? 
And the client picks which one it wants, the client goes out and makes the connection, and then everything's finished and your request completes. Istio is the other way around, because Istio is using Envoy. Istio is actually closer to the way that Kubernetes does its stuff. In Kubernetes, when you want to connect to a service, you turn around and say: hey, I want to talk to service Fred. And Kubernetes knows where the Freds are, because when you deployed them to Kubernetes, you told Kubernetes: this is a service, it's called Fred. If you horizontally scale it, so you've got 20 copies of Fred, then Kube knows there are 20 copies. But critically, the client doesn't care. The client is never told about all 20 copies of Fred. In Kube, you turn around and say: I need to talk to Fred. And Kube turns around and says: yeah, okay, I'll connect you to a Fred. I'll find one for you. I'll find you something that matches the definition of what you are asking for. And that's critical in Kube, because it isn't just down to a single thing; it matches based on the claims that you made when you deployed your service. So you could deploy a service and claim that it satisfies Fred and a bunch of other labels, and then you would be talking to one of those. Because of that, you end up with a situation where the application no longer cares about this stuff. The application doesn't care about where it's connecting to; it's unaware of where it's connecting to. Registration is automatic, and the platform takes care of where requests go. Istio plays into that, because it's sitting there kind of taking over from Kubernetes' default service discovery, so that when you turn around on an Istio-enabled Kubernetes stack and say, I want to talk to service Fred, Istio can turn around and say: well, are you allowed to talk to service Fred? And if the answer is no, it will just turn around and say: well, there isn't a service Fred for you to talk to. 
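That claims-based matching works through labels and selectors. A Kubernetes Service selects any pod whose labels match, and the client only ever resolves the Service's stable name; the names here are invented to match the running example.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fred           # clients just resolve "fred"; they never see pod IPs
spec:
  selector:
    app: fred          # any pod labelled app=fred satisfies this Service
  ports:
  - port: 80           # the port clients connect to
    targetPort: 8080   # the port the app container actually listens on
```

Scale to 20 pods with that label and the Service still looks like one Fred from the client's point of view.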
Or it could actually be that there are three different versions of Fred. You might have Fred one, Fred two, and Fred three deployed concurrently. And then Istio will make the choice as to which Fred you're going to connect to based on the policies that you've told it to use. So you could have it so that if your user ID is Graham, you'll be connected to Fred two; if your user ID is Erin, you'll be connected to Fred three; and everybody else gets stuck with Fred one, because that's the one that's in production and the other two are in beta. So it's an interesting way of looking at it: no longer does the application have to care. It's pushing the configuration of how service discovery functions out to an ops level, and all of that's done by the control plane. So it's all done using proper config files, instead of little mishmashed bits of YAML and properties that you've scattered to the four winds and deployed to containers, where you can't remember what you set them to, or that you've put in environment variables that you have to go to various web consoles to edit, keeping everything in sync by hand. This way makes sense; the other way is tricky. A quick note on MicroProfile, for the one person I know who deeply cares about it in the room at the moment: MicroProfile doesn't have anything to say about service discovery. Now, that's not saying it's lacking a feature here, or that it's not doing anything. It's basically saying that MicroProfile just doesn't need to worry about service discovery, because it's somebody else's job. If you're using a MicroProfile app somewhere in a deployment, you're going to have service discovery, but it's just not down to MicroProfile to make that choice. So, talking about service discovery: if we've got Netflix apps that are using Eureka and you need to bring them over and put them on Kubernetes, what do you do? Do you rewrite the entire app? 
Because, as Erin was showing earlier, the code that Netflix uses and the code that Spring uses to say that you're going to use a Eureka server is actually very specific. It says @EnableEurekaServer or @EnableEurekaClient. That's not just a statement saying I want to use service discovery; that's a statement saying I want to use service discovery on a particular technology. And quite often you can end up not just injecting a DiscoveryClient, which is the nice, standardized, abstract way of talking to discovery; you can see Spring code that has injected Eureka clients. That means that code could be making client-side load balancing decisions. And that's interesting, because if it relies on that, if it was critical for that code to choose which instance it connected to, say it was manually implementing sticky sessions, then you have to maintain that behavior as you roll out to another architecture like Kubernetes. So, just for entertainment value, I rolled out a Kubernetes cluster and then deployed Eureka to it. Because Eureka's a container like any other, I pushed it out and set up a cluster of Eureka servers inside my Kubernetes cluster. And then I started setting up applications on Spring, and I told them to start talking to Eureka, and you get into this wonderful world of pain. Because you've got your applications on the one hand saying, hey, where's the Eureka server? But at this point, Istio's not in play, right? The apps don't know where the Eureka server is, so there's no way it can get in the way of it. So what happens is Kubernetes turns around and says, the Eureka server's over there, performing service discovery for your Spring app that wants to use Eureka-based service discovery. So you've bootstrapped your Spring app now, so that it's found its Eureka server. The response comes back from Eureka: oh, I'm here. And you turn around and say, I need to talk to service Fred. 
Tell me where all the service Freds are. And here's where things get really, really twisted, because your Eureka server turns around and says: I know of three Freds. There's that one sitting there, that one sitting there, and that one sitting over there. But I'm going to give you the exact address of where each of these Freds is, and then you're going to decide which one you want to talk to. So you go, great, I'll talk to Fred two. And you go off to start talking to Fred two, and your Kubernetes cluster says: wait, you're trying to talk to a Fred. I know what you're doing here. And it connects you to a random Fred, regardless of which one you wanted, because all of the Freds one, two and three were actually declared as a service of type Fred. When you were given your URL from this Eureka service registry, you were given a URL to this Kubernetes service Fred, which is itself an abstract service discovery proxy that goes off and finds an actual Fred to connect you to. So your attempt to talk to a particular Fred has now been completely thwarted, and you're talking to a random Fred again. So you think: I know, I'll get round this. I'll change the way my clients register with Eureka. Instead of registering using the Kubernetes service URL, I'll have them register using their pod IP. That's awesome. I tried this. It's rather entertaining. You end up with a Eureka service registry that now knows exactly where each of the Freds is, to the T, and you turn around and say, I want to talk to service Fred. Eureka gives you the IP address, you connect to that pod IP, and you think: excellent, I've solved it. Everything's working. And then you deploy Istio to the cluster, and you set Istio up to say that when user Graham comes through, they're only allowed to be routed to Fred two. And your client code comes along and says: I need to talk to Fred. My user ID is Graham. 
And Eureka turns around and says: oh, well, Fred is basically over there. That's the Fred you want. I'll connect you exactly to that one. And Istio at that point should turn around and say: but you're supposed to talk to Fred two; I'm going to make you talk to Fred two. Except it can't, because you're not asking for Fred anymore. You're now asking for an exact IP address of a service within your cluster. So you bypass the whole of Istio in one step. So, is it possible to deploy this? Yes. Is it practical? No. You can make this stuff work, with great caveats. You can pull it together, but actually administering a system that's sitting like that is very tricky. If you let people who are expecting to administer a Kubernetes system look at that, then you're in for even worse pain, because they're going to come along assuming that certain things are still true. If you deploy it to a cluster where your ops team are reliant on Istio for that, they're going to hate you, because you're just going to sidestep all of the nice policies they start creating. That said, it's not all doom and gloom. There are ways to integrate these kinds of frameworks together. And although a lot of Spring does have things like @EnableEurekaServer, it also has things like DiscoveryClient and Ribbon and Zipkin and ConfigMap support, bits and pieces of Spring configuration that map really, really well to existing functionality within Kubernetes. And there's a project called Spring Cloud Kubernetes that offers Kubernetes-specific implementations of Spring APIs, so that you can get hold of them and inject them with the Spring autowire stuff. You can inject yourself a version of a Spring DiscoveryClient. 
And if you ask that: hey, I need to talk to service Fred, then it's going to delegate down to the Kubernetes service discovery, which, in the case of a cluster that's got Istio present, will delegate to Istio, at which point your Spring app is talking natively through Istio and the whole mesh comes back together again. The trick here, of course, is that you're talking to the generic DiscoveryClient and not to a Eureka-specific client. It's a lot of fun. Ribbon is client-side load balancing; if you plug that in, there's a plug-in there that will also route that out through Kube. Zipkin's for distributed tracing, and there's a plug-in again from Spring Cloud Kubernetes that connects that across the Kubernetes trace APIs. And ConfigMap is a Kubernetes concept for a key-value store, and there's a Spring Cloud Kubernetes plug-in that allows you to dynamically access the configuration elements from your Spring Cloud stuff by just injecting them like you would normal Spring properties. So that gives you a way to configure your Spring application without needing to have too much data inside an application.yaml somewhere. You can store it in a Kubernetes ConfigMap and transparently inject it directly into your app as if it was just any other Spring configuration. I wouldn't recommend trying this with mutual auth turned on, mutual TLS for Istio, at the moment. There are issues to do with which ports are allowed and how the data communicates that will get you into trouble, so it's best avoided. I know the people who are working on it at the moment, and they're trying to put it back together, honest. So, we should look quickly at fault tolerance. If you're looking at Spring, Spring's got Hystrix. How many people know of Hystrix? A few, quite a lot, cool. Hystrix handles all the wonderful things there. 
So, timeout: you want to make sure that if you try to talk to a service and it doesn't answer within X, you eventually time out, before your socket ends up timing out. Retry is easy enough: if you try to talk to a service and it doesn't answer the first time, maybe it'll answer if you keep trying. It's just like ringing the doorbell, right? Either somebody will answer, or they'll give up and decide that you're horrific and won't answer you ever again. Fallback: Erin covered this one. If you want to talk to a service and it doesn't answer, maybe there's another service you can talk to that will give you another answer. Works well for Netflix if they want to list a movie. It's not so great if you want a stock price; trying to find a fallback service for the stock price of your company isn't such a great plan. Circuit breaking: you've been talking to this service for a bit, and other people have been talking to this service for a bit, and it's not responding, so what are you going to do? Well, if the service is struggling, if the service has got all of the threads available to it working on trying to give answers, the worst thing you can do is let any more requests land on it, because it will never catch up. So circuit breaking will basically let you turn those requests off, give the service a chance to recover, and then maybe everything will come back together. Bulkheading is where you basically say: I'm only going to allow so many people to talk to the service at one time. And the thing about these capabilities is they're not just Hystrix's. They're present within MicroProfile, which has its own spec called Fault Tolerance 1.0 that provides all those capabilities again. And then, when you go over to Istio, the same lot's all there again as well. The key difference, of course, is that when you're doing this from MicroProfile or from Hystrix, you're doing it with code inside your application. 
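To make that "code inside your application" point concrete, here's a toy, framework-free sketch of the timeout-retry-fallback plumbing. This is not Hystrix's or MicroProfile's real API; all the names are invented. It's just the shape of logic that those libraries wrap up for you, and that Istio instead pushes out into the Envoy sidecar.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Toy illustration of timeout + retry + fallback, the kind of plumbing
// that Hystrix, MicroProfile Fault Tolerance, or Istio's Envoy sidecar
// would otherwise handle for you. Invented names, not any library's API.
public class RetryWithTimeout {

    static <T> T call(Supplier<T> service, Duration timeout, int attempts,
                      Supplier<T> fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int i = 0; i < attempts; i++) {
                Future<T> f = pool.submit(service::get);
                try {
                    // Bound each attempt ourselves rather than waiting
                    // on the much longer TCP socket timeout.
                    return f.get(timeout.toMillis(), TimeUnit.MILLISECONDS);
                } catch (Exception e) {
                    f.cancel(true); // timed out or failed: retry or fall back
                }
            }
            return fallback.get(); // every attempt failed
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A "service" that never answers in time, so we land on the fallback.
        String result = call(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException e) { }
            return "real answer";
        }, Duration.ofMillis(50), 2, () -> "fallback answer");
        System.out.println(result);
    }
}
```

Every one of those decisions (how long to wait, how often to retry, what to fall back to) is the sort of thing Istio lets you move out of the application and into control-plane configuration.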
In Hystrix's case, you create a thing called a HystrixCommand that wraps the invocation of the service you're trying to talk to, and the HystrixCommand specifies the behavior: whether you want to retry, whether you want to fall back, and so on. But the point is that if you need to move your code off of Hystrix to somewhere else, you've got to rewrite all that code. Istio has almost no code in the app; the behavior's at the network level. You try talking to a service, and the network, according to the policies that you defined via the control plane, decides whether you should be retrying that, or it will retry the request for you. It decides whether other behaviors get layered onto it. So your application has almost no code, and I say almost because, of course, if the request actually fails, you still need to handle the failure at the end of the day. It's not sufficient to just magically have your platform find out that the service isn't there; you're still going to have to deal with the fact that, if it's not there, that's your job to deal with. And then fault injection. Fault injection is the great one that comes free from Envoy, so you can actually have things lie. Ever wondered how well your microservice mesh would survive if one of your services started randomly returning a 404? Perhaps one of your microservices already returns a 404 occasionally. You've seen what happens, yes? Right. So the key thing here is you can do this in a safe test environment, before it gets to the point where you need to do it in production, and debug what happens if a service falls over, takes too long to reply, or isn't answering. So Istio lets you do that kind of fault injection and simulate failures within your graph. Security's the next one. Spring Security is great fun. I've had the joy of digging into it incredibly deeply over the last six to eight weeks. 
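Before moving on to security, here's roughly what that fault injection looks like on the Istio side. It's just more routing configuration; this sketch (invented service name, schema varies by release) makes 10% of requests to a service lie with a 404, and the same route block is where timeouts and retries are declared too.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fred
spec:
  hosts:
  - fred
  http:
  - fault:
      abort:
        httpStatus: 404   # 10% of requests to fred get a fake 404
        percentage:
          value: 10
    route:
    - destination:
        host: fred
```

Apply it in a test environment, watch what the rest of the mesh does, then delete the resource; no service was redeployed or even aware it was lying.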
It's a combination of a bunch of building blocks that are incredibly flexible. You can make them do anything you want. And the cost that comes with that is that you've basically got a huge cognitive load. You have to understand how the framework works. You have to understand what it is you're trying to achieve with it. You have to understand the implications of what you're doing, and whether you end up with something that's ultimately secure is your problem. It's very easy to end up with configurations that bypass the whole stack just by sending the wrong response at the wrong layer. But at the end of the day, it does work. It's flexible, it'll do whatever you need, and the code is part of your app. MicroProfile offers another approach. MicroProfile has JWT propagation. That allows you to mark methods on your services and say that this one requires authentication by a user that has a given role, and the role is retrieved from the JWT that's used to authenticate the request. But again, you're annotating code in your application with your security concerns. Istio comes along and says, well, knowing if a service should talk to another service isn't really something I want in the code. That's kind of an ops-level concern. I want to be able to configure that and say that all of the services that talk to each other are going to be secured at the traffic level using mutual TLS. And what I need to know now is: should this service be allowed to talk to that service? And that's a policy decision that's fed in by Mixer. So lastly, we come to tracing, logging, and metrics. Spring has a handy little project called Actuator. You turn that on, you get a whole load of metrics and a whole load of health endpoints, and it's quite easy to play with. You can customize them and make them do what you want. Sleuth handles end-to-end tracing within Spring. So if you want to see a request flowing all the way through the different services end-to-end, Sleuth will handle that.
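For a sense of how little ceremony "turn that on" involves, here is a sketch of exposing Actuator endpoints in a Spring Boot 2.x application; the property names are from Spring Boot 2.x (an assumption about the version in use, since 1.x used different properties), and the chosen endpoints are just an example:

```properties
# application.properties: expose selected Actuator endpoints over HTTP
# (Spring Boot 2.x property names; endpoint selection is illustrative)
management.endpoints.web.exposure.include=health,info,metrics
management.endpoint.health.show-details=always
```

With the actuator starter on the classpath, that is roughly all it takes before /actuator/health and /actuator/metrics start answering.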
MicroProfile, on the other hand, has very similar capabilities within Metrics 1.1 and OpenTracing. But again, Istio's the odd one out here, because it's not inside your application. It's sitting outside, but because it's seeing all the traffic, it can achieve much the same goals. You can still see a request flowing end-to-end, so you can collect all of that stuff. You can still collect metrics from all of those Envoy endpoints and have those routed through to Prometheus, where you can render them in dashboards. And you can do things like this because Envoy knows everything about where your traffic's flowing. So it can generate you a real-time graph of where your services are and how they're connected, and start showing you where the data is within that mesh, showing you which services are receiving thousands of requests a second versus the one that's sitting there not doing much. I like that. I think it's kind of cool. Not just because I built something that looked rather similar a few years back. But that comes to the end of where I wanted to get to with this. So if you've got the need to go and find more, you can go to Istio at istio.io, Spring at spring.io, or MicroProfile at microprofile.io, or you can come to Ozzy, but not at .io, because I'm not ozzy.io; that gets very confusing. So, any questions? One question, yes. Yes, Istio comes with an Ingress module, on Kubernetes at least. So there's Istio Ingress, and if you deploy it and configure rules on Istio Ingress, then you can apply the same policies that you have between services to the traffic that's entering your service mesh. So you can say things like: requests that come in with a particular header must be routed to the following service at this version, and so on. So you can control all the flows across the entire graph. Cloud Foundry's integration with Istio isn't complete at this stage, so I can't speak to the capabilities. You can Google it as well. Okay.
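The header-based routing described in that answer can be sketched with Istio's documented Gateway and VirtualService resources; the service name, subsets, and header are all hypothetical:

```yaml
# Hypothetical sketch: route edge traffic carrying a particular header
# to a specific version of a service, everyone else to the default.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mesh-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "*"
  gateways:
  - mesh-gateway
  http:
  - match:
    - headers:
        x-beta-tester:      # hypothetical header
          exact: "true"
    route:
    - destination:
        host: frontend
        subset: v2          # beta testers get version 2
  - route:
    - destination:
        host: frontend
        subset: v1          # everyone else gets version 1
```

Because it is the same VirtualService machinery used between services inside the mesh, the same kinds of rules apply equally to traffic entering it.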
Thank you very much, everyone. Yes, sir.