Well, hello and welcome to another DevNation Live. As you guys can see, we've changed our platform a little bit, so I'm learning it as well as you are, but we have a huge crowd today, all the way from Vietnam, Johannesburg, and every place in between, it seems like. So thank you guys for showing up. Now, I've already given you the link in the chat to the slide deck, but we're gonna show it to you now. So let me flip over and start screen sharing and let's get going here. Screen shared, go, go, go. Click the right button. All right, fantastic. That should be coming through now. Let me flip into the right screen here and we're off and running. So you guys should already have access to this deck, and you can see that I already provided the link to you in the chat. As I said, the slide deck link is the bit.ly istio-canaries link, and the demonstration and the tutorial you'll see is the bit.ly istio-tutorial link. And actually I have those repeated on this one slide, because you'll want access to all these links right here. Today's presentation is a bit of a deeper dive into some more complicated technology around Istio and service mesh. If you're not that familiar with Kubernetes in general and OpenShift specifically, then you'll want to go back and do some of the introductory materials around microservices. There's actually some great stuff there: two free eBooks we've provided on basic microservices and reactive microservices, as well as some nice demonstrations of the older way of doing microservices, if you will, older slides and even video training. And we even did another DevNation Live that specifically covers Kubernetes for Java developers. You'll want to go back and check those things out. Today we're going to go a little bit deeper, a little bit faster, and we're going to spend about 30 minutes together.
And again, the link to this deck is bit.ly istio-canaries, which will give you access to all these other links. Let's go ahead and dive in now. So if you've been developing Java applications or .NET applications or PHP, it doesn't really matter at this point, you normally think of your application. Your application is the most critical thing you create as a software developer. In my case, as a Java developer, or even a .NET or PHP developer, you build the app. But what we've been doing over the last couple of years is thinking in terms of microservices. So that one big monolithic application might become, in this case, let's say 20 independent services. And what that means is the big application starts to fade into the background over time as we start thinking about our services, and those services start becoming distributed and networked. And this is where things get super complex. When we only had one single monolithic application, the rule was pretty easy: if the app was up, we knew it was up; if it was down, we knew it was down. Now, in this world of a network of distributed services, if one of those components is up or down, it might actually impact the rest. You may have a cascading failure across the entire network. So we have to start thinking more carefully about how we manage a whole series of microservices than what we had to do in the previous monolithic world. Microservices is fundamentally about distributed computing: a bunch of services running in independent Java virtual machines with their own operating systems, running out there on different compute nodes in many cases. So it might be several different virtual machines, several different pieces of hardware, but they're running across a distributed network. And because of that, you have to have these kinds of microservice capabilities. I call these the microservice-ilities.
What kind of capabilities might your microservice need? You have to think in terms of: my service now needs a really clean, solid API, because that's how another service talks to me. A great API that I provide, whether it be REST and JSON or something else, gRPC, or maybe you old-school folks are more SOAP and WSDL. I'm just kidding, I know we're trying to get out of that. But you have to have a clean API that other folks will find you with, right? So discovery: they'll find you, they'll call you, they'll invoke you. So the invocation matters. You need to be elastic, you need to be resilient, and you need to scale. And if you fail, you need to be fault tolerant. You have to think about all those -ilities, if you will, around your microservice. Now, how do you achieve these things? How do you get a pipeline? How do you get authentication? How do you do monitoring and tracing across that network of services? Well, let's do a quick short history of microservices, because it actually puts things in context, I believe. We've actually been trying to solve the go-faster problem for our software development for many, many years. In other words, how do we take really massive-scale waterfall projects and big monolithic applications and break them down into more incremental units of work? Continuous integration, which came out of the XP world, was really a way for us to think about that, and the Agile Manifesto really wanted us to think about that too. How do we create a more incremental work product? In this case, maybe doing sprint cycles over the course of two or three weeks, as opposed to a big six-month monolithic waterfall model, right? We also had the cloud arrive in 2006, and the DevOps concept coined in 2009. Java EE 6, by the way, came out in 2009. I put that out there because that was when Java EE really came into the foreground when it came to building new modern workloads.
But Netflix specifically moved to AWS in 2010. They moved from on-premise data centers to a cloud-hosted architecture, and they wanted to achieve DevOps. They wanted to achieve velocity for their teams, and they started thinking in terms of how to break their monolithic applications out into microservices. Dropwizard also came out in May 2011. And then Netflix starts to open source their microservices architecture in the form of Ribbon, a client-side load-balancing mechanism, Hystrix, a circuit-breaker mechanism, and Eureka, which is actually a discovery service. So you start seeing all those things born into open source as of 2012, and microservices start showing up on the ThoughtWorks Radar in 2012 as a term to assess and think about. Docker, of course, is born in 2013, Spring Boot also in 2013, and microservices are more officially defined by Martin Fowler and James Lewis in March 2014. So you can kind of see the series of events that gave us the huge interest in microservices as it stands right now, and all the hype and the hoopla around it. Kubernetes came out in 2014 shortly thereafter, and now you see most people talking about containerized microservices: Docker containers running on a Kubernetes backplane, often with a Spring Boot-based payload as the application development model, leveraging a lot of that awesome Netflix OSS. But the one thing I want you to think about here is that this stuff was invented between 2010 and 2012. It's now 2018. Perhaps there's a better way to do it, a better way to think about this problem, and that's where the service mesh concept comes in. So what's wrong with Netflix OSS? There's nothing wrong with it. It was awesome. It taught us some amazing things. They definitely showed us the way when it came to microservices architecture. But the Netflix OSS capabilities are Java only. What if you have something other than Java?
There might be a .NET application or PHP or Python or some other Ruby-based application. Node.js has gotten very popular now. Also, all those libraries are stuck in your code, and that's important to understand too. You now have to carry that weight with you. So if you think about it from the distributed computing standpoint again, you have this concept of the container, the operating system runs in it, the JVM runs inside that, and then your service: service A, B, and C here, you can see A in blue and B in green and C in this nice plum color. They have to carry the discovery mechanism: how do I find another service? The load balancer: let me load balance across the services. Resiliency: what happens if the other service I'm calling fails? Also metrics and tracing. So the different services have to carry that load, and that can be very problematic. Every service would have to add all those capabilities to it. So think about all those microservice capabilities again: discovery, invocation, elasticity, resiliency, et cetera. Where do they come from? Let me show you how we've been dealing with many of these things over the last couple of years. The platform Kubernetes is one key aspect of that, and OpenShift, which is Red Hat's distribution of Kubernetes, is our enterprise-supported version of Kubernetes. Right away, you can see that Kubernetes solved the discovery problem. We didn't actually have to embed that as a library inside your code. Kubernetes manipulates DNS in such a clever way that you can just simply say "service B" and invoke service B, that's it. All you've got to do is call service B. And when it came to invocation, you just use straight-up HTTP, right? You didn't have to do anything fancy there, just straight HTTP: GET request, PUT request, POST request, and you'd be fine. It also gave you elasticity, meaning it's easy to scale.
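To make that discovery point concrete, here is a minimal sketch of what a Kubernetes Service for one of these microservices might look like. The names and ports are illustrative, not taken from the tutorial repo; the point is that the Service name becomes a DNS name any other pod can call.

```yaml
# Illustrative Service definition; the metadata.name becomes the DNS entry
# that other services use, e.g. http://recommendations:8080
apiVersion: v1
kind: Service
metadata:
  name: recommendations
spec:
  selector:
    app: recommendations   # traffic goes to any pod carrying this label
  ports:
  - port: 8080
    targetPort: 8080
```

With that in place, the calling code needs no discovery library at all; a plain HTTP GET to the service name is enough.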
If you want two of these, or 20 of these, that's just a declaration from a Kubernetes standpoint, and you get that kind of for free. You also get monitoring out of the box in many cases with just a base Kubernetes architecture. But OpenShift augments it even further, right? We did some very specialized things for logging. We did some very specialized things for pipeline integration, Jenkins pipeline integration, and a lot of other things that go over and above base Kubernetes. And most of that, of course, continues to get pushed into the upstream of Kubernetes over time. Now this is where Istio comes in. Istio, by the way, means sail. It's another Greek term. In the Kubernetes world, we have a lot of Greek nautical terms: the word Kubernetes itself means helmsman or ship's pilot, and Istio means sail, as an example. And so the service mesh is basically dedicated infrastructure for service-to-service communication. Kubernetes handles the running of all those processes, all those containers, all those pods, and it can ensure that every one of those processes is load balanced and managed around a series of compute nodes in a big old cluster. So Kubernetes ensures all that, but the actual service-to-service invocation, the service-to-service communication, Kubernetes kind of ignores. It just assumes you're going to use HTTP, that's all it is. The service mesh focuses on that layer, service-to-service communication. It typically does it through the concept of a proxy, which we'll hear more about in a second, and we refer to it as the sidecar container. It basically is intercepting, like you'd think of Java interceptors, the network traffic from your business logic through this sidecar, and then of course manipulating it in interesting ways. So if you look at our chart now, the microservice-ilities, we have added Istio here. Istio augments things like tracing. It's focused on things like authentication.
It definitely helps with resiliency, as an example, and it augments discovery mechanisms too. Okay, we'll keep going here. So this is what it looked like before Istio, right? You had discovery and load balancing, so we had to have Ribbon in there, we had to have Hystrix in there, we integrated Zipkin, and maybe we used various capabilities, if you will, from various libraries, and in my Spring Boot application I had to add these additional jar files. That was before Istio. With Istio, you remove those, and it goes down to the sidecar container. The sidecar container is managing that. The sidecar container, that proxy we talked about earlier, is specifically based on this thing called Envoy, and it's very clever. What it's doing is manipulating iptables so that when service A calls out for anything else in the world on the network, it is intercepted by that sidecar, handed to that proxy, and you can manipulate the rules associated with that now; you can actually override those behaviors. We'll see that in the demonstration. So the concept of the service mesh has these kinds of capabilities that people are focused on. One, it's language-independent, not Java-only; because it's a sidecar and separate, it works with Node.js or Python, et cetera. It doesn't really have anything to do with Java in particular, but we've done some interesting things, and we'll do more interesting things, from a Java standpoint. You can also do really intelligent routing with it, and that's the thing I'll focus on with this concept of the canary in this presentation. You can do really smart canary releases with it, and that allows you to do things like really detailed A/B testing, and in the future things like dark launches through mirroring. It also has the concept of chaos engineering.
You can do fault injection at the network level, throw in 503s, throw in network delays, and see how your application, how your services, behave. So there are a lot of capabilities in Istio, and actually it's quite a huge project to be layered on top of Kubernetes, and it has a bit of a learning curve; you'll see that when you get to our tutorial. And again, the concept of the sidecar: in this case, I kind of want to explode what that YAML looks like. You can see the little additional sidecar container added over to your pod. A pod in the Kubernetes world can always have more than one container. In this case, we have two now, right? My business logic container and my sidecar container. So the Istio control plane has this concept of Pilot and Mixer, and then the authentication component, which handles mutual TLS and certificates. The idea is that the Envoy sidecar communicates with Pilot and Mixer to get its rules, its routing rules, and also to figure out things like quota, how to deal with exceptions and things of that nature, and how to deal with tracing and monitoring. But the actual pod-to-pod communication is still regular HTTP/1.1, HTTP/2, gRPC, or TCP, with or without TLS. So from a programmer standpoint, you just call service B. You don't realize there's something in the middle, something intercepting that call. And then Pilot and Mixer are what allow you to apply the specific rules that you want to change the behavior. So let's quickly talk about a canary deployment. You have this little build that moves through your pipeline and lands in a production environment, and it specifically takes on a small percentage of load, and you grow that percentage of load over time. If anything fails, you roll it back. If it succeeds, you keep rolling it forward. That's the idea, the concept of the canary in the coal mine. So that's really straightforward, but here's the challenge with canaries and Kubernetes.
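The two-container pod just described can be sketched roughly like this. Treat it as an illustration only: the container names, labels, and images are assumptions, and the real YAML emitted by sidecar injection carries many more arguments, volumes, and an init container for the iptables setup.

```yaml
# Sketch of a pod after sidecar injection: two containers, one pod
apiVersion: v1
kind: Pod
metadata:
  name: recommendations-v1
  labels:
    app: recommendations
    version: v1
spec:
  containers:
  - name: recommendations          # the business-logic container
    image: example/recommendations:v1
    ports:
    - containerPort: 8080
  - name: istio-proxy              # the Envoy sidecar added by injection
    image: istio/proxy
```

Both containers share the pod's network namespace, which is what lets the proxy intercept the app's traffic transparently.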
By default, the base service capability, which has load balancing built into it from a base Kubernetes standpoint, spreads traffic evenly across all the pods. So in order for you to actually have a canary deployment with base Kubernetes, you would have to roll out, let's say, 100 pods with one pod at version two to get 1% of traffic by default. Or if you wanted 10% of traffic, you could roll out 10 pods with one of those pods being version two, and then you get 10%. By default, it is even round-robin load balancing. We'll show you that in the demonstration. With Istio, and you'll notice the slight change here, Istio allows you to be very fine-grained. If you only want 10% or 1% of traffic going to version two, or only certain users with certain HTTP headers, you can do that by using an Istio route rule. Okay, so let's actually dive into the demo here. And I wanna make sure we don't run out of time, because there's a lot to show you from a demo standpoint that I think you'll find to be very cool, and we'll come back to this. So the demo URL, as you saw here, is bit.ly istio-tutorial. That brings you to this GitHub repo, where you will find us actively working on this every day. So you're gonna see massive changes to it as we go. We're gonna keep upgrading it with different versions of Minishift, which gives you a local OpenShift, and different versions of Istio as they come out. But it basically has this concept of three simple microservices: customer, which calls preferences, which calls recommendations. They're three Spring Boot applications, and I'll go over here to the code. Here's preferences as an example. So real simple, they're basically hello world, but we wanna show you the mechanics of the service mesh underneath it, not necessarily the Java logic inside it. There's not much Java logic here. But you can kind of see that we have the RestTemplate, and we're gonna basically go for that remote URL here, which is the recommendations service.
So basically customer calls preferences, preferences calls recommendations. Pretty straightforward. A simple chain of microservices, and in this case it's only three. And then recommendations will respond back with Clifford, the big red dog. So basically preferences is going to say, I've got red and big preferences. And recommendations says, oh, if you have red and big preferences, you get Clifford, the big red dog. Hopefully you guys know the context there. If not, I'll have to come up with a better example. And customer just simply returns C100. So to kind of make that point, I already have this running in here. So let's see, oc get pods. Okay, you can see that we have the three pods here. And I use oc interchangeably with kubectl, by the way. It's just a little shorter to type, but you can always say kubectl, because it is just Kubernetes underneath. And I have mapped Docker to this also. So if I do docker images, you kind of see that it'll show you all your Docker images. And we specifically have these Docker images here for this example. We have recommendations v1, customer, and preferences. So three different containers, running in three different pods. Here's version one of recommendations. And of course, there's the base customer and preferences. We're gonna make a version two of recommendations. But because we have all three, I can poll this endpoint and you can see what it's returning. There's the C100 from customer, the red and big from preferences, and Clifford v1, okay? So if you can see those strings right here, there's the Clifford v1 here. And there's the red and big from preferences right there, okay? So that's where that content is being returned from. And like I said, customer returns C100. So it's pretty straightforward, right? You've got pretty much a chain of events. You can invoke them. And all we're doing here is just issuing a little curl command. Let's look at this poll script here.
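If you want to play with the chain idea locally without a cluster, here is a toy sketch in plain JDK Java. The real services use Spring Boot and RestTemplate; everything below (the class name, the response strings, the wiring) is made up purely to illustrate "one service calls the next over plain HTTP and wraps the answer".

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChainDemo {
    // Stand up a tiny HTTP endpoint that always answers with `body`,
    // playing the role of the recommendations service.
    static HttpServer serve(String body) throws Exception {
        HttpServer s = HttpServer.create(new InetSocketAddress(0), 0);
        s.createContext("/", ex -> {
            byte[] b = body.getBytes();
            ex.sendResponseHeaders(200, b.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(b); }
        });
        s.start();
        return s;
    }

    // What "preferences" does conceptually: call the next service over
    // plain HTTP and return its answer.
    static String get(int port) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/")).build();
        return HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static String chain() throws Exception {
        HttpServer rec = serve("clifford v1");     // recommendations tier
        try {
            String pref = "preference => " + get(rec.getAddress().getPort());
            return "customer => " + pref;          // customer tier wraps again
        } finally {
            rec.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(chain()); // customer => preference => clifford v1
    }
}
```

In a mesh, the interesting part is that each hop in such a chain can be intercepted and rerouted without the code changing at all.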
You can kind of see this is a simple curl command, and you can just kind of curl, curl, curl, right? Real straightforward there. And actually I do have it running here in Firefox. Pop up here and you can see there it is running in Firefox, and I also have it running in Safari. So a very straightforward item, right? Just a little simple application there, and you can see it says Clifford v1 there, okay? Clifford v1 there. Let's hop back over here now. I'm gonna go back to the little tutorial. In this tutorial, we've already done these steps, right? Set up Minishift, install Istio, deploy customer, preferences, and recommendations, how to update code. So let's actually show you how we do recommendations v2. So let's add a second recommendations, v2, a canary deployment where we're making just a little code change. We wanna roll it out to a small group of users to see if they're impacted by it in a positive way or a negative way. If it's a bad experience, we wanna roll that back. If it's a good experience, roll it out to everybody. We're just gonna simulate that right now. So let's go into the recommendations code; you can see I'll come here and make it v2. So I'm gonna make a change. I'm actually gonna remove the word dog here and, let's make this a little bit more interesting, we'll put in a fire truck. What else is big and red? A fire truck here in the U.S. I know in Europe it's not so red in some cases, but often we have big red fire trucks here in the United States. I'm gonna save that Java code, and you can already see that, hey, I have a Git change right here, right? Because my git status has changed. Let's switch into the recommendations folder here. So this is where we're at. You can see it's just a generic Maven project, no big deal there. mvn clean compile package. So we're gonna rebuild that fat jar. This is a little Spring Boot application, nothing unusual at all really about it.
You can kind of see here's our fat jar right there. And so that's what it is. And then we just need to put it into our Docker image. So over here, we can see there is a Dockerfile. There's our little Dockerfile right there, very straightforward. We're using the fabric8 base image right there. So we can do a docker build -t example/recommendations:v2 with a dot at the end. If I spelled everything correctly there, we'll get a nice little Docker build real fast. You can see that the Docker build executes very quickly, because all the other layers have already been downloaded based on the fact that I built the previous ones. I can say docker images and grep for example. And you can see now we have recommendations v1 and v2. All right, so that's just a Docker image. If you look at our pods, we just have the pods for recommendations v1, customer, and preferences. So v2 is not yet running; we just have the Docker image. Let me make that point maybe a little bit more strongly. Again, it's a simple Spring Boot application. So if I do just a spring-boot:run, which is pretty typical of running Spring Boot in a localhost scenario, and let Spring Boot pop up there, you can kind of see if I say curl localhost on 8080, okay, it says fire truck v2. So that's just a Spring Boot app that I put into a Docker container. I could do a docker run now and run it. But what we wanna do is launch it into Kubernetes. And the way you do that is a little bit different with Istio as of this moment. This is something we'll work on a little bit more. But you have this concept of the istioctl command, okay? And its kube-inject. That's where the sidecar gets added for you. There is an automatic way to add the sidecar, but I don't feel completely comfortable with it right now, meaning it's hard to know if it really worked or didn't work in some cases. So in this case, we're going to manually add the sidecar, and that's what we're doing.
I picked the wrong one, or I'm in the wrong directory. Here we go. All right, so if I do oc get deployments, we now have two deployments, a v1 and a v2. And again, I could have done kubectl there to get deployments. Or let's just do get services. I only have one service for recommendations. And remember, by default, all the pods behind the service are automatically load balanced, and in this case, since I have one pod each, it's a 50-50 split. So I could say kubectl get pods. Okay, and you can see there's v1 and v2. Now here's the next trick you should be aware of. See, there's two-by-two here. That's the net new thing; if you're familiar with Kubernetes, this is a new idea. There are two containers in that pod, okay? So there's the two-by-two now. They both need to be ready. One easy way to kind of see this: you can just say kubectl logs, and let me just pick the pod name here. Yeah, and it'll actually give you an error message saying, which container do you mean? Do you mean the recommendations container or the istio-proxy container? So if you really wanna view those logs, you gotta use this -c trick: -c recommendations, okay? And there you can see the Spring Boot startup in that log file. All right, let's go back over here and run our poller again. Now you'll notice it automatically load balances based on the fact that we have two pods, v1 and v2. That is the default. Actually, let me show you this window up here. I run with a lot of windows open. Let's see, get pods. And so you can kind of see that now. You can see there's a v1 and a v2 and you get 50-50, nice clean round-robin load balancing, fire truck versus Clifford, okay? Back and forth, back and forth. That's standard Kubernetes, as I said, nothing unusual there. But what we wanted to do is have some fun with the Istio route rules. And that's where it gets super interesting. I'll just flip back over to the tutorial, okay? So you can see where I'm at.
We're not gonna get through all of this today. There's tons of other stuff happening here. But what I wanna do is just simply change the route rules. And let me just grab this one here. So in this case, we're gonna make all the routes from preferences to recommendations go only to v2, okay? So this is not the canary scenario, right? We're just simply gonna go only v2 and kind of make that point again. So let's see, get services. Okay, again, there's only one service, but I'm going to change the routing by applying the istiofiles route-rule-recommendations-v2 file. Let's go look at that file, route rule v2. You can see it's going to the label; pods have labels. So in the tutorial namespace, recommendations v2: 100% of traffic goes to v2 now. So if I come back over here and just do the poll, okay, you should see everybody's on fire truck, all right? So everybody's on fire truck. Everybody's on version two of that component now. And what's kind of interesting about this is it actually creates a new object type, or a new kind, in Kubernetes speak. You have this concept of the route rules. If you say get routerules, you can see it's recommendations-default. I can say kubectl get routerule recommendations-default, if I spell it correctly, with -o yaml, and I can look at the details of that. And that's basically the YAML file that we applied: 100% weight to v2. You can see it right there on screen, okay? Now, here's where it gets interesting. Everybody's targeting v2 now. Let's switch it back to v1. And that's also described here nicely. And notice the difference is it's oc replace. So in this case, I want to replace the route rule, recommendations-default. Let me do that again: get routerules. This is known as a CRD, by the way, a custom resource definition. I'm gonna replace that guy.
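For reference, the route rule being applied looked roughly like this in the Istio API of that era (a v1alpha2 RouteRule). Field names shifted between Istio releases, and the exact names here are assumptions, so treat this as a sketch rather than the exact file from the repo.

```yaml
# Sketch of a RouteRule sending all traffic to v2
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendations-default
spec:
  destination:
    namespace: tutorial
    name: recommendations
  precedence: 1
  route:
  - labels:
      version: v2      # pods labeled version: v2 get 100% of the traffic
```

Because routing is driven by pod labels, switching everyone back to v1 is just the same rule with the label changed, applied via oc replace.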
So if I come over here now and say oc replace, it's replaced, and it switches everybody over to v1. You can see now everybody's focused on v1. So I can go back and forth between v2 and v1 simply by changing those route rules. And that's again the magic of the Envoy proxy, sitting there underneath that container and intercepting the actual destination of the request. The code is none the wiser. It doesn't know that it was switched from one endpoint to another endpoint at all, okay? And then if you want to just go back to default behavior, you can just basically delete that route rule, okay? So if I come over here and say oc get routerules again, I can see what we have applied. So oc delete routerule recommendations-default. So we've removed the fancy Istio routing from it. And if we go back here, we should just have the generic 50-50 back and forth again, the traditional Kubernetes way of doing load balancing across the back end of the service. Again, that one recommendations service has two pods behind it, as you see up here, and it's just load balancing across the two of them. And if you notice, we actually return, let me pause it so you can see it, we return the hostname. You see that hostname right here for v1? That's the hostname here also. And then this is the hostname for v2 right there, okay? And you can see that here too. So literally the Java code in this case thinks it's running on two completely different computers, right? Each pod is a different computer from the Java code's perspective. There's a different JVM in each one. And it thinks it's running on a different host. And so that's why we show that hostname. So you can see that it truly is a different computer if you have a different pod, and you can also see it in the pod name. I wanna show you one little thing, though, to make this a little bit more interesting, and actually a couple of little things. Let me actually do this.
I'm gonna change it to a 90/10 mix like we said earlier. So we're gonna run that. You can see recommendations v1 and v2. So it's gonna go to v1 and v2, but v2 only gets 10% of the load. So approximately every 10 times you should see a v2, and approximately every 20 times you should see two v2s. So there's one. You'll notice it's not evenly load balanced like it was before. So Istio has some jitter in there. It has some logic going on that basically says, don't give it an exactly even spread, but do distribute things in the approximate percentage that the user requested. In this case, the 90/10 split that we've asked for here. And it's pretty straightforward to ratchet that up. So now, if I'm truly doing a canary rollout, I'm going from 10% of the traffic to, let's say, 20% or 25% of the traffic. So in this case, I'm gonna go to 25%. Actually, if I open this file, you'll see what I'm talking about here. You can see where v2 gets 25% right there in the weight. And if I run that, you'll see now, instead of being about one in ten, it'll be about one in four. You'll see more v2 fire trucks. And again, I can decide to roll that out further. Another thing that I'll show you, and then we'll kind of need to wrap up here from a canary deployment standpoint, is what I mentioned earlier. Let's actually do this. Let's go here to where it talks about Safari. I can also just set all my Safari users to go to version two. There's a get routerules. Let's see, that's v1, v2, so oc delete. Let's delete that one, recommendations-v1-v2. Let's remove that one. Okay, let's go back and see what that looks like now. It should be a nice back and forth. Again, we're at the baseline, 50-50, back and forth. No big deal there. Let's add the Safari rule, okay? And let's go back to my Firefox over here. So Firefox sees v1 and v2, if you see that. All right, v1 and v2.
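The approximate 90/10 behavior described above is what you would expect from a weighted random draw rather than a strict rotation. This is not Envoy's actual algorithm, just a toy Java sketch (all names invented) to build intuition for why a short window of requests looks jittery while the long-run split lands near the requested percentage:

```java
import java.util.Random;

public class WeightedRouteSim {
    // Route a single request to v2 with the requested percentage,
    // the way a weighted random draw behaves.
    public static String pick(Random rnd, int v2Percent) {
        return rnd.nextInt(100) < v2Percent ? "v2" : "v1";
    }

    // Measure what share of `requests` lands on v2.
    public static double v2Share(long seed, int v2Percent, int requests) {
        Random rnd = new Random(seed);
        int v2 = 0;
        for (int i = 0; i < requests; i++) {
            if (pick(rnd, v2Percent).equals("v2")) v2++;
        }
        return 100.0 * v2 / requests;
    }

    public static void main(String[] args) {
        // Over 10,000 requests the split lands near 90/10, but any
        // run of 10 consecutive requests may show 0, 1, or 3 v2s.
        System.out.println("v2 share: " + v2Share(42, 10, 10_000) + "%");
    }
}
```

Ratcheting the canary up is then just changing the weight, 10 to 25 to 50 and so on, while watching whether v2 misbehaves.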
Again, because it's random-ish load balancing now, you don't actually see them exactly 50-50. It just randomly bounces around, but both show up. Let's look at Safari now. Where's my Safari? It saw v1 earlier. Now let's check this out: v2, v2, v2. So Safari now is pinned to v2. So this is another clever way. We can actually say, look, I only want users on Safari to see the new canary rollout. I only want customers of a certain class to see the canary rollout. Or better yet, I only want employees to see the latest version of my software before it rolls out to all the customers. You can get fairly creative with this, and it's fairly straightforward. All we're basically doing, let me open up that YAML file; there's a Firefox one and a Safari one, let's look at the Safari one. It's just looking for the user agent in the header. And that's critical, right? It's just basically HTTP, whatever you want. If you had a cookie, you could do the same thing with a cookie, and you could basically then decide who sees version two. Okay, and if you wanna get really fine-grained about it, only that single group of users would actually see that version. So that's really kind of how you would do the canary rollout with this cool super capability you find in Istio. Now, this tutorial that I mentioned here has tons of other goodies in it. You can go on for days looking at how to do different things. Some things are not quite working at the moment as we actively build this tutorial. For instance, the dark launch concept with mirroring doesn't quite work as we had expected it to. There are some other little gotchas in here; load balancing works pretty well, but circuit breaking is still a little bit iffy at this moment. So just keep in mind that we will be updating this tutorial on a regular basis.
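The browser-pinning rule described above looks roughly like this, again in the v1alpha2 style of that era. The rule name, regex, and precedence value are illustrative assumptions; the idea is simply that a header match takes priority over the default route.

```yaml
# Sketch of a RouteRule pinning Safari user agents to v2
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendations-safari
spec:
  destination:
    namespace: tutorial
    name: recommendations
  precedence: 2              # evaluated before the default rule
  match:
    request:
      headers:
        user-agent:
          regex: ".*Safari.*"
  route:
  - labels:
      version: v2            # matching requests are pinned to v2
```

The same match block would work against a cookie or any other header, which is what makes employee-only or customer-segment canaries possible.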
Please do come back and check it out on a regular basis as we get better about documenting it and understanding it ourselves and then showing it to you. And I forgot to show you that it does have nice monitoring. You can see the monitoring right here in Grafana, which comes through Prometheus. So you can actually get all your Prometheus-based metrics and see them on the Grafana endpoint here. And then of course, you can also look at the tracing. So you get tracing and monitoring basically for free, which is nice. You don't have to actually worry too much about that. And again, because Istio is intercepting at the network level, it can actually produce all these metrics. Okay, let's hop back in here one last time. Again, the slide deck is at the Bitly Istio-Canaries link, and that gets you access to all the other links. And you can see a whole bunch of people did join me on the slide deck, so those folks have access to all these links. You'll wanna make sure you have access to that. I'll add this back to the chat, though, for people who came in a little bit late to the live session. We have many hundreds of you on today. But let me jump out of the screen share now and into Q&A mode. I'll be available to answer questions for just a few moments, but that was our 30-minute lightning session on Istio canaries and Kubernetes. Hopefully you guys enjoyed that. But let me jump out of here and we'll get you guys in to answer some questions. All right. Okay, and I'll add the link to the chat. And then let me go look at the Q&A here. Let's see what we have, if we have any specific questions. We had questions around, hey, is it gonna work here, from a technical standpoint. All right, let's see here. We had video questions, we had audio questions. Whenever you're doing live broadcasts, you're gonna get questions about that. When will the recording be available? It'll be available very soon. There will be a recording.
And we're gonna update the main DevNation Live website too, to show you the archive of the recording. So right now that website has not quite been updated, but at developers.redhat.com/devnationlive, that's where you'll see our historical ones from the previous platform, and you'll see these new recordings show up there too, along with other announcements. More questions about the video, more questions about the audio. I'm looking to see if there are any Istio questions here. Could be we explained everything so well that we covered it in full. And switching to a different browser was even one recommendation. Okay, I actually don't see any questions in the queue specifically around the content, but make sure you guys have access to the repository, the GitHub repo, which has the whole cool tutorial in it, and make sure you have access to those slides. Again, I put that link in the chat because that'll get you everything that you need, and look for changes coming from us in the future. One question that popped in here is: can Istio replace the whole Netflix OSS stack? And the answer is, Istio plus Kubernetes replaces almost all of Netflix OSS. There's one slight little thing that you might still take advantage of, and that is Hystrix, from an application-level circuit breaking standpoint. So on the application side of things, you want to have your fallback, if you will; basically, how do you program the fallback in case the endpoint fails? Also, if the endpoint is kind of slow, Hystrix does a nicer, more fine-grained version of that. So you might actually have a scenario where you don't want to replace everything; you might still use a little bit of Hystrix, as an example. But overall, you can assume no more Netflix OSS, if necessary. All right, well, that is all we have time for today. Let me double check one time on the chat. Here's the link, make sure people have it and then get access to that, and feel free to hit me up on Twitter.
Feel free to hit us up on email. Most of you guys have my email address at this point, based on the fact that the invitation came from me, and look forward to more DevNation Lives. We've got another great session coming up later this month, specifically deep diving into some Spring Boot on Kubernetes capabilities that you'll be interested in. And then we're gonna have even more great sessions coming up for DevNation Live in the future. All right, so if you have anything else, feel free to reach out to me on Twitter or via email. Thank you so much, and we'll turn it back over to the organizers here.