Okay, thanks for coming. So please welcome Kamesh Sampath, Director of Developer Experience with Red Hat in India. This is a topic from my wish list — Kamesh is going to talk to us about Istio, and I think Envoy as well. Yes, not much on Envoy, just Istio. Okay, thanks.

Cool. Good evening, everyone. There's a bit of an introduction about myself, as was already given. I'm an active open source contributor to OpenWhisk on the serverless side, and I work on a few other open source projects as well, including the fabric8 platform, which is another open source project, for Java application development on Kubernetes. And I'm the creator of the Vert.x Maven plugin. Any questions you have on this session that we can't get to — because the time is very short — you can reach out to my social handles right there: my email, my Twitter, or my GitHub.

Okay, how many of you do microservices today? Good, fairly good. So I'm just going to skip this slide; I'll just underline a few things in it. One is that a microservice runs in its own process, and the other is automated deployment machinery. These are two critical points for any microservice, per Martin Fowler — this is the official definition from Martin Fowler. For people who are going to start with microservices, I'd recommend the bunch of books and tutorials we have available; you can go ahead and get them — they're free downloads, you don't need to pay for them. We have two on microservices, one on Vert.x and another on Spring Boot, so you can use either and pick up the fundamentals of how to write microservice-architecture applications. And finally we have the Istio tutorial, which is a step up for people starting from ground zero: it shows how to deploy Istio, how to get started with it, and how to run it, with detailed instructions — that's at the bit.ly Istio tutorial link. You can also go to learn.openshift.com, where you'll find another Istio tutorial you can do interactively — self-paced, online, against a deployed OpenShift instance.

Before we go further into Istio, I want to give a very short history of how microservices evolved and why we are talking about Istio today. It all started with continuous integration and XP. How many of you know XP? Extreme programming — one. Then came the Agile Manifesto, defining what agile is and all that. The real revolution started when EC2 was launched; DevOps quickly followed, as we started to define what DevOps is all about. Then we entered Java EE 6, which was more modularized than what had come before. And then Netflix came on board: around that time, Netflix started to deploy on AWS. Then the first Java microservices framework was launched — Dropwizard. Netflix followed with Ribbon, then Hystrix for circuit breaking and the other twelve-factor-style resilience properties, and Eureka for the registry. Microservices practice was being defined there, and then Docker came along in March 2013. That's when one more revolution for microservices development started, after EC2.
Then Spring Boot took all of these together — the red ones, which are typically the Netflix components — bundled them up, and gave them to you to develop cloud-native Java apps, which we also call microservice applications. ThoughtWorks then defined what a microservice exactly is; the definition you saw on the first slide is quite close to this. And the revolution we are in now is Kubernetes, which started in 2014 — now everywhere you see Kubernetes deployments instead of raw Docker deployments. So that's a brief history of how microservices started.

But the real definition of microservices, from my end, is that microservices means distributed computing. It's nothing but breaking my services into smaller pieces and chunks. And when I say distributed computing, I mean a network of services — a lot of services talking to each other. You keep dividing your monolith into small microservices and keep deploying them. When it's a network of services, it inherently has to exhibit certain properties. For example, I need an API so services can call each other. I need a discovery mechanism — how do I discover a service? For example, register it in a Eureka registry and then look it up from Eureka to call it. Then there should be some kind of invocation mechanism, elasticity, and resilience. And a few other properties: tracing, monitoring, logging, and authentication. The bunch of circles you see here are the most commonly used microservice frameworks, which make all this quick and easy to develop.

Before we come to today, I want to show you what distributed computing — cloud-native application development — has looked like so far, even before Kubernetes. As I said, Netflix OSS was the center of everything; everything was built around Netflix OSS components. AWS gave me elasticity and resilience — compute instances plus other things. And I was using Zipkin for tracing. How many of you know about distributed tracing? One, okay — we'll look at it further down. Tracing is something very critical: if I have a chain of services like this, I start from the front and go all the way through, and when something goes wrong, or something is slow, or something is not working as expected, I want to find out where the problem is. That's where tracing comes into the picture. Zipkin was the first tracing system of this kind, and now we have the OpenTracing standard, which defines how tracing should be done.

Cool. Before we go further, I want to quickly give you a feel for how this looks. Let me take my screen over — actually, let me quickly mirror my screen instead, so it's easy for me to see what's happening. I'm sorry about the display. That's good. So when I deploy Istio — right now I'm running on a single-node OpenShift cluster, which is, again, an enterprise Kubernetes — I get Grafana for free, to know the status of my application and all that. And then I get — oh my God, what happened to this? The network won't come up here. The network... okay, there's no network here. Is there? Oh, okay.
Let me try the guest one, okay... if it connects — yeah, good. So we get multiple components when we deploy Istio. If you want to take a snap of this repository, you can do that — this is the repository that gives you a detailed walkthrough on Istio, if you want to get started from level zero, knowing nothing about Istio. We've developed it comprehensively in three variants: you can do it with Java, Node.js, or .NET, or you can even mix them and make it polyglot. It shows you how to deploy everything. I've pre-deployed it, considering the time we have.

Once we deploy it, we get three different operational components. One is Grafana, for your monitoring. Then you have Prometheus, for metrics collection. And Jaeger is an OpenTracing-based distributed tracing system, equivalent to Zipkin, which can be used to trace your components. We'll see examples of all of these.

What we've done is deploy a simple application, just like what you see: customer, preference, and recommendation. Three different microservices are deployed — customer calls preference, preference calls recommendation — and I have two versions of recommendation deployed. Which one gets called depends on rules I can define; that's the route rules Istio gives you. Natively, if you see the logs here, it's the typical round robin that Kubernetes does for you — roughly a 50-50 split of the load. But if we use Istio, Istio has properties that help you distribute traffic with smart networking. Say I want 25% of my traffic to go to v1 and the other 75% to go to v2 — I can control that (see the sketch below). That's one example.

So what we'll quickly do is look at how the distributed stuff works. I'm just going to generate some metrics — sorry, my token has expired, okay. I've created a thing that generates a custom metric for me; let's not go too deep into that. What I'm going to show you is that I run this, firing a bunch of calls, and the moment I do, I go to Prometheus first, fire the execute call, and you start seeing a bunch of calls coming up here. There's a synchronization issue, so it'll take some time. And if you look at Grafana, it should start giving me a bunch of metrics right now — let me refresh over the last two days. I'm just going to run it once more. You should start seeing the ops coming in, then the services, and then some more HTTP metrics. I'm not sure if this is visible there — let me blow it up a little. It gives you, per destination — remember `tutorial` is the namespace, so it's application.namespace — the P50 and P90 percentiles, and how much of the traffic is going to v1 versus v2. It will start generating the graphs for you soon. We also get all the other metrics — whether there were 401s or 402s, version-based breakdowns if I want metrics per version, and so on. There are pre-baked dashboard definitions shipped with Grafana, which we can just take and use.
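To make that 25/75 example concrete, here is a minimal sketch of a weighted route rule, assuming the older `config.istio.io/v1alpha2` RouteRule API that the istio-tutorial of this era uses; the `tutorial` namespace and `recommendation` service follow the demo, but treat the exact fields as illustrative rather than verbatim from the talk:

```yaml
# Weighted routing sketch: send 25% of traffic to v1, 75% to v2.
# Assumes the legacy v1alpha2 RouteRule API used by the istio-tutorial.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-v1-v2
  namespace: tutorial
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 2          # overrides the default round-robin behavior
  route:
  - labels:
      version: v1        # pods labeled version=v1
    weight: 25
  - labels:
      version: v2        # pods labeled version=v2
    weight: 75
```

You would apply something like this with `istioctl create -f` against the tutorial namespace; deleting the rule returns traffic to Kubernetes' default round robin.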
And if we go to Jaeger — see, I've already got the traces right here. It says that for this root I've got seven spans, starting from customer, going to preference, and from preference to recommendation. You can see everything else related to the trace: the tags, which headers went in, which HTTP methods were called, and the timeline — this call took 1.227 milliseconds, the other service took so many milliseconds, and so on. Technically, if you don't use Istio, you have to do all of this manually: deploy everything, configure everything by hand. Istio does it automatically for you.

And if you've been in the Java world, look at this piece of code I'm showing you — this is what I typically write when I want tracing. This is Java, but it has equivalents in .NET and other languages as well. It means that for every business application, every microservice I write, there are roughly 30 to 50 lines of code just to do tracing. With Istio you don't need that; it's done by default for you. All I have to do is propagate a bunch of headers from service to service (there's a sketch of this below), and then my traces are stitched together.

Let's get back to the slides — I ran some quick demos because I'm running out of time. So why did we choose Kubernetes? We saw a bunch of properties earlier — the ones your microservices have to exhibit. The thing is, some of those properties are not given to you by EC2, or by any cloud provider on its own. That's the reason we chose Kubernetes, or OpenShift. OpenShift is nothing but an enterprise Kubernetes that comes with support from Red Hat — we contribute to upstream Kubernetes, and we support Kubernetes officially, okay?

Why Kubernetes? Because per the definition, I want a single process running in my container, communicating with the external world. If I run a VM, there can be multiple processes running in that VM, which means I'm not well factored. If I go with Kubernetes, I get all of this: I can run containers — one process per container — talking to the others using REST APIs or gRPC or whatever you choose. And I get a lot of things for free. If I were deploying on EC2, I'd have to deploy a separate registry — taking EC2 as the example, I'd need Eureka or similar to look up services. With Kubernetes I don't need to, because Kubernetes already comes with a registry that lets you look up and reach services. Similarly, for invocation I can use name-based, service-based invocation, and for elasticity I can scale my replicas up and down in Kubernetes with ease.
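Coming back to the header-propagation point above: a minimal, hand-rolled Java sketch of forwarding the tracing headers from an incoming request to an outgoing call. The header list is the B3/Zipkin set the istio-tutorial documents; the servlet and HTTP-client plumbing here is hypothetical glue, not code from the talk.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.http.HttpServletRequest;

public class TracingHeaders {

    // B3/tracing headers Istio expects each service to propagate
    // so Envoy can stitch the hops into one distributed trace.
    private static final String[] TRACING_HEADERS = {
        "x-request-id", "x-b3-traceid", "x-b3-spanid",
        "x-b3-parentspanid", "x-b3-sampled", "x-b3-flags",
        "x-ot-span-context"
    };

    /** Copies incoming tracing headers onto an outgoing HTTP call. */
    public static HttpURLConnection propagate(HttpServletRequest incoming,
                                              URL target) throws Exception {
        HttpURLConnection outgoing = (HttpURLConnection) target.openConnection();
        for (String name : TRACING_HEADERS) {
            String value = incoming.getHeader(name);
            if (value != null) {
                outgoing.setRequestProperty(name, value); // forward as-is
            }
        }
        return outgoing;
    }
}
```

That handful of pass-through lines replaces the 30-to-50 lines of per-service tracing setup the speaker describes; Envoy generates and reports the spans itself.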
But what does OpenShift give you on top of that? That's why I said OpenShift is enterprise Kubernetes: I get monitoring, logging via Elasticsearch, all the metrics and other things we need to monitor, and CI/CD pipelines for your microservices development — OpenShift packages all of this so it's enterprise-ready for application development.

But even with all of that, we still have a few problems. These are the typical microservice pain points, at least from my perspective: discovery, tracing, circuit breakers, and then the operational requirements. I have to do A/B testing, canary releases, rate limiting, access policies — who can access what — and so on. These become more critical as your microservices keep growing, okay? For this talk, I'm only going to cover distributed tracing, which we already saw. Right now — I'm not sure how many other libraries are out there — we predominantly have Java libraries for distributed tracing. If you have to do tracing in other languages, you have to write your own library, which is painful: you can't really be polyglot if, for every language you write microservices in, you also have to write the tracing library. This is a big problem, and it's alleviated with Istio. Istio, by the way, means "sail" — again, the ocean-and-ships theme.

So what does a service mesh do? A service mesh is what alleviates this bigger problem for you. This is a definition I borrowed from this particular site. Say I'm an organization moving from a monolith to microservices, or I'm keeping a monolith and making it talk to microservices. In that case, look at the bunch of red blocks you see there: circuit breaker, discovery, tracing proxy — a whole bundle of libraries I need to add to each of my applications before they can talk to each other. In the earlier example, the Java tracing code was embedded inside my application, which means every application that needs to be traceable has to carry that piece of code — and it has nothing to do with my business logic. That's what a service mesh does: it takes out all the operational stuff — circuit breakers and the rest — moves it out of your application, and also lets your application talk to any other kind of application.

With that, this is how my hexagon turns out now. I delegate discovery to Istio, with Kubernetes — underneath, Istio uses the Kubernetes API. Resilience, again: Istio. Authentication: there's a component called Istio Auth — I'll show it on the last slide — that takes care of authentication. And tracing: we saw the example — we didn't write any code, we just called a couple of services and saw the traces show up in the UI. So say I have three different applications talking to each other — service A, service B, and service C. Before Istio, this is how it looked:
Everything is clubbed into one single application. If I want to change how I do load balancing, or how I do discovery, that requires an application redeployment — it's an operational concern, but I'm redeploying my application for it. What do we do after Istio? This here is just my application — a JVM application, but imagine it could be .NET, Node, whatever you want. We use something called sidecars. How many of you know about sidecars? Great. We move everything into a sidecar, and that's how Istio works — we'll see how.

For people who don't have an idea about sidecars, here's the very short definition I wrote. The important thing is that they are co-located: same namespace, same pod IP, shared lifecycle. There are two containers running in one pod, and they come up and go down at the same time. Each knows about the other, so the two cannot run on the same port. (See the sketch below for what such a pod looks like.)

But how do we do this? Every sidecar is an Envoy proxy. In Kubernetes, when you do an Istio deployment, these sidecars are Envoy proxies, and each Envoy proxy has two important discovery services: LDS, the Listener Discovery Service, and CDS, the Cluster Discovery Service. What LDS does is know all the services running inside this pod and report that to the central control plane — we'll see that in the next slide. How do I call my service? How do others find my service? That's all taken care of via the Listener Discovery Service, so that when pods come up and go down, your requests aren't troubled — Istio takes care of delegating to a new pod. The Cluster Discovery Service means that if, say, I have three pods forming a cluster for a particular service, CDS gives me the information to look up that cluster for the service. It kind of makes the whole Kubernetes API transparent for you, okay?

But how does Istio do this? Istio has something called a control plane. Istio is nothing exotic: it's a bunch of Kubernetes applications deployed in a different namespace, and you have to give them special permissions to go and talk to the Kubernetes API to make all this work. Every pod has an Envoy sidecar, as we just talked about, and everything talks to Istio Pilot, Mixer, and Auth. Auth is the component I mentioned that gives you authentication. For example, say I want service B to talk to service C with a particular authentication token. In that case, I can deploy a JWT filter — JSON Web Tokens — and pass the token along with the HTTP calls, all handled at the Istio level, so that every call from one service to another carries the token, and the receiving service can verify whether the caller is valid before talking to it. Istio Pilot is the component responsible for getting you the Envoy configs. Pilot serves the CDS and LDS APIs — they're REST APIs — and those take care of fetching the latest configuration for each box and pushing it into the respective sidecar proxies.
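As an aside, here is a minimal sketch of the sidecar idea described above — two containers co-located in one pod, sharing the pod IP and lifecycle. The image names and ports are hypothetical placeholders; in a real Istio deployment the proxy container is injected for you rather than written by hand:

```yaml
# Minimal sidecar sketch: app + proxy in one pod, sharing the pod's
# network namespace. Images and ports are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: customer
  labels:
    app: customer
spec:
  containers:
  - name: customer                 # the business application
    image: example/customer:v1
    ports:
    - containerPort: 8080          # the app listens here...
  - name: proxy                    # the sidecar (Envoy, in Istio's case)
    image: example/envoy-proxy:v1
    ports:
    - containerPort: 15001         # ...so the proxy needs a different port
```

Because both containers share localhost, the proxy can intercept the application's traffic without the application knowing anything about it.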
Whenever something changes, there's a window of around 600 milliseconds — that's configurable; by default it's a 600-millisecond cache time — within which the new config gets pushed to your proxies. Mixer takes care of all your quotas, rate limiting, and access control — who can call what, and so on. And Istio Auth — I'm not sure whether it's in right now, but they're working on getting SPIFFE in as well, so it should land soon. And all the protocols you'd want are possible: HTTP/1, HTTP/2, gRPC, you name it. So this is how we go into the next generation of microservices, via a service mesh — we saw all these things are possible. We won't have time for canary deployments, but the tutorial I shared earlier covers canary deployments as well. You can go step by step; it's very easy to follow. And if you need anything added, just file a GitHub issue for us and we'll look at getting it done.

So we have all of these capabilities: intelligent routing, and smarter canary releases — there's one example in the tutorial where we call a service from, say, Firefox, and call the same service from mobile, and the requests go to two different versions of the service based on the configuration. That's smart routing, and smart canary releases are possible the same way. Chaos engineering: we can do fault injection, making calls fail automatically. It's all just creating Istio rules — you give a namespace, a service name, and a version (a version is nothing but a label), and Istio automatically attaches the rules to those things, so whenever calls go to that service, Istio takes care of applying them. We have circuit breakers, too — though this circuit breaker is not the typical microservices-defined circuit breaker; it's more of a bulkhead kind of thing. You won't see it on single calls, but as concurrency increases, you'll see how much better the circuit breaking works. And fleet-wide policy enforcement: we can define policies and have Istio take care of delegating them to each of the pods.

So how different is my Kube YAML going to look? The right-hand one you see here is the typical YAML I deal with. The important thing is the app details: I've used a `version` label — it could be any label, just make sure you use the same label in your Istio rules as well. That's a single container. The moment I move to Istio, I have one more container: you can see a sidecar deployed here. That's the Istio Envoy proxy sidecar that gets deployed.

And I guess I'm right on time. This is the tutorial I was telling you about — if you wish, take a shot at it and let us know how it goes. It runs from ground zero up to even the complex levels. Okay, that's all I have for the day. Thank you for your time. And then — maybe, I'm not sure — I think we have two minutes for questions. Well, this is the last talk, so we're pretty free; let's cap it at the hour. So yeah, any questions? Any questions — or if you want to see how the rules work, I can show you how the rules work. Otherwise, you can take a shot at these tutorials at your own pace. Saturday evening is pretty hard on you guys — sorry about that.
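Since the speaker offers to show how the rules work, here are two minimal sketches of the rules described above — user-agent-based smart routing and fault injection — again assuming the older `config.istio.io/v1alpha2` RouteRule API from the istio-tutorial; the regex, percentage, and status code are illustrative values, not ones from the talk:

```yaml
# Smart routing sketch: traffic whose user-agent matches the regex
# goes to v2; everything else follows the default or weighted rules.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-from-firefox
  namespace: tutorial
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 3                  # checked before lower-precedence rules
  match:
    request:
      headers:
        user-agent:
          regex: ".*Firefox.*"   # illustrative matcher
  route:
  - labels:
      version: v2
---
# Fault-injection sketch (chaos engineering): abort half of the calls
# to recommendation with an HTTP 503, without touching any app code.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-503
  namespace: tutorial
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 2
  route:
  - labels:
      app: recommendation
  httpFault:
    abort:
      percent: 50
      httpStatus: 503
```

The bulkhead-style circuit breaking mentioned above is configured in a similar spirit — via a destination-side policy with connection and ejection limits in the same API family — which the tutorial also walks through.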
So this is the tutorial I'd recommend you go through, because that's where we define the terms and walk through each and every step. It's largely copy-paste kind of stuff — you don't need to spend much time figuring out what each piece does; we just get Istio deployed onto your cluster and go from there. And these books are highly recommended if you want to get into microservices development. And that's it. Thanks. That was pretty interesting — I might try those tutorials myself. So, any questions? Maybe one minute, two minutes? No? Okay. Okay. Thanks. Thank you so much.