Hi, everybody. I'm Peter. I want to talk about Istio and why we actually need it. I want to start with some motivation here. Speaking of DevOps, there are so many things everybody says you shouldn't do when you're in a DevOps team or doing Agile. And I want to say one thing: when you're doing Agile and DevOps, you're basically looking for cooperation within cross-functional teams. A cross-functional team is a team of people who can cooperate together but are still specialized. So they can use the same tooling across the team, which is a really hard challenge, because there are times when new tooling comes into the team and you see two groups of people clashing: one saying, all right, we want this new tooling, and another saying, well, this is too hard, we can't do that, I don't see the value, and so on. What's happening here is that the barrier to entry for these new technologies is too high for everybody else, so they won't accept them.

The way we deal with this is by creating APIs. APIs, as you may know, abstract what's under the hood, so developers can concentrate on what steps they need to take to build their solution or reach a final state. But you can go even further, and that's what Kubernetes and Istio do: they create declarative APIs. Plain APIs abstract the implementation, but you still need to know the steps. With declarative APIs, you don't need the steps at all. All you specify is the final state you want, and it's then on the system to determine what steps to take to get there. We'll see this in a couple of seconds. And such a system has one property: it's easy to start with, but it's really hard to master.
And that's because we just created two levels of abstraction, so people can come in and say, all right, I want to deploy this, and the system will figure out how to do it under the hood. But then if we want to optimize things, or try something like a different deployment strategy, we need to drill down into the technology, and there's a lot to work through there. That's why we have specialized roles in teams. Still, it's good for everybody else to understand what the final state is and what the technologies can do.

We need to start with Kubernetes first, because Istio builds on Kubernetes: we need some Kubernetes infrastructure in place before we can enhance it with Istio. I want to start with a really simple example. You don't need to focus on the whole code, just on the red rectangle. This is a fully runnable configuration. What you're saying here is basically: just run nginx 1.14, and I don't care how you do it. Under the hood, it pulls the image, stores it somewhere, and so on. The system determined what steps to take to realize this configuration.

What's more, you can edit the same configuration and say: all right, I don't want just one replica, I want three of them. Now systems like Kubernetes and Istio see a conflict: one replica is running, but suddenly you want three. So the system will again determine the steps it needs to take to get three replicas up and running.

The last Kubernetes capability we need before we can talk about Istio is exposing services. So we can do scheduling, we can do scaling, and we can also expose services, which gives us service discovery capabilities.
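The configuration I'm describing might look something like this; a minimal sketch, since the slide isn't reproduced here, with the `nginx` naming assumed:

```yaml
# A minimal, declarative Deployment: "run nginx 1.14, I don't care how."
# Kubernetes determines the steps: pull the image, schedule pods, and so on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3          # edit this from 1 to 3 and Kubernetes reconciles
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
```

Re-applying the edited file with `kubectl apply -f deployment.yaml` is all it takes; the system works out how to get from one running replica to three.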
The first example is the same deployment, but it has labels, and you'll see labels everywhere in the Kubernetes world, because they're the way we describe any workload or any resource in the cluster. Every resource that you label is then queryable, by you or by other resources in the cluster. In the first rectangle there's a label saying app: nginx, so all the blue rectangles are labeled app: nginx. The service then basically queries the cluster and says: all right, give me all workloads labeled app: nginx. This is how the two connect.

One disclaimer: there's no limit to how you label things. You can deploy Apache and then label it nginx, and nothing stops you. It will just work, but you may be really surprised by what happens if you do something like that. So there are no guardrails here.

There are more things: you can also mount volumes, configuration, certificates, and so on, but I don't want to give examples of those because they're too complicated for this session. And there's one more feature that's really cool: what happens when a node fails or a pod gets deleted. Our configuration says, hey, we want three replicas, and we only have two. Kubernetes again detects the mismatch and takes steps to repair it: it replaces the pod for you. So you don't need to worry about not running the configuration you want, because if the cluster is in a good, healthy state, you'll always be running what your declarative configurations describe. But what we modeled is a really basic scenario, basically exposing three nginx workloads. It's nothing hard.
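The label-based wiring can be sketched as a Service; a minimal example, assuming the `app: nginx` label used on the slides:

```yaml
# A Service that queries the cluster: "give me all workloads labeled
# app: nginx" and load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx         # any pod carrying this label is picked up
  ports:
  - port: 80
    targetPort: 80
```

Note that nothing stops a mislabeled workload, say an Apache pod labeled app: nginx, from being picked up too, which is exactly the disclaimer above.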
But what about the advanced things, like A/B testing, or policy enforcement if you want to enforce TLS on internal communications, or if you want to load balance in a different style than round robin, which is what Kubernetes does? That's a hard topic, and you don't really see it in many starting projects, because you don't need it yet. But when you do start needing these things, Kubernetes really starts to fall short, and you have to think about how you actually want to implement them.

The example we'll build now uses the same infrastructure we just built with Kubernetes, with one really simple change: we have two versions instead of one. Our goal now is to split the traffic, to create A/B testing, using HTTP headers to direct traffic to the different versions. So we have two versions, and they also carry one new label, shown below the rectangles: version. All of them are labeled app: nginx so the service can see them, but we have also labeled them with their versions.

We can now walk through the three components of Istio that will enable us to do this, and the first component is called the destination rule. A destination rule creates policies that are applied to traffic that arrives at a service, which means the traffic is already there; it was already routed by something, and now you say what should happen to it. Here you can define load balancing policies or strategies, which can be round robin, or based on cookies, and so on. And there's one more feature, which is what we'll be looking at: it's called subsets.
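A destination rule carrying such a load-balancing policy might look like this; a sketch, where the `nginx` host and the cookie name are assumptions:

```yaml
# A DestinationRule applies policies to traffic that has already been
# routed to the service -- here, cookie-based sticky load balancing.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx
spec:
  host: nginx
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session    # hypothetical cookie name
          ttl: 0s
```

Replacing `consistentHash` with `simple: ROUND_ROBIN` would give round-robin behavior instead.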
Subsets live within destination rules, and what they do is create sets within the global set of the service's workloads, where we define which labels distinguish the versions. We know we have two different versions running, but we want to tag them so we can direct traffic to one version or the other, instead of just letting it go wherever. And it's a simple configuration: you just say, all right, these labels are the ones used to determine whether it's version one or version two.

What we need next is a core feature of Istio, and it's a kind of big example. Focus on the right side, where we have example traffic containing the HTTP header service-version: 1. We want to direct this traffic to the service, and the service must then recognize it and route it to the workloads labeled with version one. It sounds like an easy job, and it is, with virtual services. Virtual services are basically routers, and they work by letting you define rules which must match, which must be true, for traffic to be directed a given way.
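The subsets plus the header-based rules described here could be sketched together like this; the names (`nginx`, `service-version`, `v1`/`v2`) are placeholders for whatever is on the slides:

```yaml
# Subsets tag the two versions of the workload by their labels...
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# ...and a VirtualService routes on an HTTP header, with a default case.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - nginx
  http:
  - match:
    - headers:
        service-version:
          exact: "2"       # exact match, not a prefix
    route:
    - destination:
        host: nginx
        subset: v2
  - route:                 # no match clause: the default case
    - destination:
        host: nginx
        subset: v1
```

Any header value other than exactly 2, including versions three, four, or five, falls through to the default route and lands on v1.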
We have two rules here: the first is the big rectangle, and the second is below it. The first rule says: match everything that has the header service-version exactly equal to two, not a prefix, nothing else, exactly version two, and then route this traffic to a destination, which is our service, but to a subset called version two. The second rule has no match clause, which makes it the default case: if no rule matches, the default is used. So for the traffic on the right side, it hits the first rule, which says no, because it doesn't match, and then falls through to the second rule, the default case, which routes the traffic to version one. What would happen if you switched the header to version three, or four, or five? It would all still be routed to version one, because that's the default case we have right now. But if you created more subsets for those versions, you could add rules for them in the virtual service and direct the traffic there. There are more features in virtual services, you can do HTTP failovers or recoveries, but that's out of scope right now.

There's one more, last component, called the gateway, and I'd say it's the easiest component, because it's the component that runs on the edge of your service mesh. It's basically the entry point, the point where all traffic goes in and out. In a gateway you define what port it should listen on, what protocol it should accept, and what hosts it should accept. Gateways are then used to direct traffic into virtual services, where all the rules are defined. One thing I forgot to say: virtual services can be bound to gateways. In this example you see the gateway, and if we go back, the first rectangle says: bind these virtual services to the gateway. So the virtual service is the component that binds to gateways on one side and to subsets on the other, to direct traffic.

All right, going up a level, let's take a quick look at what the life of a request is, I'd say. The traffic goes into the gateway, and gateways are either open or closed to it, which means we need HTTP traffic on port 80 for the host example.com to be able to pass through this gateway. Then we go to the virtual service, and what the virtual service does is take the request and go from top to bottom through all the rules we added, matching as it goes. If a rule doesn't match, it continues; if it matches, it stops and says, all right, this is where the traffic will go, and tags the traffic with the subset it should be directed to next, if you're using subsets. Then the traffic goes to the service, which applies the policies on load balancing or TLS, which basically help you tweak the infrastructure and traffic routing.

To summarize, I know this may be a lot of things, but there are three main components of Istio: the gateway, the virtual service, and the destination rule. Gateways are just entry points. Virtual services are basically routers, where you say, all right, these are my rules, and if traffic matches any of them, route it somewhere; you can also do things like failovers or timeouts. And then we have destination rules, which are basically policies on how to handle the traffic further. As you could see, what we did was take a Kubernetes infrastructure and just adopt Istio on top of it, and that's what you can do in any of your projects. If you think you don't actually need Istio right now, you don't need to use it right now. You can just use Kubernetes, and then, when you feel like you actually need more of these
things like traffic routing policies, then you can just add Istio on top. It provides a declarative networking API, which basically means that anybody should be able to understand what's happening: any member of the team should be able to read, if not write, the configurations and say, all right, this is what's happening there. One last thing: it's kind of easy to read when it's written, but it's not so easy to write, and you really have to watch out for pitfalls such as labels, because if the traffic isn't going where you want, you may simply have a problem with your destination rules or labels. And that's it from me. If you want to learn more about this topic, you can go here or look at this presentation. These documentations really go deep into the technical details, and you can find all the references there, so they may really help you. Thank you. Any questions? We have a lot of time.

Inside the service mesh, service-to-service traffic, that's my understanding, that's sort of the class of these pitfalls for Istio. Could you expand on that?

With the service mesh, or why do you...?

How does Istio work within the mesh, not just as an ingress controller? There are other options: there's Ambassador, OpenShift has HAProxy, and so on. I'm more interested in how Istio operates. I think it's a sidecar, is that correct?

Yep. So what happens in Istio is that for every workload you deploy, Istio deploys something called a sidecar alongside it, which is a proxy. The proxy then enables all these restrictions and policies, because every time you apply a configuration, all these sidecars are updated to comply with the policies you just set. So yeah, it operates with a lot of Envoys.

What type of authentication is there from service to service, is it JWT, what's supported?
Well, there's TLS; there are certificates everywhere. But if you want to implement user-to-service authorization, you can use JWT as well, on the virtual service.

How does multi-tenancy look with Istio? Will every development team have their own Istio, or in a shared cluster will you have one service mesh shared among different teams?

Right now, what we do is we have one service mesh, but you can have multiple. There's a lot of isolation that comes with Istio. At the start you usually need just one Istio, but if you have really large teams, then creating multiple clusters and so on may make sense. Anybody else? Yep.

When building an application and using a service mesh, for testing and development, is it easy to troubleshoot where the traffic goes?

Yep, it is. I think there was actually a talk today about Istio and observability. One of the main features of Istio, along with traffic management, is observability, because all the traffic flows through these sidecars, so you can see all the traces and all the metrics in Istio as well. So it's really easier to debug than plain Kubernetes, yeah. One more question.

What happens if you bind multiple virtual services to a single gateway? What happens with the traffic?

Multiple virtual services to a single gateway? That's not a problem. What happens is that all these virtual services are merged together; I think there's a merge algorithm for virtual services, and for gateways as well. Any other question? Yeah.

Is URL or content-based routing possible in a virtual service? Yes, it is. What I did with headers, you can do the same thing with URLs.
So we can just say: every URL that has this prefix, or is exactly this. Yeah?

In this example, what is setting the header that you're filtering on?

The client, basically. We accept traffic from the internet, from some client, and then we filter on that.

So your client would need to know that they want to hit your...

Yeah, you can do multiple things here. You don't necessarily need headers; you can do things like set cookies instead, and those will travel with the clients all the time. Yep?

How about using SSL? Because you would be unable to match headers.

Yeah. So TLS can basically be terminated at the gateway, where you just say, this is where the certificates are, and it will mount them. All right, any other question?

So what happens if I have multiple types of sidecars, and do they comply with Istio? I mean, if they comply with the configuration, I don't see a problem.

Can you filter based on IPs?

Yes, you can. What you can do is say, for incoming and outgoing requests, where they can go or which addresses should be accepted. So you can basically whitelist or blocklist things. All right, last question? Oh, yep.

When exposing a gateway, it's basically a pod that's acting as a proxy, the one that actually terminates the TLS and so on. Is it possible to have that pod placed on a specific host, using a node selector or something like that?

Do you mean like a DaemonSet, or? Well, it's a pod.
I mean, can we have a second, passive gateway, where I could place it on the host I want? So what happens in Istio is that gateways are also just configurations, and these are configurations for ingress. Gateways are bound to ingress gateways, which are the physical proxies. Does that answer your question?

How is the gateway implemented, then? What does it run as?

Well, the gateway only binds to, it creates a configuration for, the ingress controller. So it configures a physical proxy.

But does it run as a pod itself?

Oh, no. Do you mean, if you deploy three gateways, whether you get three different pods? No, you shouldn't have that. Well, we can discuss it later, but I think you shouldn't: you should have just one pod, which is the ingress controller, and the gateway just configures it. You get just a single pod, which is the ingress controller. Yep. All right. Thank you, everybody.