Good morning, everyone. Sorry for the delay. Next we'll have John speaking about an introduction to Istio on Kubernetes. I hope I pronounced that right. Okay, can you hear me? One, two. Now I can hear myself. Hi, good morning. It's a pleasure to be here, and thanks for coming. I'm John. That's not a typo, that's my name. I'm Brazilian, and somebody probably wrote my name down wrong at some point. I get teased about it all the time, but anyway, it's a unique name. I've worked for Red Hat for two and a half years, and since the beginning of this year I've been working on the Istio project. Today I'm going to give you an introduction to Istio: what Istio is and what problems it tries to solve. Istio is not tied to Kubernetes, but for the sake of this talk I'm going to cover Istio on Kubernetes. Istio is closely related to microservices, so the focus of this talk is how Istio tries to fix some problems of microservices on top of Kubernetes. I'm not going to go deep into Kubernetes or into microservices; microservices are not the topic of this talk. But we need an overview of microservices to understand how Istio can fix some of their problems. So, this is microservices: instead of having one big application, you now have several smaller applications. That brings a lot of benefits. For example, each service can be written in a different programming language; you're not forced to write your service in one language just because the rest of the application is in that language. Developers are free to choose the right language for each service. That's a big benefit of microservices. Another one is independence of the teams.
The team writing a service can have its own schedule and its own roadmap, completely independent of service A or service B. Of course, the services are linked together through some API, so as long as they keep that API stable, each team is free to innovate on its own. But we're not here to talk about the benefits; we're here to talk about the downsides of microservices. The goal of this talk is to show the downsides so you understand why Istio is necessary. If before we had a single monolithic application, now we have several applications talking to each other in a distributed system. And as with everything that relies on a network, that can trigger several issues, just because of the network and the nature of distributed systems. For example, deployment is a problem: instead of deploying one single application, you now have to deploy several. Resilience: what happens if one of those microservices fails? How do you behave upon failure? Networking, of course, is the biggest problem: before, everything was in a single executable and calls were just function calls; now we have to make API calls over the network. And security, because now our data is going over the network, probably in plain text. Kubernetes solves some of those problems. Deployment, for example: Kubernetes is a deployment platform, it was born to fix the deployment problem, so we don't have to worry about that. Kubernetes also has service discovery and DNS, which is another thing we don't have to worry about. And it has simple L3/L4 load balancing. But Kubernetes does not solve all of the service-to-service communication problems, so developers still have to worry about those.
And developers tend to embed fixes for those problems into their applications. For example, fault tolerance, timeouts and retries: if service A calls service B and service B is taking too long to respond, how does the application handle that? The application can retry the call a few times, but developers need to handle that themselves. Developers now need to deal with networking problems. Monitoring, tracing, observability in general: because of the distributed nature of microservices, you want to know the path of a request. Service A calls service B, which calls service C, and you want to see that traffic. From a developer's point of view this is a solved problem, because there are several tools out there, such as Netflix OSS, that address it. If your application is written in Java, for example, you can just import a bunch of libraries or frameworks and you have all those features embedded into your application. That's quite easy to do. I'm not a Java developer, but I believe it's easy. But this is how your microservice ends up looking: here you have your actual service, your application, and then you have lots of dependencies embedded into it. So it's not a microservice anymore, it's a macroservice, right? Yeah. And this library approach doesn't scale, because if you use another framework or another language, like Perl for example, you have to deal with a whole other set of libraries. You have to maintain those libraries and keep track of them, so you're gaining more problems than you're fixing. Suppose service A is written in one language and service B in another: now you have two stacks of frameworks to maintain on the developer side. That's a problem.
So, what if we took all this common functionality out of the microservice itself and put it into another layer, separate from your application code? That would be great, because then the developers of the microservice could care only about the service itself, about the application. Developers shouldn't worry about other stuff like client-side load balancing, retries, or timeouts. That's a network issue, so let a proxy handle it. And that's the goal of Envoy. Envoy is a service proxy developed by Lyft, written specifically to fix that problem. It's open source and written in C++, so it's meant to be light and fast. It's a network proxy that comes out of the box with layer 7 filters, which is why it can do advanced load balancing based, for example, on the headers of the request. Hopefully we'll see a demo of that. HTTP/2 is a first-class citizen in Envoy, including gRPC. It has embedded service discovery, health checking, stats, metrics, and tracing. All those features are built into Envoy. If you're familiar with Kubernetes, you know the sidecar deployment model: in a Kubernetes pod you can have more than one container. So in this service A pod, we have the application container plus the Envoy container. That's the deployment model used by Envoy. How does this happen? This is the scenario without Envoy: when microservice A calls service B, it talks directly to service B. But with Envoy, there's a transparent proxy that intercepts all the calls on the network and makes the call on behalf of the application. An iptables rule redirects all the incoming and outgoing traffic of the microservice to the local Envoy. So in this scenario, service A does not talk directly to service B.
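To picture the sidecar model just described, a pod in the mesh ends up with two containers, roughly like this (a hand-written sketch; in practice Istio injects the proxy container for you, and the names and image tags here are illustrative, not from a real deployment):

```yaml
# One pod, two containers: the application plus the Envoy sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: app                      # the actual microservice
    image: example/service-a:1.0   # illustrative image name
    ports:
    - containerPort: 8080
  - name: istio-proxy              # the Envoy sidecar
    image: istio/proxyv2:1.0.0     # illustrative tag
```

The application container never knows the proxy is there; the iptables redirection mentioned above sends its traffic through the sidecar transparently.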
Instead, Envoy communicates with the other Envoy. So Envoy is intercepting all the traffic between microservices, and this is what your service mesh looks like; I mean, this is the concept of a service mesh: a bunch of services with this proxy inside the pods, communicating with each other. Services do not talk directly to other services; instead, Envoy talks to Envoy. That's the concept of a service mesh. But configuring a fleet of Envoys can be verbose and error-prone without automation. We need a control plane: something or someone that configures and sets up all those Envoys. Like I said, developers shouldn't worry about that; developers should worry only about their code. We need automation, we need a control plane. So let me show you Istio. What is Istio? Istio is the control plane for the service mesh. It abstracts all the Envoy concepts, because Envoy has a life of its own: it's an independent project, with its own configuration formats and tools, and it's really difficult to configure Envoy directly. Istio abstracts all those Envoy concepts and makes them easy to operate through YAML files. So if you're familiar with Kubernetes, you're familiar with Istio, because all you have to do is edit some YAML files, and you can use the kubectl command line. There is an istioctl command line as well, but that's for things specific to Istio; for everything else you can use kubectl. Istio is a project created by Google with the help of other companies like Lyft (the creators of Envoy) and IBM, and Red Hat is now joining the effort to make Istio work well on top of OpenShift, which is our distribution of Kubernetes. Istio is a young project: it's a little more than one year old and just reached version 1.0. So what does Istio look like?
When you install Istio in your Kubernetes environment, you install the control plane. The control plane is what the operator installs, and it's made up of several components. The first one I want to show you is Pilot. Pilot is the component responsible for configuring all those Envoys in your service mesh. By the way, we call all those sidecars in the mesh the data plane; the control plane is what controls the data plane. So Pilot is the guy who configures all the Envoys, so you don't have to configure Envoy manually. How does it do that? In two ways. It keeps listening to the Kubernetes API, so when a service or a pod is added to the mesh, it gets notified and forwards that information to the Envoys in a format Envoy understands. In the same way, users or operators can write rules about traffic in a YAML file. When you use kubectl to apply those rules, they go to Pilot, and Pilot forwards them to all the Envoys in your service mesh. Remember, service A does not talk directly to service B; Envoy intercepts the connection, and because of that it can do things like traffic control, or add timeouts and retries. That's an interesting feature. For example, if service B responds with an error, like a 500, you can configure your service mesh to retry instead of returning the error immediately. You can configure the mesh to try service B three times before actually returning an error. And this is done transparently: service A doesn't know, and doesn't have to know, that it retried three times. The application is not aware of it. Another component of the Istio control plane is Mixer.
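The retry rule just described could be written roughly like this, as one of those YAML files that Pilot pushes to the Envoys (a sketch of an Istio 1.0 VirtualService; the host name and per-try timeout are illustrative):

```yaml
# Retry calls to service-b up to 3 times before returning an error.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b          # illustrative service name
  http:
  - route:
    - destination:
        host: service-b
    retries:
      attempts: 3      # try three times, as in the example above
      perTryTimeout: 2s
```

Applying this with kubectl is all it takes; service A's code is untouched, which is exactly the point.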
Mixer is an interesting component because it lets you do things like this: here's a request coming in, and for every request coming into the microservice, Envoy will check with Mixer whether that request is allowed. So you can do things like API management or quota management here, and if the request is allowed, Envoy passes it on to the service or calls the next service. When the response comes back, Envoy reports it to Mixer. So Mixer is a component that does checks and reports. Before answering the caller, Envoy sends the report to Mixer, and Mixer can pass that data on to Prometheus, for example. So you get observability, metrics, tracing, and logging out of the box. Application developers don't need to write a single line of code to get all these features. Like I said, you can do quota management, and telemetry and tracing with Jaeger or Zipkin. And Mixer is pluggable: you can write your own adapter or plugin to interact with Mixer. That's an interesting feature, because if your company needs something that's not in the default set of plugins, you can write your own. The third component I want to show you is Citadel. Citadel is the security component of Istio. It's a certificate authority, a CA: every time a new service or a new deployment happens in Kubernetes, Citadel notices it, creates certificates, and puts those certificates into the Envoys. Why is that needed? Because it enables mutual TLS automatically. Let's say service A calls http://service-b. From service A's point of view, it's talking plain text, but on the wire it's TLS. This happens automatically, of course, if your operator configures it that way.
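Turning on that automatic mutual TLS in Istio 1.0 could look roughly like this (a sketch, assuming a namespace called book; both sides need configuring, so the server-side authentication Policy is paired with a client-side DestinationRule):

```yaml
# Require mTLS for services in the "book" namespace (illustrative name).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: book
spec:
  peers:
  - mtls: {}
---
# Tell the client-side Envoys to originate mTLS using Citadel's certs.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: book
spec:
  host: "*.book.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Again, the application keeps calling plain http://; the sidecars handle the certificates and the encryption on the wire.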
But this is a feature you can enable in your service mesh: it can enforce mutual TLS between services. It also allows authorization and auditing. Do we have enough time? I can show you a really quick introduction. Okay. I have Istio 1.0 downloaded here, and I have pre-installed Istio in my Minikube. Minikube is a Kubernetes running on my laptop, on top of VirtualBox. For example, if I run kubectl get namespaces, you can see there is this istio-system namespace. Istio is installed into its own namespace. So this is Istio installed. It's quite easy to install; I'll give you the links, but it's quite trivial. What I'm going to show you now is a simple application from the istio.io website. It's Bookinfo, a book store. This is the entry point of the page. The entry point calls the reviews service and the details service. The reviews service has three versions deployed: v1, v2 and v3. V1 does not call the ratings service; v2 and v3 call the ratings service and show black stars in v2 and red stars in v3. So these are our service A, B, C and D in our service mesh. I have five minutes. Okay. This is the same application with Istio installed. This black box here is the Envoy, and as you can see from the arrows, the service-to-service communication goes through all the Envoys. So I'm going to create this namespace here, called book. Okay, it's creating. Now I'm going to install that Bookinfo application. As you can see here in the deployment spec, I have just one container, which is details; here's the image, and it's going to be downloaded. But if I show you the deployments, the deployments are untouched.
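For reference, the three reviews versions in Bookinfo are modeled as subsets of a single service in a DestinationRule, along these lines (a sketch following the pattern the Bookinfo sample uses; it assumes each reviews deployment carries a version label):

```yaml
# One "reviews" service, three routable subsets, one per deployed version.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # no ratings call, no stars
  - name: v2
    labels:
      version: v2   # black stars
  - name: v3
    labels:
      version: v3   # red stars
```

Traffic rules can then target a subset by name instead of a specific pod, which is what makes the version routing in this demo possible.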
Yeah, but I want to show the pods. As you can see, all the pods here have two containers. The deployment specifies just one container, but when we installed it, Istio automatically injected the sidecar, the Envoy container. We can see that by issuing this. Let's get this pod, for example. Here is the pod spec: we have the first container, and then we have the second container here, which is the sidecar. Okay, I'm running out of time. Anyway, let me try this. Oops. Oops. What am I doing wrong here? That's right, but... Okay, it's not working. Did I create everything? Yeah. Anyway, I'm running out of time; I'm sorry the demo didn't work. But you can try this demo on the website istio.io, which is the main website of the Istio project. There's a docs section and a tutorials section; you can try installing this Bookinfo application and play with it. You can use a proper Kubernetes cluster, or you can use Minikube like I'm doing here. And if you don't want to install anything on your computer, you can go to learn.openshift.com and try everything in the browser; you can try Istio directly in the web browser. I don't know if we have time for questions. Okay, any questions? Maybe it was just too fast; I don't know if I managed to get all the ideas across. Okay, anyway, thank you very much. Yeah, I was interested in the load balancing aspect of that and how that would look if you were to have multiple containers for a single service. Would you have an Envoy for each of them, or would you have one Envoy talking to each of them? I'm not hearing you. Am I not coming through? Was that better? Sorry. I was interested in the load balancing aspect and how that would look. Would you have a pod and an Envoy for each load-balanced container as well? Does that make sense?
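The automatic injection shown in the demo is typically switched on per namespace. A sketch of what that looks like, assuming the Istio install includes the sidecar injector webhook (the namespace name is the one from this demo):

```yaml
# Label the namespace so Istio's injector webhook adds the Envoy
# sidecar to every pod created in it.
apiVersion: v1
kind: Namespace
metadata:
  name: book
  labels:
    istio-injection: enabled
```

With the label in place, the deployment YAML stays untouched, which is why the deployments above still show one container while the running pods show two.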
I'm still not sure I got the load balancer part, but for example: Kubernetes comes by default with round-robin load balancing for pods and for services. With Istio, you can do the kind of load balancing I was about to show in the demo: load balancing by headers, for example. So you can do some A/B testing based on headers: if the user is logged in, go to this route; otherwise, go to the default route. It's smart load balancing, which of course is made possible by Envoy. Envoy is the one doing the hard work here, and Istio is just leveraging Envoy's features and exposing them to the operator in a way that's easy to configure. Okay, thanks. Any other questions? Okay, thank you very much. I'll be outside.
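The header-based routing described in that answer could be sketched like this (names are illustrative and follow the Bookinfo pattern; the v1/v2 subsets are assumed to be defined in a DestinationRule):

```yaml
# Route logged-in user "jason" to reviews v2; everyone else gets v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:        # header set by the app for logged-in users
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:               # default route for everyone else
    - destination:
        host: reviews
        subset: v1
```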