Good evening. Can you hear me? Is it loud and clear, or do I need to use the mic? All good? Okay, let me try one more time. Good evening. Good evening. That is awesome. Welcome, and thank you so much for coming and spending your valuable evening here; I really appreciate it. And I feel great to be part of this meetup group, because it is an opportunity for me to learn as well as share at the same time. Let me introduce myself. My name is Anil, Anil Sagar. I am a customer engineer at Google.

Today we are going to learn about the secret sauce of microservices and API management. What is that secret sauce? We are going to find out. But before we do, let me ask you a few questions. What do microservices mean to you? Why do you need microservices? Anyone? Is it to debug? Or something else? Sorry? To avoid monolithic applications. To avoid monolithic applications, great. Why do companies need microservices? Is it for deployment? Yes. Is it for scalability? Exactly. To save resources, to save money, right? More than anything else. Yeah, absolutely.

So why do you need API management? What is the importance of API management? Sorry? To decouple front end and back end. That is great. What else? What is the importance of API management for the companies you work for? To uniformly apply policies. What else? That is a great point. Monetization, right?

Today, a lot of people think APIs are just a technology: a way to connect your mobile application to your back end, a way to connect your website to your back end, a way to do integration. That is not fully true; it is only partially true. What do APIs really mean? APIs are business products. When I go to Wikipedia, I see API defined as application programming interface, right? So you might be surprised to hear that APIs are business products.

Let me tell you a story about why APIs are business products, and why API management is critical for any enterprise, before we go into Kubernetes and all the technical stuff, because it is all related; we will stitch the story together. How many of you use Google Maps here? Almost everyone. How does Google Maps make money? APIs. Exactly. So what is the answer? APIs are business products. Google makes money through APIs. Take any company today: Google, Facebook, they all have APIs. They all integrate with other businesses, and indirectly they create new business revenue channels. So API management is critical for any company operating in this space today.

So what is the secret sauce of microservices and API management? What is in it for you? Let's find out. How many of you have heard of these two logos? How many of you have seen them? The first one is Istio. That is the secret sauce of microservices. And how many of you have heard about Apigee? Apigee is an API management platform from Google. It helps you manage APIs, secure APIs, scale APIs, develop APIs, analyze APIs, and monetize APIs. But today we are going to focus more on Istio, Kubernetes, and microservices, and talk less about Apigee, okay?

So, the agenda for today: what are the challenges of microservices? Microservices are really cool, right?
Everybody talks about microservices today. But there are challenges in building these microservices: deploying them, scaling them, managing them, and finding out what is going on. Kubernetes is a platform to manage the containers. So let's say you are managing the containers and building the microservices: how do you manage those microservices? That is where Istio comes into the picture. So we are going to talk about microservices challenges, we are going to talk about Istio and the service mesh concept, we are going to talk about how microservices and APIs come together, and we are going to see a cool demo, starting everything from scratch, okay?

So what is a microservice? People talked about why you need microservices, but what is a microservice? How do you define it? Anyone? Sorry? Independent components. Independent components, okay. What else? Breaking down monolithic applications into different components, right? Grouped together. So microservices is an architecture style: a way of developing applications where the things you do fall into these buckets. You break the application down into multiple small applications. They are fine-grained, each with a single responsibility. And they are independent: different teams can develop them, using different languages.

Let's say you belong to an e-commerce company, for example Lazada. There are different teams developing different applications. One team is working on the cart, one on the product page, one on the catalog. Different teams have different skill sets: one team knows Java well, another wants to develop on Node.js, another wants to use PHP. They want to develop these applications at different periods of time, independently, and scale and manage them without depending on other teams. So microservices are language and platform agnostic. Like I mentioned, one team can use Node.js, another PHP, another Java.

So why do you need microservices? With a traditional deployment model, say you have business logic and a database in a monolithic application. If you want to scale, you need to scale everything together, no matter what. Think about an e-commerce application. How many people actually browse products, versus how many actually go to checkout and buy something? If it is a monolithic application, you have to scale everything, whether users are just looking at products or actually checking out. With microservices, if you break these down so that product catalog is one service, cart another, and checkout another, then you can have only two machines for checkout and maybe ten machines for the product catalog. So microservices offer a lot of flexibility and also optimize your resources.

So what does it look like? Let's say you break down this monolith, the large application, into different components. It starts to look like a service mesh.
For example, you will have front-end business logic, back-end business logic, a database layer, and invoicing as a separate service, and you run these services using something like Docker, which packages them, on Kubernetes, which orchestrates the Docker containers. For a simple case with two services, it looks like this: invoicing runs on, say, four containers, and the front-end business logic runs on four containers, with load balancing across them.

What happens when there are more services? In any enterprise you will have multiple applications, and each application will have multiple services. Think about 10 services running on 10 machines: 10 times 10 is 100. Assume you have 10 applications, and it becomes 1,000. It will look something like this, with networking and service calls going from one microservice to another. What happens if something breaks in the middle? How do you find out what is going on? It is very complex.

So it is easy to create microservices. You can write the code, deploy it in a container, use Kubernetes, and start running them. But how do you manage these microservices? That is the challenge the industry is facing. It is not easy. It sounds very cool: microservices, let me break things down. But once you break them down and start running them, you end up with something like this. Say a particular service fails here. How do you know which services get impacted because of it? How do you trace? How do you monitor? How do you route? These are all the complexities you will face.

Obviously there are platforms that help you manage the containers: Kubernetes, which people talked about, and Docker, which helps package the applications, and others as well. But the problems you face once you deploy are: how do you load balance? How do you handle fault tolerance; for example, if a particular service fails, how do you see the impact of that fault across the other services? How do you observe which service is talking to which service and how traffic flows through the mesh? How do you monitor? How do you log? How do you do circuit breaking; for example, if one service breaks, how do you find out what is going on?

Think about the Internet. Today you are accessing some server somewhere on another continent. You are not worried about which routes and which servers the traffic passes through, but you still end up with that page loaded in your browser. Think about the same thing for microservices: how can you make the microservices world work the way the Internet works?

There are open-source tools you can use on top of Kubernetes when you deploy applications. For load balancing, you can use Ribbon. For the service registry, you can use Eureka. For tracing, you can use Zipkin. For monitoring, you can use Prometheus. These are the various tools available. But integrating all of them into your microservices will take a lot of time.
Because think about injecting all these components into your microservices. How do you inject them into that many servers? If you start writing code for load balancing, service registry, circuit breaking, fault tolerance, and monitoring inside your microservice, it is no longer a microservice, is it? It becomes a monolith once you start injecting all that code into it. So how do you do it? That is where something like Istio, the service mesh, comes into the picture.

Let me give you a simple example. Say somebody gives you 100 threads of different colors and you put them into a bag. It becomes a ball of thread. If somebody asks you to find out which end is where, and if some thread is broken, how do you go and fix it? It is not easy. You need a platform to manage it. That is where Istio comes into the picture.

Istio is an open-source platform developed by Google, IBM, and Lyft; most of the components came from these three companies. Just like Kubernetes took over the world, Istio is the next big wave coming to the market. Let's say you had a monolith and you have now broken it down into microservices. You are using Docker to package the microservices. You are using Kubernetes to manage those containers and run them. But how do you manage the microservices themselves? That is where Istio comes in. Let me take a brief pause here: does anyone have any questions about the story so far? Any questions? Okay.

So Kubernetes is a platform to deploy, scale, and execute containers. Istio is a platform to manage microservices. With Istio, you don't need all those separate components and you don't need to worry about integrating them into your code, because Istio comes with a proxy that sits next to your service. It is called a sidecar, a sidecar proxy. Istio automatically injects these proxies next to each microservice, and they handle the load balancing, routing, security between the microservices, circuit breaking, and all those capabilities. This particular proxy is called Envoy. It came from Lyft. It can handle two million requests per second. It is battle tested. That is part of Istio. Then Istio Pilot is the configuration engine where you define all the rules, and it automatically deploys those rules to each microservice running inside the containers in the Kubernetes environment, okay?

Now think about service-to-service communication. How do you secure it? What transport-level security are you aware of when you expose services? OAuth is more at the API level; at the transport level? SSL, right, TLS. But what about the security between the microservices within the service mesh? How do you manage the certificates? How do you renew them? If you have 10 microservices running in 10 containers across 10 applications, how do you secure the connections between them, and how do you manage all of that? Istio provides this capability out of the box: it can create the certificates, secure the endpoints, and whenever a certificate expires, it renews it. Istio manages that.
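As a rough illustration of what that configuration looks like in practice: in recent Istio releases (1.5 and later; the 1.1.x line used later in this demo expressed the same idea through a MeshPolicy resource instead), mesh-wide mutual TLS can be enforced with a single resource like the sketch below. Treat it as a hedged example of the concept, not the exact resource from the talk.

    # A minimal sketch, assuming Istio 1.5+ where PeerAuthentication
    # replaced the older MeshPolicy; the demo itself runs Istio 1.1.7.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # the root namespace makes this mesh-wide
    spec:
      mtls:
        mode: STRICT            # every service-to-service call must use mTLS
    # Citadel (istiod in later versions) issues and rotates the certificates
    # automatically, which is the renewal behavior described above.

Applied with kubectl apply -f, this is all it takes; the certificate issuance and rotation happen behind the scenes.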
Istio also has a plugin concept, where you can intercept the traffic and execute policies, so that you don't need to put all this code inside your microservice. If you put all this code inside your microservice, it becomes a monolith, okay? So this is how the whole Istio architecture works. If you have a microservice running inside a container in a Kubernetes pod, and another microservice next to it, Pilot pushes the rules to these Envoy proxies, and they manage the traffic: load balancing, splitting the traffic, and all those things.

So: I have microservices. Do I need API management? The answer is yes or no. Why? I already have APIs; why do I need API management? Anyone? Throttling, that kind of thing. Yeah, absolutely. What else? You won't always have just one, right? You will eventually grow. Yeah, exactly.

Let me again take the example of Google Maps. Can anybody guess how many integrations Google Maps has: Uber, Grab, Gojek, everyone who uses Google Maps? Can anybody guess? Thousands? Millions? Ten million. Ten million integrations, right? Google exposes these services, so how do all these external parties consume the APIs? How do they find the API documentation? How do they test? How do they ask for access? How do they understand these services? Discovery, right? How do you discover them? These things fall into the API management space. You have APIs, but how do you manage them, scale them, analyze them, monetize them, package them? API management plays a very critical role there, okay?

Let me talk about the digital value chain. Today, most developers and IT teams in a company focus only on the first half, the far left: I have my back-end systems, I have an API team, they are building APIs, they are exposing APIs. But the real problem is: how do you make them consumable to partners and external people? How do I package them? How do I onboard partners? How do I share the services with them? That matters even within the company. You have different lines of business; how do they find these services when they are building applications? How do they ask for access? How do you bridge this gap, starting from the back end, to the API teams building APIs, to the application developers who access those APIs, build applications, and give them to end users?

Today, most teams focus on exposing APIs: let me build a microservice, deploy it on Kubernetes, and expose it. But how do I package these things, onboard my partners, and share them? That is where API management comes into the picture. For example, an API catalog and discovery. Today you go to developers.google.com and you find the Maps API. How about the same thing for your company, for your APIs? How many of you have a developer portal in your company where you can access the APIs? Three, four hands. How many of you onboard partners through a developer portal, just like Google Maps, and share services within five minutes? Two hands. How many of you actually analyze which partner is accessing which API, through which app, talking to which back end, and how many API calls are coming in?
Do you have this data at hand? Again, only a few hands. Now think about the same thing for your internal teams and internal applications: how do you gain visibility? That is where API management comes into the picture. So Istio, the microservices management platform, solves these problems at the microservices level, and API management solves these problems at the API product level: onboarding a partner, providing a catalog, all those things. When we talk about microservices creating APIs, we are only talking about develop and secure. But we need to connect the other dots as well: how do you package these APIs, publish them, scale them, monitor them, analyze them, and monetize them, just like Google Maps? API management plays a very critical role there, okay? There are different patterns here; I won't go deep into them. At a high level, Istio addresses this set of problems, and API management addresses that set.

So let's go to the demo. I know slides are boring most of the time for technical developers. Let's look at the demo and see how the story comes together: Kubernetes, Istio, and API management, okay? Any questions on what you've heard so far?

(Audience) Speaking from an on-premises perspective, not from a public cloud provider side: earlier presentations talked about Ingress Controllers. If I am thinking of exposing my service to the outside world, I have several options, from the Istio Gateway to Kubernetes Ingress Controllers, and now with API management we have API gateways. When it comes to choosing between these three, what factors should I keep in mind?

Right. Obviously if you are building microservices, you will use Kubernetes and Docker to manage the containers where the microservices run. Once your containers are up and running, if you have many containers and many microservices and you want to manage them, you will use Istio. And then, great, you are managing the microservices with Istio, but you still need to expose them to developers and package them. That is where the API management components come into the picture. Apigee is one of them; there are also a lot of open-source options in the market, like Kong and others. So you need to connect all these dots: not just managing the microservices, not just building on Kubernetes, but also how you expose them. There is a variety of options available, and depending on your needs you pick and choose. But these are strong choices: for managing containers, Kubernetes is the best one. For managing microservices, Istio is the best one, because it is open source and battle tested inside Google, Lyft, and IBM; the components came from those companies, and they are already using them in production. And when it comes to API management, again, there is a lot available in the market, but if you are looking at an enterprise option, Apigee is the leader in the API management space, and it is also from Google. Anything specific? Okay, great. Any other questions before we go to the demo?
So we're going to use Google Cloud today to see how everything works together. What we're going to do is launch a Kubernetes cluster in Google Cloud and deploy a sample bookstore application. That sample application has different microservices, for example product catalog, reviews, and details; let's say there are three microservices running inside this particular cluster. So I'm going to go to Google Cloud and launch a cluster, using the Google Cloud console. This cluster will have multiple VMs, virtual machines, which you can scale on demand, and your applications, the microservices, run inside these machines, managed by Kubernetes.

This is the Google Cloud console; let me zoom in a little. As you can see here, I'm launching a cluster called istio-cluster, with four nodes, four machines, in the Singapore data center. I'm just running a command to launch these virtual machines. It will take a few seconds. There you go. If I go to clusters and refresh, we'll start seeing it. It will take a couple of seconds to launch. You can do this through the UI as well, and if you're a developer comfortable with the command line, you can just go to the console and execute a command, which talks to the Kubernetes Engine API and launches the cluster. There you go, you can see the Istio cluster is getting created: four nodes, each with four cores and 15 GB of memory.

Once the cluster is created, we're going to launch the sample application in it and install Istio. At the end of the demo we'll see everything in a nice visualization: how you can trace and monitor the microservices running inside this cluster. It's going to take a little while, so in the meantime maybe I can take questions related to API management or Istio.

(Audience) If Istio is already doing traffic management by itself, could traffic go from the customer straight to the service instead of through the API gateway, as Istio gets more popular? Or is there a problem with capacity, something like that?

So Istio today has traffic management capabilities: traffic splitting, load balancing, and all those things. But if you're looking at specific capabilities like throttling or rate limiting, you have to do that at the API management level, not in Istio, at this point in time, okay? Istio also has a plugin and adapter concept, and there are some plugins written by developers. But generally you decouple those concerns and handle them at the API management level, not at the Istio level. Maybe in the future it will come. Any other questions? Yep.

(Audience) There is a difference, as I understand it, in how Google views service mesh and Istio. Istio is an open-source product; the service mesh offering is intended for GKE and is more commercial. Is there a functional difference between the two?

Okay. I haven't heard of a product called service mesh; it's a concept, right?
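For readers following along, the command-line route being described looks roughly like this. The cluster name, node count, and region match what is shown on screen; the machine type is an assumption inferred from "four cores and 15 GB memory", and your project and zone will differ.

    # A sketch of launching the demo cluster with the gcloud CLI.
    # n1-standard-4 (4 vCPUs, 15 GB) is an assumption based on the
    # specs mentioned; asia-southeast1 is the Singapore region.
    gcloud container clusters create istio-cluster \
        --num-nodes=4 \
        --machine-type=n1-standard-4 \
        --zone=asia-southeast1-a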
If you have microservices running across multiple clusters of nodes, each microservice will be talking to other microservices, and that network is called a service mesh. How you manage that service mesh is where Istio plays. Also, Google is coming up with a managed Istio as part of Anthos. If you've heard about Google Cloud Anthos: Istio is part of Anthos, on top of Kubernetes. Anthos is a hybrid cloud application management platform where, if you have applications or microservices running on Kubernetes, Istio sits on top, managing those microservices and helping expose them, okay?

The second question is: what's the overhead of the sidecars in terms of latency and memory? That's a great question. I mentioned that the Istio Envoy proxy can handle two million requests per second. The overhead is on the order of a few milliseconds, not more than that, because it is very, very lightweight and it is battle tested at two million requests per second. But I don't know the exact numbers; maybe I can share them with you later, okay?

(Audience) Does it require one more set of servers? Which one? The Istio implementation. Istio actually runs on top of Kubernetes. All the Istio components I talked about, for example Mixer, Pilot, Citadel, and so on, are services running inside Kubernetes. You will see that when I install Istio: if I execute, for example, kubectl get pods, you can see all these Istio services running inside Kubernetes. Any other questions?

So we have the cluster ready. We created the cluster; now let me get the credentials so I can interact with it. We got the credentials; let me bind them. So we have the Kubernetes cluster ready. GKE is a managed service from Google that creates the Kubernetes cluster and has everything ready for you, so you can start running applications inside it. It's almost done. Yeah, there you go.

Now we're going to install Istio, using a simple curl. We're just downloading the Istio 1.1.7 version. (Audience) Is there an option to install Istio built into GKE? Currently it's not available, so we're still using the open-source install, but as part of Anthos it's going to come as a prepackaged solution, okay?

There you go. Let me install Istio. As you can see, Istio ships with multiple Kubernetes configuration files. What we're doing is just looping over them and running kubectl apply, which installs everything and runs Istio inside the Kubernetes cluster. Okay, I think I forgot to run the cd command. Yeah, there you go. If you look at the different services, they should be starting up. Maybe time for questions. Yes, yes.

Okay, now we're going to install our sample application, the bookstore application. It's going to take a couple of seconds for the pods to be created and the application to come online. As you can see, earlier there was a talk about pods: you can see the pods, where the containers will actually be running, and you can see the different services. For example, we talked about Istio Pilot; that is actually running inside the Kubernetes cluster.
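As an aside for anyone reproducing this: the steps narrated above, fetching credentials, downloading Istio 1.1.7, and looping kubectl apply over its manifests, map to commands roughly like these. The file paths follow the standard open-source Istio 1.1 release layout and may differ in other versions.

    # Point kubectl at the new GKE cluster.
    gcloud container clusters get-credentials istio-cluster --zone=asia-southeast1-a

    # Download the Istio 1.1.7 release.
    curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.7 sh -
    cd istio-1.1.7

    # Install the CRDs, then the demo profile, by looping kubectl apply
    # over the shipped Kubernetes configuration files.
    for f in install/kubernetes/helm/istio-init/files/crd*.yaml; do
        kubectl apply -f "$f"
    done
    kubectl apply -f install/kubernetes/istio-demo.yaml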
For example, things like tracing, and the Ingress Gateway, which is the load balancer for Istio: these are all services running inside the Kubernetes cluster itself. Okay, it's going to take a few seconds for everything to come online. In the meantime, we'll execute some commands. As you can see, these are the different services running inside the cluster. And let's look at the pods. Yeah, you can see the pods getting started; the applications run inside these pods. Okay, we're going to apply some namespaces.

These are the services. As you can see, this particular bookstore application has a product page, ratings, reviews, and details. These are the four different microservices that make up the bookstore application. Okay, now let's look at the pods where they will be running. It's going to take some time for the pods to initialize, so I'll take some questions if you have any. Any questions? Yes.

(Audience) When you deploy these services, how does it decide which node each service lands on? You have four nodes. Is the placement managed internally by some logic, or does it just put everything on one node?

So Kubernetes manages this. You define, for a particular microservice, how many replicas you want to create, and you define that as part of the YAML file, the configuration file. Kubernetes picks up these configuration files automatically.

(Audience) And if a node goes down? Yes. So obviously there are multiple nodes; we have seen there are four. Each node has multiple pods where your services are running and replicated. Even if one node completely goes down, you have another node.

(Audience) Is Istio also running inside these particular nodes? Which one? Okay, so that placement is managed automatically by Kubernetes; you do not need to manage it manually. Think about having many applications and many nodes: you do not want to manage that by hand. The Kubernetes replication controller and scheduler take care of taking these applications and deploying them onto the nodes. Even if one node goes down, based on the replication factor, Kubernetes automatically finds other nodes with capacity, launches the pods, and deploys the containers inside them. Kubernetes takes care of it automatically. Okay, any other questions?

So we're just waiting for the pods to come up. As you can see, this particular microservice is running on two pods, and this one is again running on two pods. Even if one goes down, the same service is running in another pod. We're just waiting for the reviews service to come online. Also, you can see there are different versions: for reviews there are three versions, for ratings there is a v1, and for details there is a v1. So there are different versions of the microservices running inside this cluster.

(Audience) So it's not like the main container and the Istio container? Yeah, even the Istio services are running inside containers. We'll see all of them. Okay, all the pods are online.
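The replica and rescheduling behavior described in that Q&A is exactly what a Kubernetes Deployment declares. A minimal sketch; the service name and image below are hypothetical, not taken from the actual bookstore manifests:

    # A minimal Deployment sketch; "reviews" and the image are illustrative.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: reviews-v1
    spec:
      replicas: 2                 # the scheduler keeps two pods running,
      selector:                   # replacing them on healthy nodes if a
        matchLabels:              # node goes down
          app: reviews
          version: v1
      template:
        metadata:
          labels:
            app: reviews
            version: v1
        spec:
          containers:
          - name: reviews
            image: example/reviews:v1   # hypothetical image name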
Now we are going to test our application, and we are going to do it directly, by accessing the application from inside the cluster. One second, let me bring up the shell console separately. As you can see, here are the different workloads: your application, ratings and reviews, and also the Istio services within the same cluster; even the Istio services run inside the Kubernetes cluster.

What I am doing now is directly executing a command inside the cluster to fetch the product page, just to get its title. Right now the application is not exposed to the Internet; it is only reachable within the cluster. When I run this command you should see, yeah, there you go: Sample Bookstore, the HTML title. But we are going to expose this application to the Internet and access it from the UI, okay? Let's do that. Okay, one second, something is wrong with my machine. Let me try one more time. Just give me a minute. Okay.

So we are going to expose this application to the Internet, and then we are going to apply some routing rules. For example: 50% of my traffic I want to send to v1, and 50% to v2. You are going to see all of that. So let's get the ports and the IP addresses and expose it to the Internet. We are going to get the ingress port of the Istio gateway, then build the gateway URL and echo it; you can see a sketch of these commands a little further below. Now let's access our application from the Internet. This is the IP address, and we access /productpage.

As you can see, this is my bookstore application. I have different microservices coming together as one single application: the book details, the reviews, and the ratings. These are the different microservices that are running. You might have seen, when we launched the application, that there are different versions of the microservices: there is a product page v1 and a details v1, but for reviews we have multiple versions, v1, v2, v3. If I keep refreshing the page, you will see the reviews section changing: one version has no ratings, and the other two render the ratings differently. So right now the traffic is being spread across the versions, which is why you see the ratings changing.

Let's apply some rules using Istio and see how that affects the application. Before we do that, let's look at a visualization of how this cluster is running. Let me go back to my application, apply some rules, and look at the destination rule. What does it look like today? As you can see in this YAML file, we are routing to the various reviews versions: v1, v2, v3. We are going to change this a little and see how it affects the application. But first, let's install some plugins and see a visualization of what the cluster looks like. I am going to install these plugins to monitor, trace, and visualize the entire cluster and understand the topology of the microservices that are running.
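As promised, the "get the ports and IP addresses" step corresponds to commands along these lines, following the usual Istio sample flow; the service and port names assume a stock Istio install with a LoadBalancer ingress gateway.

    # A sketch of deriving the gateway URL for a stock Istio install.
    export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
        -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
    export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

    # Fetch the page title through the gateway, as done in the demo.
    curl -s http://$GATEWAY_URL/productpage | grep -o "<title>.*</title>"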
The next one I am going to bring up is the Kiali console, which gives a visualization of all these microservices; it is very interesting to see. So let me do that. Let me copy these configurations, go back to the cluster, and apply these rules. Great. We will set the namespace and set the username and password for this console; let's say admin and admin. Now let's access this visualization tool; this is the URL. I am going to log in to the console and we will see what it looks like. With this setup, the Istio sidecar proxies intercept all the traffic, and the Kiali console uses that data to visualize what the deployment topology looks like.

Let's look at our bookstore application. As you can see, you have the product page microservice with a v1 version, and it talks to the details microservice, which has a v1 version. From the same product page you also access the reviews service, which has three versions, v1, v2, v3, and that talks to the ratings service, which renders the rating widget. Let's change some rules and see how it looks. First, let me make a few calls and watch the visualization of how the traffic flows: the gateway URL, :80/productpage. Let me access a few pages and see how it behaves. Yeah, there you go. Now you can see how the traffic flows: from the product page v1 it goes to details v1, and to reviews v1, v2, and v3, and on to ratings v1, the rating widget. That is why, when I refresh multiple times, one call goes to v1, one to v2, one to v3. In the Kiali console dashboard you can visualize how traffic flows through the different microservices.

This is very helpful, because when you deploy microservices, and you think about large applications with many microservices, you want to understand how the traffic is flowing. What happened here is that the sidecar proxy next to each and every microservice intercepts the traffic and sends the data to this visualization tool.

Now let's apply some rules and change the traffic behavior. For example, you have a v1 version and you are launching a v2 version. You don't want to send all the traffic to v2; you don't know yet whether it will work. Probably you send only 10% of the traffic to v2 and 90% to v1, you test, and you watch how that traffic flows through the different microservices. So let's do that: let's apply some rules, using kubectl, to the microservices managed by Istio. Right now I am going to route all the traffic to v1 only. Let me go back to my deployment and run this particular rule. Let me reconnect; it got disconnected. Let me show you the rule; let me cat the file. As you can see here, I am going to route only to v1. Earlier we were routing to v1, v2, and v3; now we route only to v1. I am going to apply this rule using the kubectl apply command.
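The rule being applied here is an Istio VirtualService. A sketch of the route-everything-to-v1 variant, modeled on the public sample files; the 90/10 canary split mentioned a moment ago only changes the route block, as noted in the comments:

    # A sketch of "send all traffic to v1", modeled on the public samples.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1          # subsets v1/v2/v3 come from a DestinationRule
    # For a 90/10 canary instead, list two destinations under route,
    # with "weight: 90" on v1 and "weight: 10" on v2.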
So what happens is that Istio Pilot takes this rule and automatically propagates it to the Envoy proxies sitting next to the microservices, which intercept all the traffic and apply the rule. Think about doing this manually: you don't know where the service is running or on how many machines, how it is replicated, how you would go change it, or how you would deploy the change without affecting runtime traffic. So, great, our rule is deployed now. You will see only v1, where there is no ratings widget, even if I keep refreshing. You can also see the same thing in the Kiali console: if I refresh, you can see the traffic going only to v1. Yeah, there you go. Now you see the green lines going only to v1; earlier it was v1, v2, v3. You can now understand and visualize the entire service mesh. With these tools and plugins, when you deploy microservices, this is how you get visibility into what is going on as you make changes.

That was one simple rule; let's look at a few different rules as well. Now I want to route based on user identity. I have this particular rule; let me copy and show it. Let's say if a particular user logs in, you want to serve them v2; otherwise, you serve v1. You can define all these rules and deploy them using Istio, which propagates them and helps you test before you actually do releases. In the interest of time I will skip ahead, but you can also do, for example, routing based on user identity, or injecting an HTTP delay fault: you inject a delay into a particular microservice and see how the other microservices are affected. You can do all of this in Istio without going and coding it into the individual microservices; a sketch of both of these rules appears at the end of this Q&A.

So that's all I have today in terms of the demo. We haven't covered API management and how API analytics and monetization work; if you are interested in that, you can reach out to me, and I'm happy to share all of it. Any questions on what you have seen? Yes.

(Audience question about tracing.) So there is Zipkin; there is a plugin. Once you enable that plugin, you can also trace what is happening, and you can see the logs and metrics. Sorry? No, it's a different UI; these are different plugins you can use. You can use, for example, Grafana and Prometheus for monitoring and alerts, and that can automatically be injected into each sidecar proxy, so you don't need to manually code it inside the microservice.

(Audience) The sidecar proxy is getting bigger and bigger. That's true, but these are very, very small rules you are defining. They are rules, not application code; you are not actually writing code. It's all a configuration-driven approach. If you start writing code, you will have complications in scaling it, managing it, latencies, and all those things. You are using the Istio platform to define rules which propagate automatically, and, like I said, this platform is battle tested inside Google and Lyft. It was developed to absorb all these complications instead of you writing the code, because everything here is configuration driven, so you don't need to worry about performance and scaling; the platform takes care of it.
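As promised above, here is roughly what those two rules, routing by user identity and injecting an HTTP delay fault, look like in Istio's YAML. This is a sketch modeled on the public samples; the end-user value "jason" and the 7-second delay are illustrative, not taken from this demo.

    # Sketch 1: a logged-in user "jason" is routed to reviews v2; everyone
    # else stays on v1. The header name and value are illustrative.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - match:
        - headers:
            end-user:
              exact: jason
        route:
        - destination:
            host: reviews
            subset: v2
      - route:
        - destination:
            host: reviews
            subset: v1

    # Sketch 2: inject a fixed delay into the ratings service to observe
    # how the services that depend on it behave.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - fault:
          delay:
            percentage:
              value: 100.0     # apply to all matched traffic
            fixedDelay: 7s     # illustrative delay duration
        route:
        - destination:
            host: ratings
            subset: v1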
(Audience) Is there any documentation around securing Istio? Yeah, like I said, Istio is open source. If you go to istio.io, you can find all the details, including these installation steps and everything else. Any other questions? Yeah. (Audience) Is circuit breaking a feature? Yes. Let's take the rest of it offline. Sorry. Yeah, no problem.

Thanks again to the speakers, Daniel, Anil, and Anto. Google very kindly offered t-shirts for everyone, so everyone here should be able to get one; they're at the table. Please be mindful and take just one at this point in time. Thanks for coming. Grab some swag. We don't want any stickers left over tonight, so please take them. But t-shirts, just one each, please. Thank you.