Welcome. Yeah, thanks. How are you? I'm good, how are you guys? Fantastic. What is this background you're using today? It's not my home, it's a virtual background. Oh, I wish it was my home as well. So, I'm wearing this t-shirt, and that's the topic for the day, right? We're going to teach everyone on the call, and the viewers on the YouTube recording after the event, what Istio is: how to use it, when to use it, what to use, and whether it's only a fancy technology or something I can really integrate into Kubernetes. Yes, definitely. It will start very basic, but it will go into the deep technical stuff I'm throwing at the audience.

Alright, fantastic, and thanks a lot for joining in. So, folks, Apoorva is joining us from ThoughtWorks, where he is a senior infrastructure consultant. He lives in Pune and does a lot of work around cloud native and Kubernetes, CI/CD, observability, and service mesh: Istio 101, 201, or 301, I would say. So, Apoorva, walk us through the wild, wild weeds of service mesh and Istio. The floor is all yours.

Yeah, I'm just sharing my screen. Yes, we can see your screen. Just a second, I'm switching the tab. Okay. So today we'll look at Service Mesh 101. For the next 30 minutes I'm going to bombard you with a lot of technical concepts. If you're thinking from the title that this is only the basic stuff available on the internet, it's not; we'll go deep inside the service mesh and how it runs behind the scenes. If you already know some of this, that's fine; if not, I'm going to explain each and everything.
I'm starting from scratch with how traffic flows inside a Kubernetes cluster, and then I'll compare the same flow with a service mesh, so you get an understanding of how a service mesh works. Before starting the session, I hope everyone has some idea about Kubernetes, Services, and how networking works in Kubernetes, so that these concepts are easy to follow. I'm already done with my introduction, so I'm not wasting time here; let's move on to the next slide.

Here's the agenda. First an introduction to microservices; I won't spend much time there. Then what a service mesh is, then basic networking in Kubernetes, then an introduction to the Istio service mesh and the offerings you get from Istio. Then we'll see traffic routing with the help of Istio, and finally basic Kubernetes networking versus Istio service mesh networking, so you get a deep insight into how traffic flows inside the cluster. Let's get started.

In today's era, I'll assume everyone knows what microservices are and how they work. Basically, they are loosely coupled, run independently, and are language independent, which gives you a lot of freedom. Unlike a monolith, each service can be owned by a small team, and you get a lot of resiliency and durability. Take Amazon.com as an example: you go to purchase something, the payment system happens to be down, but the site is still working. You can call that a microservice type of architecture.
Those are some of the benefits of microservices. Now let's move on to the microservice architecture. In the slide you can see multiple microservices running, and a request coming in to an application stack that uses Python, Java, Ruby, and Node.js. I'm not restricting anyone to a particular language; it's totally language independent, and the services are loosely coupled. I can send traffic to multiple applications, those applications process the data, and I get the results. Here, the traffic is first served by the Python application; the Python application calls the Java service over its API, then Ruby, and the processed data is sent on to the Node.js application. Those are some of the benefits of using microservices rather than going monolithic.

Now let's dive into Services in Kubernetes. There is a lot on this slide, but don't worry, I'll explain what all of it is. On the top left you can see ClusterIP, on the top right NodePort, and at the bottom middle LoadBalancer. There is one more Service type I haven't shown, which is ExternalName. For ClusterIP, this is the top view of how traffic flows inside the cluster. Consider hostA and hostB as two worker nodes running in my cluster, and I have deployed an NGINX application on them. When traffic flows in, it comes onto the pod network, it hits kube-proxy, and kube-proxy sends that traffic to the pod.
You can think of kube-proxy as a bunch of iptables rules that get pre-configured whenever you deploy a Service, and that is how traffic flows from Service to Service. The default Service type in Kubernetes is ClusterIP: whenever you expose a Service without mentioning a type, you get a ClusterIP, reachable only within the cluster.

On the right-hand side is NodePort, and the diagram shows how you access an application from outside. Say I have a UI-based application that I want to access outside the cluster; I expose it on a NodePort. The access side is the user, and for them I am opening a port on the node, here 30001. From 30001 the traffic goes to kube-proxy, and kube-proxy takes it to my pod. So a NodePort simply opens a port on the node, that port connects to a ClusterIP, and the ClusterIP sends the traffic to the pod.

The same goes for LoadBalancer in the bottom image. Again the access side is the user. When they try to access the application, they have the DNS name or the IP of my load balancer, and the traffic goes from the load balancer to the NodePort, from the NodePort to the ClusterIP, and so on. That is the basic functionality that runs inside the cluster. I won't deep-dive into how the iptables rules are maintained by kube-proxy, because that is not part of this session.
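To make the three Service types just described concrete, here is a minimal sketch of what the manifests might look like; the `nginx` app label and the port numbers are illustrative placeholders, not taken from the talk.

```yaml
# ClusterIP (the default): reachable only inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  selector:
    app: nginx          # matches the pods' labels
  ports:
  - port: 80            # the Service (cluster) port
    targetPort: 80      # the container port
---
# NodePort: additionally opens a port on every node.
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001     # must fall in the 30000-32767 range by default
---
# LoadBalancer: asks the cloud provider for an external LB
# that forwards to a NodePort behind the scenes.
apiVersion: v1
kind: Service
metadata:
  name: nginx-public
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

With the last one applied on a managed cluster such as EKS or GKE, the cloud provider provisions the external load balancer automatically.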
On the next slide is the top view of any Service I deploy into the cluster; look at the diagram on the far left. Imagine the top view of my Kubernetes cluster: NGINX pods are running on my nodes, and I am accessing the application from outside, so you can picture how traffic flows from my load balancer to my pod. First it is served by the load balancer, the external one; here I am assuming a managed Kubernetes offering from a cloud provider, like EKS or GKE. Once I expose the Service, anyone accessing my application hits the DNS name or the load balancer IP first, and from there the traffic flows internally: from the external load balancer down to my nodes, which are grouped together at the load balancer level, and then inside my cluster. Even though I am exposing a port on each node, you don't have to worry, because the load balancer takes care of grouping those nodes.

The point I want to make is this: Services in Kubernetes are implemented by the kube-proxy component, which runs on every node and creates the iptables rules that redirect requests to your pods. Hence Services are nothing else than iptables rules; all that traffic coming from outside down to the pod is plain IP-to-IP routing through those rules. This is also why, even if the master or control-plane node goes down, traffic keeps flowing inside your worker nodes: all the rules live on the worker nodes, not on the control plane.
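The "Services are nothing else than iptables rules" idea can be illustrated with a toy simulation. This is not kube-proxy's actual code, and the endpoint IPs are made up; the point is that the iptables rules kube-proxy installs pick one backend pod per connection, roughly uniformly at random.

```python
import random

# Hypothetical pod endpoints sitting behind one ClusterIP Service.
ENDPOINTS = ["10.244.1.5:80", "10.244.2.7:80", "10.244.2.9:80"]

def pick_backend(endpoints):
    """Mimic kube-proxy's iptables DNAT: each new connection is sent
    to one backend chosen (roughly) uniformly at random."""
    return random.choice(endpoints)

# Route 1000 simulated connections and count where they land.
counts = {ep: 0 for ep in ENDPOINTS}
for _ in range(1000):
    counts[pick_backend(ENDPOINTS)] += 1

print(counts)
```

Over many connections the counts come out roughly equal, which is exactly the crude per-connection load balancing a ClusterIP Service gives you.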
That is what normal traffic looks like in a Kubernetes cluster. In the middle diagram, as I explained earlier, if I expose an application via a LoadBalancer, behind the scenes the load balancer creates a NodePort, the NodePort points at a ClusterIP, and that is how the traffic flows. On the far right there are two examples: one with traffic coming from a load balancer, the other coming through an Ingress. With Ingress, on top of the load balancer I add one more layer where I define the Ingress specification, the Ingress rules, and according to those rules the traffic reaches your actual pod. From the diagram you can see that LoadBalancer and Ingress are almost the same; we are just adding something on top of the load balancer. The ingress controller sends the traffic to kube-proxy, kube-proxy has the iptables rules that reside on your node, and the node forwards the traffic to your pod according to those rules. So I hope you now have a good understanding of how traffic flows inside the cluster with a Service, and with Ingress too.

Now some insight into the service mesh landscape. For those unaware of which service meshes are available in the market: there is Istio, Linkerd, Consul, the AWS-native one called App Mesh, and Kong, and under the hood most of this happens via Envoy; you can hardly talk about service mesh without Envoy. These are the main offerings available in the market.
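For reference, a minimal Ingress rule of the kind described might look like this; the hostname and Service name are hypothetical.

```yaml
# A minimal Ingress: the ingress controller (e.g. ingress-nginx)
# routes HTTP requests for shop.example.com to the Service below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-internal   # a ClusterIP Service; kube-proxy
            port:                  # still does the final pod routing
              number: 80
```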
Before jumping into the service mesh itself, I want to compare. A lot of people might have seen this image, where Ingress handles load balancing, SSL termination, and virtual hosting only. With Istio, the slide mentions only a few points, but there are a lot more things you can handle, and we will see them one by one. This should give you an idea of when to use Ingress, when to use Istio, and when to use an API gateway: if your application is API-based and you want to play around at the API level, then you definitely need to think about an API gateway. Those are the slight differences between Istio and an API gateway.

Now to the topic everyone is excited about: Istio. This May, Istio completed four years, its fourth birthday, so it has been in the market for more than four years now. You can imagine the amount of traffic Istio is capable of handling and how mature it is; it's not something that showed up one or two years back, and next May it will complete five years. There are many contributors, and anyone who wants to contribute to Istio can go raise a pull request.

Let's jump to the offerings you get if you integrate Istio with your application. First: intelligent routing, resiliency, security and policy, and telemetry. I'm not going to explain every bullet, but capture the bold points: if I want dynamic routing configuration, say sending 50% or 90% of the traffic to service B, I can do that with Istio.
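That 50/90-style dynamic routing is expressed declaratively in Istio. A hedged sketch, assuming a service named `reviews` with two versions distinguished by a `version` pod label:

```yaml
# Send 90% of traffic to v1 of the (hypothetical) reviews service
# and 10% to v2, with no application code changes.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# The subsets are defined by a DestinationRule on pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Shifting the rollout forward is then just editing the two weights and re-applying.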
If I want to do a rollout-type deployment, I can also handle that with Istio: A/B testing, canary, blue-green, any type of deployment. We'll see some of this in the upcoming slides. Then resiliency, where I don't have to do anything at the coding level: circuit breakers, retries, health checks, and timeouts are all handled by Istio, so you don't have to worry about the code or change anything in it.

The next point is security and policy. That is one of the big constraints in any domain, not just BFSI or similar regulated domains; you need to think about security first. Istio offers mutual TLS (mTLS), which you can enable between services so that all traffic flows encrypted, never over plain HTTP. Encryption at the traffic level is something we can do with Istio.

Then telemetry, and we'll see in the next couple of slides how it is captured. Between any two services a lot of traffic is flowing, and we can capture that traffic with Istio. There are different offerings: a Kiali dashboard, or if I want to trace the traffic I can use something like Zipkin. Whether it is service A calling service B, or a user outside the cluster accessing a service inside it, with telemetry I can trace the request from start to end. Apart from this, what other value is Istio adding to my existing setup?
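The retry, timeout, and circuit-breaker behaviour just mentioned is likewise configured outside the application code. A sketch, again with a hypothetical `ratings` service:

```yaml
# Resiliency without touching application code:
# retries and a timeout on the calling side...
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
    timeout: 5s            # fail the call if it takes longer
    retries:
      attempts: 3          # retry up to 3 times
      perTryTimeout: 2s
---
# ...and circuit breaking on the serving side: eject a pod that
# keeps returning 5xx errors from the load-balancing pool.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```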
Again, secure communication, as I said. And one thing I want to stress: Istio does not rely on L3 routing, the native traffic routing in Kubernetes; it relies on the L7 layer. All the traffic flowing inside the cluster is handled at L7, not L3 or L4. So imagine working at the application layer: before the traffic lands on my pod I can authenticate the API, do authorization, or filter the API. Apart from that I can secure the traffic with mTLS and encrypt external traffic.

Then there is service-level observability. All the golden signals flowing pod to pod, service to service, or outside to inside get collected, and you can view them beautifully on a dashboard. In the next slide we'll see how the dashboard looks; it's a basic capture, but you can see everything being collected. Then traffic management and operational agility, which I'll explain in a couple of slides: how traffic flows and how you can distribute and control it inside the cluster. These are some of the golden signals I can capture, the overall traffic and the service-to-service traffic, with no extra instrumentation: once Istio is deployed in the cluster, the traffic gets captured and shown on the dashboard.
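Locking traffic to mTLS, as described, is a one-resource change. A sketch, assuming a namespace called `prod`:

```yaml
# Require mTLS for all workloads in the (hypothetical) prod
# namespace: plain-text traffic to these pods is rejected, and
# sidecar-to-sidecar traffic is automatically encrypted.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT
```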
This is what I was talking about with operational agility. Consider I have an application and I'm doing a rollout. With a canary deployment I can distribute the traffic so that only 5% goes to my old application and 95% to the new one, or whatever split I choose. Or consider the lower diagram, where I have two kinds of users, Apple users and Android users, and my Android users outnumber the Apple users. I can still do a canary-type deployment splitting the traffic, deploy both versions at the same time, scale those versions independently, and roll features out version by version; all of that is possible with the help of Istio. Coming back to security, you can enable mTLS encryption, do authorization based on whatever your organization requires, and configure gRPC-level access control for your gRPC APIs.
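The Apple-versus-Android split can be done with header-based matching. A sketch with a hypothetical `storefront` service and an illustrative User-Agent regex:

```yaml
# Content-based steering: route Android clients to v2,
# everyone else to v1 (the User-Agent regex is illustrative only).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: storefront
spec:
  hosts:
  - storefront
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Android.*"
    route:
    - destination:
        host: storefront
        subset: v2
  - route:                 # default route for all other clients
    - destination:
        host: storefront
        subset: v1
```

As with the weighted example, the subsets would be defined by a DestinationRule on pod labels.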
Now let's come down to the architecture of Istio. Before that, how it works. The first and most important component in Istio is the sidecar. The sidecar gets deployed alongside your workload. In Kubernetes you can consider kube-proxy the main traffic component; in the same way, the Envoy proxy, also called the Istio proxy, is the proxy that serves the traffic and forwards it to your pod. All the policies I talked about, like filtering APIs or anything else I want to enforce, I enforce in this sidecar, and the sidecar takes care of it. If application A wants to talk to application B, it is their sidecars, the Istio proxies, that talk to each other, and from that I can pull telemetry reports and see exactly what traffic is flowing from service A to service B. And here is the most important thing: I don't have to worry about embedding client libraries. There is nothing to integrate inside my application, because the sidecar runs completely outside it.

Looking at the flow inside Envoy: a listener accepts the traffic, the routes decide where to send it, a route points at a cluster, and at the end the endpoints are what receive the traffic. So the flow is listener to route, route to cluster, cluster to endpoint. Some basics about the Envoy proxy itself: it is very lightweight with low memory consumption, it has been battle-tested at massive scale, and it is not owned by the Istio community; it is open source, originally developed by a company called Lyft, and there are a lot of people
contributing to the Envoy proxy. Some of the contributions are also from the Istio side, like fault injection, request tracing with the help of Zipkin, and traffic routing and splitting; those came from Istio.

Now let's dig down to the real architecture of Istio, the component diagram with multiple components. From Istio 1.5 onwards, the control plane you see at the bottom, which contained Pilot, Mixer, and Citadel, got merged together into a single component called istiod. There are no more separate components: if you go and check the pods running in your cluster for the Istio control plane, you will only see istiod, and inside istiod those components run behind the scenes. Apart from that, as I said, Istio uses sidecar proxies: in the diagram my application runs along with its sidecars, the Istio proxies, and the proxies take care of the traffic and send it on to my services. So Envoy takes care of the networking and of enforcing the policies inside the proxies; Pilot pushes the service-communication policies, so all the policies you configure are handled by Pilot; and if I enable mTLS-type features, those are taken care of by Citadel. You can also see in the diagram how the traffic flows: say ingress traffic comes in from outside. Notice that it does not hit your control plane. This is the major point
that you need to consider. Much like kube-proxy, which I mentioned earlier, all the traffic here flows proxy to proxy; the control plane does not intervene in the data path. Once traffic comes inside the cluster, it flows from proxy to proxy and then out. That is how ingress traffic flows from external to an internal pod, and from the pod out to egress.

Earlier we saw how traffic flows in plain Kubernetes networking, so now you can compare that with the Istio version. In this diagram, at the load balancer level there is my user who wants to access my application, and I have integrated Istio with my application. Once traffic lands, it goes from the load balancer down to the Istio gateway, the gateway sends it to the Envoy proxy, which you can call the Istio proxy, and from there the traffic flows proxy to proxy. Say I have a two-node cluster with an NGINX pod, and I want to route traffic from NGINX to a Python pod: you can see in the diagram exactly how it flows. And if you ask me where basic Kubernetes networking has gone, that is the point: once you enable Istio on your namespace, it no longer gets involved in routing this traffic. It is not L3/L4-type routing anymore; it is now completely at the L7 layer, and whatever rules and policies I configure at the Istio level are enforced by Istio. You can see the kube-proxy component is no longer in the diagram. But if you ask me whether I can delete my existing Services, the answer is no, because Istio still uses your basic Services for its own configuration.

And again, the same kind of question as before. I already explained what happens if my
master node goes down, and the same applies here. What happens if the Istio control plane goes down? You don't need to worry, because all the traffic rules and policies reside alongside your application in the Istio proxy. The only caveat: if I spin up a new application while the control plane is down, it will come up with an Istio proxy but without those policies, because it is the control plane that injects the policies into the proxy. So from this you can compare how traffic flows with Istio and without it.

And the last thing, traffic management. With Istio you can do application rollouts with percentage-based distribution, or traffic steering with content-based routing. Then resiliency and efficiency. For resiliency, as I said, circuit breakers, retries, timeouts, and health checks are all taken care of by Istio, so I don't have to worry about what the application code contains. For efficiency, there are things like L7 load balancing, TLS offload, and HTTP/2 or gRPC proxying, again taken care of by Istio. As I said, when I use Istio networking it moves away from the L4 path; it is no longer plain IP-to-IP communication, the routing is at the L7 layer, and there I can integrate health checks and everything else. And if we're talking about TLS, Envoy uses BoringSSL, so the traffic gets encrypted with TLS as it flows inside your cluster. I hope you now have a good understanding and can compare how L7 traffic flows versus L3/L4 inside the cluster. Now, again, it depends
on your requirements and concerns, on what you want to manage in your cluster and whether you want traffic routing at L7 or not, because you are adding one more layer to the application, and that definitely slows down the transaction rate: some latency, say 300 to 500 milliseconds, will get added to your transactions. You need to think about that, take the decision accordingly, and then integrate Istio. So yeah, that's all about Istio. Thank you.

Hey Apoorva, thanks a lot for walking us through Istio and the proxies. How many times did you say the word "proxy"? Can you guess? Sorry? How many times have you spoken the word "proxy"? I'm just kidding. Yeah, because there are two proxies: one is kube-proxy, and one is the Istio proxy, which is actually Envoy. Proxies everywhere. Fantastic. I was really waiting for this session, actually. So one question from my side: how can one use Istio with OpenTelemetry? For example, someone is building a microservices infrastructure with lots of different backend systems, maybe Python, Go, whatever, and there is this project, OpenTelemetry, which can solve the metrics-related stuff for you, instead of you as a developer writing sections in your code that spit out metrics. Can Istio help there, and how does it work with the OpenTelemetry project? What's your take?

I'm not sure whether I mentioned it, but look at the diagram: my traffic goes from the ingress to my Istio proxy, and at the same time my control plane is talking to my proxies and pulling that data from them. So if I integrate a Kiali-type dashboard, Kiali can surface all the traffic coming from outside to inside, right up to my pod. You can call
it a continuous observation, or continuous tracing, of my traffic. And it is a native offering from Istio: apart from deploying Istio itself, you don't have to configure anything extra for telemetry.

Yeah, I'm looking more specifically at the APM side, the application performance monitoring stuff. Without writing any code, can the world of Istio somehow use Jaeger or other tools? Those are basic offerings from Istio, so you don't have to do anything extra. Understood. So you can view all those things on a single dashboard, whether it's a Jaeger dashboard or Kiali, depending on what type of observation you want to do. You can also integrate Zipkin, and handle things like circuit breakers or timeouts, whatever you want; I don't have to worry about which language the code is written in or write anything fancy, because Istio takes care of all those things. All right, very cool. Thanks a lot, Apoorva, for walking us through; this has been a great talk.