Good morning, good afternoon, and good evening to all of you, in whichever part of the world you are today. We will be diving into the world of service mesh and how it helps in enhancing and meeting security and compliance needs. But before we do that, a quick round of introduction. My name is Nenath Desai. I work at InfraCloud as a staff engineer in the DevOps and SRE space. At InfraCloud, I mainly help our clients handle their needs around DevOps, SRE, and platform engineering, mainly around infrastructure design and modernization using cloud native technologies like Kubernetes and the tech stack that revolves around it. As part of my regular consulting work, I happened to work on service mesh implementations for a few customers to aid their security and compliance needs, and that's how I thought I could share whatever I have learned with you all. Okay, so agenda-wise, we will discuss a range of topics that are central to the service mesh. We will explore how microservices emerged as a transformative architectural pattern and the factors that fueled their rise. Then we will delve into the critical aspect of security challenges in microservice architecture. We will discuss the complexities that come with a decentralized system and how to address them. Then we'll turn our attention to the evolution of the service mesh. I will take you through the historical context of why and how the service mesh came into existence, and its role in the overall modern microservice ecosystem. Then we'll explore the role of the service mesh in enhancing security, and moving on, we'll discuss how a service mesh aids compliance efforts as well. We'll highlight with some examples how it meets your regulatory requirements, eventually making compliance more manageable. Later, we'll explain how a service mesh platform also improves observability, making it easy to identify and troubleshoot issues. Zero trust networking is a hot topic.
And I will discuss how the service mesh aligns with the principles of zero trust. Finally, we will explore service mesh deployment considerations and best practices. I will share insights on how to make the most of a service mesh while avoiding the common pitfalls. And then we will see a small demo of service mesh capabilities in terms of how they help with authentication and authorization. So, the rise of microservices. Before the microservice architecture pattern actually gained popularity, the predominant software architecture pattern was the monolithic architecture. It was characterized by a single code base where your entire application was built as one cohesive unit, which means all the components, modules, and functionalities were bundled together and tightly coupled in a single code base. Obviously, with time spent iterating on the same thing, you normally learn a better and more efficient way of solving a problem. So over the past few years, to bypass the challenges posed by the monolith, the microservice pattern emerged, and it has gained prominence in recent years due to several compelling reasons, some of them highlighted here. First is, of course, scalability: it allows individual services to be scaled independently of other services. For example, let's say you have an e-commerce application with different microservices, like a product catalog service which handles product information, pricing, and so on; a user authentication service which is normally responsible for user login and registration; and an order management service which helps with creating and tracking customer orders. Now let's consider a scenario where your e-commerce platform experiences an increase in traffic due to a holiday sale event. During such an event, different parts of your application may experience different levels of workload.
For example, the product catalog service, due to high user demand, can experience a significant surge in requests from users' browsers, and that can put a load on this service. But thanks to the microservice nature, you can scale this service horizontally by adding more instances or containers. The user authentication service, while important from the authentication perspective, may not experience that much of a surge in traffic during a sale event, so you can keep it at a minimal scale without overprovisioning any resources. And the order management service, because people will place lots of orders, might also experience a surge in traffic, so you can scale it horizontally as and when needed. I hope this example demonstrates how each microservice can be scaled independently based on its specific load or requirements. This scalability allows you to optimize your resources quite well and maintain efficient performance during traffic variations, making sure that your application remains responsive and, at the same time, cost effective. This flexibility also extends to technology choices. Your team can now select the most suitable programming language and framework for each service. For example, for the user authentication service, maybe Node.js or Ruby for quick development and ease of handling user-related operations; for the product catalog, maybe Python or Java for their efficient data processing. The small code bases of microservices have also led to faster development and deployment cycles. In terms of resilience as well, microservices help to contain failures: if one service encounters issues, it doesn't necessarily disrupt your entire application. And it also helps from the collaboration perspective, among all the small teams working on the individual microservices.
Continuing on the same theme, testing and maintenance have also become more manageable with microservices due to the smaller code bases. The microservices pattern has also helped the adoption of containers, Kubernetes, and overall cloud native development. And microservices are a natural fit for automated CI/CD pipelines, which has resulted in rapid and frequent releases. I'm sure we all know the famous line from the Spiderman movie where Uncle Ben tells Peter that with great power comes great responsibility. Similarly, in the microservice world, pun intended, I would say it comes with some greater security challenges as well. Let me explain why. What we have seen over the years is that with the microservice architecture pattern, the attack surface has increased. Microservices expose multiple endpoints and APIs, so the potential attack surface grows, and an attacker can target a specific service. Going back to the same e-commerce application: if attackers know that the payment service is the one processing all the payments, they can target it specifically. Service-to-service communication has also become a challenge in microservice architecture. If we do not encrypt this service-to-service communication, it can damage the confidentiality, integrity, and authenticity of the data exchanged between the microservices. Without encryption, network traffic between these microservices normally flows in plain text, and that makes it more susceptible to eavesdropping: attackers can gain access to the network and intercept and inspect the data being exchanged between the services. We normally call this a passive attack. It can also lead to a man-in-the-middle attack, where an attacker intercepts the communication between these microservices by impersonating one or both of the parties. Authentication and authorization across these multiple microservices can also become complex.
We have already seen many banking scams in the past in this regard. From the data security perspective as well: as part of cloud native development, we prefer our data to be distributed across multiple microservices; take the example of a cloud storage service where user files are stored across microservices. So making sure that this data is secured both at rest and in transit is highly important in today's world. Identity and access management, too: managing user identities and access controls across this diverse set of microservices can become challenging, so it's the need of the hour to have a robust IAM solution. Code vulnerabilities as well: the smaller code bases in microservices can sometimes lead to overlooked security vulnerabilities; we have seen people push sensitive data a couple of times, sometimes as hard-coded values. Runtime security matters too: monitoring your microservices at runtime is vital, and if there is a large number of microservices, chances are you may miss out on that part as well. To continue on the same, API security: APIs are the most critical part of microservices, so properly securing those APIs, with input validation or rate limiting, is essential to prevent SQL injection or abuse-style attacks. Now, the more microservices there are, the more important it is to have a registry which contains the list of and information about all these microservices. But at the same time, if someone gets access to this registry mechanism, attackers can redirect all the requests to malicious services. So securing the service registry and discovery mechanism is also important from the microservices security perspective. Logging and monitoring as well: centralized logging and monitoring are important for detecting suspicious activities. For example, say you have a hundred or two hundred microservices.
If you do not have a centralized logging mechanism, it becomes really hard to gain insight into how your microservices are behaving. Container security as well: if you are using container-based microservices, then one needs to ensure there are properly configured isolation mechanisms to prevent container escapes or privilege escalation attacks. And as most of us are now using an orchestration platform like Kubernetes, maintaining the security of that platform is also important, as it hosts your entire application. API gateways, I have seen, are a quite widely used option, so protecting the incoming and outgoing traffic on those is also important. And we have seen that many microservices rely on third-party libraries and components, so securing them is also a big task. So how did the service mesh happen? Well, the world was in need of someone to come forward and help us with some super mesh power to securely handle and regulate this microservice world. Jokes aside, the emergence of the service mesh can be traced back to the evolution of application architecture. Initially, the three-tier web-app-plus-database style of application was the common practice. However, as applications grew in complexity, challenges arose, and companies found themselves writing custom client libraries which helped with request load balancing, circuit breaking, retries, and instrumentation as well. But these libraries posed challenges when it came to making updates, because updating them required you to restart all your services as well. To address this challenge, proxies came onto the scene. Proxies offered a solution that bypassed many of the limitations of the client libraries: unlike client libraries, proxies can be upgraded without the need to recompile and redeploy your application.
And this brought flexibility and ease of maintenance for your microservices. Later, as cloud native practices gained prominence, eventually a mesh of proxies emerged, and it introduced the concept of deploying and managing proxies separately from your application lifecycle. By creating a distributed mesh of these proxies, one was able to standardize the runtime operations. It also provided a centralized API and became a crucial component of microservices. And that's how the service mesh has emerged over the years. Okay, so how does a service mesh help with security, especially in the world of Kubernetes? From the authentication and authorization perspective, a service mesh enables strong authentication and authorization mechanisms. For example, it allows only authorized services to access sensitive data from sensitive microservices. The encryption feature of a service mesh, mTLS, helps you protect your data in transit. You can use any service mesh tool of your choice, like Linkerd or Istio. This has helped to avoid eavesdropping and man-in-the-middle attacks. From the traffic control and routing perspective, the popular service mesh solutions provide you a way to control how traffic flows between services and how load balancing happens. Their access control and circuit breaking abilities have also given us a way to keep security vulnerabilities out. And from the observability and monitoring perspective as well: a service mesh emits metrics that can be captured and monitored by various monitoring tools like Prometheus and so on. Service mesh platforms are designed to provide extensive observability and monitoring capabilities. For example, the Envoy proxies in Istio, or the Linkerd proxies, generate a wide range of metrics that you can use.
And based on those metrics, you can get information about request rates, latencies, error rates, success rates, and other relevant data. These solutions make sure to provide this information via an HTTP endpoint or gRPC interface on the sidecars, and you can set alerts and visualizations on top of them. From the distributed security policies perspective, a service mesh provides centralized policy management and enforcement across all your microservices. It allows you to define security policies, such as access control or authentication rules, all in one place, like a single pane of glass. And it provides you a framework for real-time policy updates and changes without the risk of misconfiguration, overall improving the security posture of your microservices environment. From a service identity and authentication perspective, with a service mesh you can implement mechanisms like mTLS and JWT tokens that ensure only authenticated services can communicate with each other, and thus it strengthens the overall security of microservices by preventing unauthorized and unneeded access. From the runtime security perspective as well, it helps to detect anomalies by continuously observing the network traffic. From the API security point of view, it can help by validating inputs on incoming requests and help us detect attacks like, as I mentioned, SQL injection. And from the security upgrade and patching perspective: with the sidecar proxy approach, one can upgrade the proxies independently of the microservices, so even when you are upgrading proxies, you do not have to recompile or redeploy your entire application. So that's how a service mesh helps you from the security perspective. And I know we all hate it, but it's something that promotes fairness, safety, and ethical behavior in our lives. Yes, I am talking about the term compliance.
So let's see how a service mesh helps with compliance across different industry regulations. Thanks to the enforcement of fine-grained policies, a service mesh helps you meet requirements like those of GDPR to ensure data protection and privacy. The access control mechanisms provided by a service mesh help you comply with standards like HIPAA or PCI DSS: the service mesh makes sure that only authorized entities can interact with sensitive data. Apart from that, it provides centralized auditing and logging capabilities to accurately gather reports and bring traceability. And when it comes to data protection regulations like GDPR and HIPAA, the data encryption with the mTLS feature I mentioned earlier is crucial, as it maintains the confidentiality of data during communication between different services. To continue on the same, to comply with standards like ISO 27001, a service mesh offers governance and traceability features, thus allowing organizations to monitor and control all the service interactions. And by identifying services and controlling traffic accordingly, it helps to ensure that data flows align with regulations like GDPR's data residency restrictions. Lastly, a service mesh supports RBAC, which aids compliance with security standards like NIST SP 800-53 by allowing organizations to define and enforce role-based access control policies. So that's how the service mesh helps from the compliance angle as well. As I mentioned earlier, the way a service mesh helps with access control is through fine-grained access policies. Service mesh platforms like Istio and Linkerd allow organizations to define precise rules, policies you could say, using which you can control and provide secure access only to those who need those services. The RBAC feature is also powerful, for example in Istio in particular: it enables you to define roles and permissions for services.
It is kind of similar to traditional access control mechanisms, but tailored to microservices. For example, for the payment service, you can define a specific role. And the Envoy proxy kind of feature helps you with dynamic authorization as well: it basically allows decisions to be made in real time based on various factors like user identity, request context, or even the state of the system. Apart from that, identity and trust verification are important factors too, and that's where the mTLS kind of feature helps you. Okay, so that's how a service mesh enhances access control. And how does a service mesh help from the observability and monitoring perspective? As I mentioned earlier, it helps with the emission of metrics that you can collect for monitoring, and you can readily integrate it with any monitoring platform of your choice. Platforms like Istio and Linkerd already offer centralized aggregation of all the logs from your microservices, and that helps when it comes to troubleshooting and meeting compliance, as it is needed for different audit requirements as well. Distributed applications also require us to understand end-to-end flows: from the moment someone tries to access our application, how the request is flowing from one service to another. A service mesh, out of the box, provides capabilities to integrate with different distributed tracing tools like Jaeger or Zipkin. That allows you to see how each microservice is handling the overall request, and it can help you identify bottlenecks as well. And of course, you can integrate these monitoring platforms with tools like Grafana and so on, and that helps you with the visualization of the current state of all your microservices as well.
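To make the payment-service example concrete, an Istio AuthorizationPolicy for such a fine-grained rule might look like the sketch below. Every name here (the namespaces, labels, the order-service service account, and the /charge path) is hypothetical, invented only for illustration:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders      # hypothetical policy name
  namespace: payments              # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: payment-service         # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        # only the order service's mesh identity may call the payment service
        principals: ["cluster.local/ns/orders/sa/order-service"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/charge"]         # hypothetical endpoint
```

Because the `principals` field matches the caller's mTLS certificate identity, a rule like this only makes sense once mutual TLS is in place between the services.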
So for example, here is a very small Grafana dashboard template that can give you an idea of the insights that service mesh capabilities are able to fetch for you. The zero trust security model is a kind of paradigm shift in network security that emphasizes a never trust, always verify approach. With a service mesh, all communication between the microservices is subject to strict verification and validation, regardless of whether it occurs within the same cluster, across clusters, or across different cloud providers. Every interaction is evaluated against the defined policies and security rules, ensuring that only authenticated and authorized services can communicate with each other. This approach significantly reduces the attack surface and limits lateral movement within the microservice ecosystem, and thus it enhances security while minimizing the potential impact of a breach. Okay, so I'm sure when we want to adopt a new tool, we always want to understand some of the best practices that can help us adopt that technology well. These are some of them that I have listed. Needless to say, always plan the deployment of the service mesh ahead, taking into account the size of your application, its complexity, and the available resources. Secondly, evaluate different solutions: there are multiple service meshes available in the market. Start by understanding your specific use case and requirements; at the end of the day, every service mesh platform has its own features and, at the same time, trade-offs, so choose the one that aligns with your needs specifically. Third, I would say, always start small. Whenever I was working on our clients' requirements, we always started with one microservice, moved it onto the service mesh, gained confidence, and then moved the other services onto the service mesh as well.
And the capabilities like circuit breaking and retry policies of a service mesh help you handle failures gracefully, so do make sure to use them, as they help prevent cascading failures and improve service reliability. Needless to say, always focus on observability and monitoring, and always follow the shift-left approach when it comes to security. The traffic management feature of a service mesh is very powerful: make use of it for traffic splitting as well as load balancing, and for canary deployments too, as that can help you do zero-downtime kinds of deployments. From the namespace and resource isolation perspective as well, do configure specific compute for every namespace that your development teams are going to use. This is crucial from the documentation and training perspective: always make sure that your development as well as operations teams are familiar with handling the service mesh, and only then decide to move any microservice onto it. From the disaster recovery and high availability perspective, always plan for these kinds of scenarios and make sure to implement redundancy wherever necessary; there are tools like Velero that can help you with those aspects. From the performance testing point of view, make sure you do end-to-end performance testing for the microservices being moved onto the service mesh, and try to ensure that no significant latency or bottleneck has been introduced into your microservice architecture. If you are using a commercial service mesh product, be aware of potential vendor lock-in and consider open source alternatives if portability is a concern. Backup and restore strategy as well: do make sure to have it in place already; it will save you time when it comes to the typical disaster scenarios.
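On the namespace and resource isolation point, a plain Kubernetes ResourceQuota per team namespace is one way to pin down the compute each team can consume; the names and limits below are illustrative only:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # hypothetical name
  namespace: team-a         # hypothetical team namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"         # hard ceiling across all pods in the namespace
    limits.memory: 16Gi
    pods: "20"
```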
And for service mesh policies as well, do make sure to enforce strict policies when it comes to traffic routing, security, or access control. So, you see, service mesh deployment can be complex, but by following these best practices, you will certainly be able to successfully implement and manage a service mesh for your microservice architecture, and it will help improve the reliability, security, and observability of your services. Okay. Well, now it's demo time. I will give you a small demo of service mesh capabilities around authentication and authorization, using Istio as the service mesh here. These capabilities are available in all the service mesh tools, but in the end I had to choose one, so I decided to use Istio itself. We will have one sample app and see how mTLS works for secure internal communication among microservices, as well as how the mesh helps with JWT tokens to authenticate end users. To save time, what I have done is record both parts of the demo; we will play them one by one and I will explain. But before that, let's understand it with some visual diagrams. So, for example, we'll have two namespaces called foo and bar, each with two microservices: one is httpbin and another is sleep, and both will run the Envoy proxy, meaning they are both onboarded onto the service mesh. And we'll use another instance of the httpbin and sleep microservices running without the sidecar, that means without the service mesh, in a namespace called legacy. What we'll do is try to send traffic from each sleep and httpbin pod of each namespace to the pods in the other namespaces. What we will ideally see is that, by default, Istio tracks the server workloads which are migrated to Istio proxies, and it configures the client proxies to send mutual TLS traffic to those workloads. For the non-service-mesh services, it uses plain-text traffic.
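The namespace layout just described, foo and bar with sidecar injection and legacy without, can be declared along these lines. The `istio-injection: enabled` label is Istio's standard auto-injection switch; the rest simply mirrors the transcript:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: foo
  labels:
    istio-injection: enabled   # pods here get the Envoy sidecar injected
---
apiVersion: v1
kind: Namespace
metadata:
  name: bar
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: legacy                 # no label: pods run without a sidecar
```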
So basically, mTLS will be used for communication between services from the foo and bar namespaces, but when it comes to communicating with services from the legacy namespace, plain text will be used. Let me play the video. As you can see here, we have three namespaces: foo, bar, and legacy. foo and bar are enabled with Istio sidecar proxies, whereas legacy is without any sidecar proxies. That's the reason, if you look here, the legacy pods have only one container, which is the microservice itself, while the pods from the bar and foo namespaces have two containers, which indicates that the sidecar proxy has been injected. We can verify the same by describing a pod from either the foo or bar namespace, and that's what we are going to do now: describe the httpbin pod from the bar namespace. You can see here that the Istio sidecar proxy has been injected into it, which makes sure that any communication happening with this microservice goes via the sidecar only. And now, from each microservice of each namespace, using a simple curl request, we try to connect to the other services. Right now, no Istio policies are enabled; that's the reason it will allow communication from the services which have the sidecar enabled to the services which do not have the sidecar, or service mesh, enabled. Basically, it will use mTLS between those which have the sidecar, but at the same time, for the legacy ones, it will use plain-text communication. And how can we be sure of that? If you look now: whenever you enable the Istio sidecar proxy, it adds the X-Forwarded-Client-Cert header to the request that originally comes from our microservice. So here, from the sleep pod in our foo namespace, we are trying to send a curl request to the httpbin pod in the foo namespace.
And ideally, since both these workloads are now enabled with the service mesh, the request should carry the X-Forwarded-Client-Cert header, which validates that the communication is happening over mTLS. To repeat: the X-Forwarded-Client-Cert header gets automatically added when service-to-service communication happens using mutual TLS. So this proves to us that it is using mTLS for secure communication. Now let's try to send a similar request, but this time the curl request goes to the httpbin microservice in the legacy namespace. Since the request is now going from a sidecar-enabled microservice to a non-service-mesh, non-sidecar-enabled one, this header should not be present, which means the traffic is flowing in plain-text format, which is risky in a way. And yes, as you can see, it was not able to find that header. Now, since we want to make sure that all our microservices communicate with each other only in a secure way, we will enable a PeerAuthentication policy in Istio. What this policy does is set the mTLS mode for inter-service communication to strict: it makes sure that unless mTLS is enabled for the microservices, any traffic going to or from the service-mesh-enabled microservices will be rejected. Okay. So now we have created this policy, and we will again run the same curl requests. What should happen this time is that any communication with the legacy-namespace-based microservices should be rejected. So our Istio-enabled services will not be able to communicate with the legacy-namespace-based services, because that communication was a plain-text one, which is risky; as I mentioned, it can lead to eavesdropping or man-in-the-middle attacks.
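The PeerAuthentication policy being described, applied in STRICT mode, looks roughly like this; placing it in the istio-system root namespace makes it mesh-wide, though the demo may instead scope it to a single namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace: the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # plain-text traffic to meshed workloads is rejected
```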
So if you see here, for communication from the foo namespace to the bar namespace, it is generating a 200 response. But for the sleep microservice from the legacy namespace, when it tries to connect to our httpbin service in the foo namespace, which has the Istio sidecar proxy enabled, it generated exit code 56, which means it was not able to receive any data. What happened here is that the PeerAuthentication policy observed that someone was trying to send a plain-text request to the pods inside its namespace and rejected it, and that's why we get exit code 56. The same will happen even if your service tries to communicate with the other microservice in the bar namespace: it will also exit with error code 56. So I hope you are able to understand how the service mesh, with this mTLS feature, helps you avoid any unsecured connection to your microservices and ensures that all this intercommunication inside the cluster is secured as well. Okay, and now we will move to the next demo, which is about how an end user will be authenticated when they try to connect to your microservices covered by the Istio service mesh. For end-user authentication, as you can see here, what we will demo is this: if someone wants to connect to the httpbin microservice, that external end-user request should connect on port 80 and should get routed to the httpbin service via an Istio gateway; here we have given it the name httpbin-gateway. We have also configured a virtual service, so the request will go from the httpbin gateway to the virtual service. If you describe the gateway here, you will see that it says any end-user request should come in on port 80, and if it is coming with the HTTP protocol, route that request through the Istio ingress gateway.
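The gateway and virtual service in this part of the demo follow the standard Istio httpbin ingress sample; here is a sketch, with the hosts and the target port taken from that sample rather than confirmed by the recording:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway      # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80               # accept end-user traffic on port 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway            # only traffic entering via this gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000         # httpbin's service port in the sample
```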
That's the purpose of this gateway resource. Then we have also created this virtual service. What the virtual service is doing is saying: when a request comes in at the Istio gateway, that request should come to me, the virtual service, and I will route it to port 8000 of the httpbin service. That's what it denotes here. Okay, and we have also created a request authentication mechanism here: as you can see, a RequestAuthentication policy called jwt-example. Just as we had the PeerAuthentication policy in Istio to secure service-to-service communication, if you want end-user traffic coming into your service-mesh-based microservices, you need to define a resource called RequestAuthentication. This RequestAuthentication policy in the Istio service mesh helps you validate that the incoming request carries a JWT token issued by a particular predefined issuer; in this case we are saying testing@secure.istio.io, and it also makes sure to check that the token is validated against the public key, which is something Istio is already aware of. But here is the catch: this RequestAuthentication policy can only detect whether a request carries a valid JWT token if some value for the token is supplied. In case no value, no token at all, is passed, unfortunately it still lets the request through. For that very purpose, we have created another policy, an AuthorizationPolicy, as you can see here on screen. What this policy does is tell Istio that if any end-user request comes without any JWT token, you need to deny that request. So in the demo, we again send a curl request, and if you see, we are not passing any header, any valid JWT token, so it gives a 403.
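The two policies described here closely match the Istio request-authentication sample; a sketch follows, where the issuer and the jwksUri (including the release branch in the URL) are assumptions carried over from that sample:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway    # validate JWTs at the ingress gateway
  jwtRules:
  - issuer: "testing@secure.istio.io"
    # public keys for verifying the token signature (sample URL, version assumed)
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.20/security/tools/jwt/samples/jwks.json"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        # matches any request carrying no valid request principal,
        # i.e. requests that present no JWT at all
        notRequestPrincipals: ["*"]
```

RequestAuthentication alone rejects invalid tokens (401) but lets token-less requests through; the DENY policy closes that gap, which is why a request with no token gets a 403.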
That's thanks to the AuthorizationPolicy we defined. And now, thanks to the RequestAuthentication policy we defined, let's say I randomly pass some JWT token like deadbeef or something. What should happen is that it rejects that request, and that's why you can see it has given a 401 error. For this demo, what I have done is pick up the token which is available in the Istio samples; I have stored it inside a token variable, and that token, which you see here, is a valid one. So any end-user request coming with this token should be allowed by Istio, and that's what you will see now. Here I run the same curl request again, but this time, if you see, I am passing the JWT token which has been rightly issued, and as you can see, it gives a 200 response. So that's how the Istio service mesh helps you with end-user authentication and authorization. Again, to repeat, Istio was just one example; these features are pretty common in any service mesh that you use, and they help you from the security and compliance perspective as well. That's pretty much it from this webinar. I hope you found it insightful and took away something new. You can reach out to me anytime on LinkedIn or Twitter for any queries around service mesh. Thank you so much again for watching. Thanks.