All right, we're going to go ahead and get started. Thank you everyone for joining us today. Welcome to today's webinar, Service Mess to Service Mesh. I'm Taylor Wagner, the operations analyst here at CNCF, and I will be hosting slash moderating today. We'd like to welcome our presenters, Kavya Pearlman, cybersecurity strategist at Wallarm, and Rob Richardson, technical evangelist at MemSQL. Before we get started, I'd like to go over a few housekeeping items. During the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your Zoom screen, so please feel free to drop your questions in there rather than the normal chat window, and we'll get to as many as we can during the Q&A at the end. A reminder that this is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct, so please don't add anything to the chat or to the Q&A that would be in violation of that code. Basically, please be respectful of all of your fellow participants and the presenters. And a reminder that the recording and slides will be posted later today on the CNCF webinar page, cncf.io/webinars. With that, I'd like to hand it over to Kavya and Rob to kick off today's presentation.

Thank you, Taylor. What if there were no traffic rules and barely any traffic lights? Just one traffic cop, there basically in name only, and every automobile in town had someplace to go, with no set of standard rules on how to get there. What ends up happening is that these cars, people, police and traffic signals create a messy situation. In fact, imagine the traffic cop giving people tickets and telling people where they should go; none of this is really helping the situation. Now, think about another scenario, something more futuristic, where there is not just a set of rules but real-time, continuous intelligence sharing, full visibility, full connectivity, kind of like fully autonomous vehicles. Instead of handing out tickets, we can now route our traffic and communicate intelligently and efficiently, so that we maximize all the knowledge each vehicle and the cop has. This is the analogy we would like to use today, and we would like you to keep it in mind as we go through the webinar together.

Presenting with me today is Rob Richardson, my very good friend. Rob began as a software developer in 2000, at a time when we needed to deploy our own websites, so he got good at server administration for Windows and Linux. As our community got better at source control, Rob learned and taught CVS, SVN and Git. In time, the community got better at unit testing software, so he learned unit testing and dependency injection. We moved toward continuous integration principles, so Rob learned CruiseControl, TeamCity and eventually Azure DevOps, leading workshops and courses for individuals and companies learning system automation. Now we are in a containerized world, so Rob has been learning and teaching Docker and Kubernetes since 2016. Rob is first and foremost a developer and a teacher, so he has grown with us through the software mess. Now Rob is a tech evangelist for MemSQL, where he gets to share his passion for software development and application architecture with the world as an international conference speaker. But even still, Rob can be seen tinkering with code and teaching the few at events like AZ GiveCamp and the Southeast Valley .NET User Group. This is funny.
One of the things that he is most proud of, and because I've done this intro a few times for Rob, it's always funny when I read this, is the comment that he posted to the .NET Rocks podcast. They read it on the air and sent him a mug. Woo-hoo for Rob. I met Rob Richardson as he was teaching security at a DevSecOps conference in Phoenix. He was doing a Kubernetes security talk, and I could totally see his passion for teaching various technologies, including cloud native security. That's what brought him to MemSQL, where MemSQL recently launched their cloud native managed database, MemSQL Helios. Rob and I have written blogs together and given a few talks together, including a cloud native and fintech forum at Wall Street, New York City, talking about Kubernetes security. So Rob, thank you again so much for this collaboration. I am really excited, and I'm glad that we continue this journey of teaching and learning together. And as always, I look forward to learning with you today.

Most definitely. Thanks for the kind words, Kavya. I'm really excited to join with you, my good friend. Kavya is a cybersecurity strategist at Wallarm, an application security company that protects APIs and cloud native technologies. Just last year at KubeCon, Kavya was part of a big launch where Wallarm extended their capabilities to support service mesh architecture and Envoy proxy. Here's my favorite part as we dig into the history and story of Kavya: she's amazing. Kavya was a third-party security advisor for Facebook during the last US presidential election, so she was able to review technologies of various sizes and innovative things like cloud native and virtual and augmented reality, and see how they could impact a platform as big as Facebook, with two billion users. Due to her work and contribution to the security industry, she has won several awards. She's also known as the Cyber Guardian. She's the founder and CEO of the XR Safety Initiative, a nonprofit organization dedicated to helping build safe immersive environments across virtual reality, augmented reality and mixed reality. I'm really excited to see the skills and talents of Kavya, and really thrilled that I get to share the stage with you, my good friend.

Oh, thank you, Rob. So let's dig into what we're gonna talk about today. We're witnessing the rise of microservices and cloud native technologies. However, one big challenge of microservice architecture is the overhead of managing network communication between services. Many companies are successfully using tools like Kubernetes for deployment, but they still face runtime challenges with routing, monitoring and security. Having a mess of tens, hundreds or even thousands of services communicating in production is a job only for brave technical hearts. This is where service mesh comes in to clean up the mess. In the next 40 minutes, we'll discuss the service mesh. We'll discuss our history of getting from monoliths to microservices. We'll discuss the challenges we had with API gateways and the market that created for a service mesh. We'll dive deep into the principles and practices of service mesh, looking at both Istio and Linkerd as examples. We'll do a demo of both Linkerd and Istio, and then we'll summarize with service mesh best practices and cases where you may choose to use it or may choose not to use it. So what is a service mesh?
Rob and I have spent a great deal of time reviewing various definitions of service mesh, and we arrived at one of the simplest as our favorite: a service mesh manages the network traffic between services in a graceful and scalable way. A service mesh is the answer to the question, how do I observe, control and secure communication between microservices? So let's look deeper. A service mesh intercepts traffic going into and out of a container, whether between containers or from outside of the cluster's resources. Because it intercepts all cluster network traffic, it can monitor and validate connections, mapping out the communications between services. It can also understand service health, intercept failures or inject chaos. The beauty of intercepting all cluster traffic is that a service mesh can do really interesting things to validate and route traffic. In general, we choose a service mesh when we are looking to solve one of these problems: observe traffic in the cluster, to discover, map or log; control traffic in the cluster, with access policies or splitting traffic between versions; and finally, secure traffic between network resources, such as HTTPS between containers.

Now let's take a look at the difference between monolithic and microservice architectures. Monolithic architecture is a traditional model for designing and developing software. Monolithic applications consist of a single self-contained unit in which all the code exists in a single code base and in which modules are interconnected. At deployment time, the entire code base is deployed, and scaling is achieved by adding additional nodes. However, much like the evolution of automobiles, the more complex the system became, the more challenging it was to maintain self-contained solutions. The problem was that as the code base and the application grew in functionality and complexity, the more challenging it became to iterate on. Each component of the monolith had to be tuned to work perfectly with the other components, or else the entire application could fail. Right after doing the Facebook third-party security work, I joined Linden Lab, the virtual reality firm behind the oldest virtual world, Second Life, as head of security. With all those legacy systems, I really got to witness this entire messy situation firsthand.

Moving on to microservices, and this is where microservices are really great. A microservice architecture involves building modules that each address a specific task or business objective. Microservices were created in order to overcome the issues and constraints of monolithic applications. Monolithic applications have a tendency to grow over time in size, so as the applications become larger and larger, the tight coupling between components results in slower and more challenging deployments. Microservices solve these challenges of the monolithic system by being as modular as possible. In the simplest form, they help build an application as a suite of small services, each running in its own process and independently deployable. These services may be written in different programming languages and may use different data storage techniques. In our old monolithic architecture, we dealt almost exclusively with north-south traffic.
But with microservices, we must increasingly deal with traffic inside the cluster. With monoliths, different components communicated with each other using function calls within the application. Edge gateways abstracted away common traffic-orchestration functions at the edge, such as authentication, logging and rate limiting, but communication conducted within the confines of the monolith did not require any of those activities. While east-west traffic presents a greater challenge, because we replace our function calls with communication over the network, it also allows us to use whatever transport method we want as we replace function invocation with APIs over the network. This means that the different services within our architecture don't have to know about each other's internals. If our API is consumable, then we have flexibility with everything else. This can provide a big advantage. For instance, if we're a big organization and we acquire another team, we don't have to worry about the coding language they're using or how they do things. So with the increased east-west traffic that comes with microservices, we now need the ability to properly orchestrate it, which is the same issue we faced with our monolith at the edge: we needed to effectively route the traffic. And that's where we first learned about API gateways, which Rob is going to tell us more about. Over to Rob.

Most definitely. I love this analogy of north-south and east-west, and thank you for teaching that to us. North-south is traffic into and out of the cluster, and east-west is between containers. And that's really where we start to hit this wall with API gateways. An API gateway is great at standing in front of our cluster and being that initial gate, that initial barrier. Assuming that our user interface is in the browser or a thick client, the user interface will connect to the API gateway, and the API gateway will fan out that traffic to each microservice. But what we see down here at the bottom is that some of our microservices have misbehaved. They're not doing that thing about how a microservice should own its own data source, so they're going directly to other microservices' data sources. And an API gateway, being that boundary around our cluster, really can't help us here. It can only really say, well, you know, the traffic was valid coming in. And so when we reached the limit of API gateways, that's when we started to dig into service meshes. We want a way to be able to control the traffic not only into our cluster or out of our cluster, that north-south content, but also between our microservices, east-west, as we go through our cluster.

So here's an example of a service mesh. In our service mesh scenario, service A wants to connect to service B. Now, instead of service A connecting directly to service B as it would without a service mesh, service A is gonna reach out to that sidecar proxy that's defined in that same pod. So as service A got deployed, the sidecar proxy got included in there. Service A reaches out to the sidecar proxy, and the sidecar proxy reaches out to the service mesh's control plane: is service A allowed to connect to service B? What's the URL for service B? Those details come back to service A's sidecar proxy, and the proxy then connects to service B's sidecar proxy. And we can do mutual TLS if we choose to do so. Service B's sidecar proxy then reaches out to the control plane as well and says, hey, is service A allowed to connect to me?
The control plane confirms that, and service B's proxy forwards that traffic on to service B. The cool part is that service A and service B are now able to communicate, but all of the details about am-I-authorized-to-connect-to-that-other-service, all of the details about mutual TLS, all of those are handled by these sidecar proxies and the service mesh control plane. We can draw similar diagrams if service A wanted to talk to an external service, or if ingress traffic was flowing into service B. All of that detail is managed by the service mesh control plane, and all of those sidecar proxies deployed with each service allow us to get those insights, allow us to collect telemetry and logging, and really get a feel for how the traffic moves through our cluster.

So we talked about observe, control and secure. As we start to dig into the features of service mesh, we get a really good feel for observe, control and secure. On the observe side, because we have these sidecar proxies proxying all the traffic in our cluster, we can start to monitor that network traffic. We can see failures, we can log failures, we can log uptime. Towards control, we have access policies: is service A allowed to connect to service B? We can create additional policies, like only things within my namespace or only things with this RBAC token are allowed to connect to this cluster or this container. Towards secure, we now have mutual TLS, and it's mutual TLS that didn't require code changes in our applications to secure the content. We don't need to worry about trust chains. We don't need to worry about certificate revocation. All of that is handled by the service mesh.

Digging a bit deeper, now that we're proxying traffic between all services, we can create some of these higher-level services. We can do things like monitoring service health and logging when systems are up and down. We can dig into more complex logging, grabbing all of the response codes and validating service health, detailing traffic between services and keeping track of how a request flows across the system. Because we're proxying all traffic between all containers, one of the really cool things is we can ask the service mesh for a network topology diagram, a network architecture diagram. Now the beauty here: it's not what the developer thought would happen, but what's actually happening in the cluster based on actual traffic patterns.

Digging further into the features, because we're routing all of the traffic, we can do some really intelligent things with that traffic. For example, if a service is failing, we can flip the circuit breaker, and suddenly no traffic is flowing to that service while that service heals. When the service comes back online, the service mesh can notice and start routing traffic to it. In the meantime, it's just going to intelligently fail all of the requests to that service so that clients aren't waiting for that content. Similarly, we can do A/B testing, where a portion of the traffic goes to the new channel, the new version, while we validate that it behaves as expected. Once that system is contained and healthy, we can start to route more traffic, eventually strangling the content from the old version. Similarly, we could create a beta channel or a canary release where we can say, here's that newest feature for those people who are able to see it. So we can grab details like HTTP headers or authentication tokens and route content to the new versions while keeping the majority of the content at the original versions.
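As a concrete example of that secure piece, enabling strict mutual TLS for a whole namespace is typically one small resource. Here is a minimal sketch, assuming Istio 1.5 or later's PeerAuthentication API and a hypothetical namespace name; the application containers themselves are unchanged:

```yaml
# A sketch, assuming Istio 1.5+ (security.istio.io/v1beta1).
# "my-apps" is a hypothetical namespace. The sidecar proxies handle
# certificates, rotation and trust chains; no app code changes.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-apps
spec:
  mtls:
    mode: STRICT   # require mutual TLS for all workloads in the namespace
```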
All of these, the circuit breakers, A/B tests and canary releases, are possible with these advanced routing rules because we're proxying all traffic across all services. Digging deeper, we now have dashboards over the top of our service mesh where we can take a look at, on the left, the service health and history of each service. On the right, there's that network topology diagram where we can ask the services exactly what they're doing and show actual traffic routing across our service mesh. Because we have these proxies validating all of the rules, we won't end up in this scenario where a microservice accidentally calls into a different microservice's data store. We can create those rules to ensure that each microservice owns its own data store, and only those authorized to connect to each container are allowed to do so.

So we're gonna look at two examples of service meshes today: Linkerd and Istio. Before we do that, let's look at a high level at what Linkerd is about. Now we could do a bake-off comparing speed or features, but that's gonna be transient and evolve over time. Instead, let's look at the methodology. The methodology of Linkerd is that they focus on a simple setup. They're really proud of their install procedure and just a core piece of functionality that allows you to get going. If you need advanced scenarios, then they invite you to grab third-party components and strap those on. All of those core pieces they build in-house, and so they're really great at contributing to the Go and Rust communities as they build out the features necessary to create this content. Similarly with Istio, Istio's methodology is to create a suite of features that you can toggle on and off. So by installing their software, you have all of the pieces that you need to go. Istio is also really good at combining the best from the industry, so they include a whole lot of third-party products. Linkerd in version two uses its own lightweight proxy, built in-house in Rust; Istio uses an Envoy proxy, metrics from Grafana, a Prometheus dashboard, a Jaeger tracing dashboard, and we'll see other dashboards as well. We can see on the right, because we have this methodology of proxying all traffic, on the right is this virtual service that allows 75% of the traffic to version one and 25% of the traffic to version two for this service. That's possible in Istio, given these advanced routing rules.

So let's dig into a demo. We're gonna take a look at Istio and Linkerd. I don't have Linkerd running yet, I just have an empty cluster, but let's start out doing exactly that. Let's go to the Linkerd getting-started page. I am gonna have to break out of the slides. The Linkerd getting-started page is really elegant and walks us through all of the processes for getting it installed. I'm gonna do exactly that. I'm gonna do this linkerd check, and it can go validate that my cluster meets the necessary recommendations for Linkerd. Once I've got that in place, let's go download Linkerd. So I did install the Linkerd command line, I did put that in my path, and so I can just do that linkerd install. linkerd check is great. linkerd check will now go see how it did. It's gonna watch those pods, validate that they start up correctly and keep track of all of the details in Linkerd to make sure that everything is running correctly. I love that it enumerates all of the pieces of my cluster and validates that it's working correctly. So this just takes a minute to get going. I'll scroll up. Nope, I won't scroll up.
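For those following along, that getting-started flow condenses to a handful of commands. Here is a sketch of the Linkerd 2.x flow; the exact script URL, flags and output vary by version:

```bash
# Sketch of the Linkerd 2.x getting-started flow; details vary by version.
curl -sL https://run.linkerd.io/install | sh   # install the linkerd CLI
export PATH=$PATH:$HOME/.linkerd2/bin          # put the CLI on the PATH
linkerd check --pre                            # validate the cluster meets requirements
linkerd install | kubectl apply -f -           # render control-plane manifests and apply
linkerd check                                  # wait for the control plane to come up
linkerd dashboard                              # open the web dashboard
linkerd stat deployments -n linkerd            # the same golden metrics from the CLI
```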
That linkerd install, I just piped off to kubectl apply, and ultimately that gets that content into place. So great, now my Linkerd install is ready to go. Let's go take a look at it: linkerd dashboard. So linkerd dashboard will start this dashboard, and I'll be able to see all of the pieces of my cluster. I'm gonna switch over to the linkerd namespace, and I can see all of the containers. I can see the various details with each one. If I switch over to deployments or other content here, I can then flip over to the Grafana dashboard, where I can see the actual metrics for this service doing all the things that it needs to do. Now this is really great. I get to see the service health, I get to see Grafana dashboards, I get to see all of the content involved in my Linkerd dashboard. So I'm gonna break out of that and let's switch over to Istio. Oh, one more thing. That dashboard was really good for harvesting all the statistics, but in time I may want to flip over to doing that from a command-line interface, where I can harvest this and push that content elsewhere. I do have a Prometheus dashboard where I can grab that content using Prometheus, a Prometheus sink rather, but I can also grab these metrics from the command line, where I could use that to pipe to other content as well.

So switching over to Istio, let's flip over to the Istio cluster. Where Linkerd focused on that really fast setup experience, Istio focuses on having pieces that allow us to turn things on and off. So for example, here are the profiles that allow us to turn on and off various features. If I want to default to all things on, which I do in this case, I have the demo profile running, and then I have all of these features enabled. And if I don't find a profile that exactly matches what I'm looking for, I can definitely turn features on and off as I go. Grafana, Istio tracing, Kiali, Prometheus, all these dashboards we can enable or disable by just turning them on and off inside Istio. So the Istio docs are really great at getting us started. I already have the Istio setup installed, and I have this demo app. Now this demo app is really cool at highlighting those advanced routing rules. Each of these boxes is a spot where it has a proxy involved. So I have an ingress proxy that will hand me off to the product page. The product page will call into a details service to get the product details, and it'll also call into a reviews service. Now the reviews service goes and gets the ratings from this Node app, and then it'll show different stars or no stars depending on the version. In version one, it shows no stars. In version two, it'll show black stars. And in version three, it'll show red stars.

So I've got that application up right here. I can push refresh, and I see that now I have no stars; those stars are gone in version one. Well, the interesting thing here in Istio is that I've got this virtual service that routes all traffic to version one. Now I could choose instead to route traffic differently. Let's start up version two, and I wanna start by just putting 20% of the traffic towards version two. The other 80% will stay on version one. So I'm gonna go grab that YAML and set that in place, kubectl apply that YAML file, and now I've got that traffic ready to go. Flipping back over to the browser, 20% of the time I will get black stars and 80% of the time I'll get no stars. And it looks like I'm hitting the 80% that whole time. That's really cool. So let's flip over to version two.
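The YAML being grabbed here follows Istio's bookinfo sample. A sketch of the 80-20 split, along with the destination rule that defines the version subsets purely by pod labels (names assume the sample app):

```yaml
# A sketch following Istio's bookinfo sample: 80% of traffic to
# reviews v1, 20% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 80   # most requests stay on version one
    - destination:
        host: reviews
        subset: v2
      weight: 20   # a slice of traffic tries version two
---
# The subsets above are defined by pod labels alone, so no app changes.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

The same mechanism, matching on a request header instead of weights, drives the user-based rule shown in a moment.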
We've got everything ready to go and, oh, I see I applied the wrong YAML. Let's go back and apply that 80-20 rule. With that 80-20 rule, now I'll see 20% of the time I've got those black stars. So that looks good. I've got version two ready to go. Let's set version two completely in place. Now I'll always get version two; I'll always get those black stars. Well, in time, let's start looking at version three. I wanna do, I don't know, 50-50 traffic between two and three. So let's go grab that YAML file and set that in place. And now I can see that about half the time I'll get the red stars and about half the time I'll get the black stars. When I'm comfortable, I can flip over completely to version three, and now I'll only see the red stars. In a similar way to upgrading between versions, we could also route based on other conditions, like in this case, the end user has to be Jason. If Jason is logged in, then he'll get the version two system; otherwise, everyone will get the version three system. So we can do those advanced routing rules. Because we're proxying all the traffic between all the things, we can do really interesting things to say, for example, some of the traffic goes here and some of the traffic goes there. I would love to be able to dig into all kinds of interesting features with Istio and Linkerd, but sadly, that's as far as we can go on those demos.

That was really, really cool. Awesome, Rob. I love that demo, and every time I've seen it, I've learned more and more from you. And one thing for sure: thank you, demo gods, you did not get upset with us. That's always a fear. This really, really was cool. So yeah, thank you for that demo. And let's see, I wanna do a quick recap of everything that Rob talked about. A service mesh proxies all the traffic through the cluster. We now know that at its most basic level, because it stands between all the traffic, it can monitor traffic, learn from it, infer service health and log failures, as you saw. Now, if that's just the crawl, let's take a look at the next layer, which is the walk. As we saw in the demo, the walk scenario is advanced routing: because it proxies all the traffic, we can add additional service abstraction, such as routing traffic between two versions of the service or stopping traffic with a circuit breaker. On to the next layer, which is run: because the service mesh proxies all the traffic, we can get the actual service topology, who calls what, basically. And as Rob just demoed, this isn't the developer's hope of what will happen; this is the actual traffic through the cluster. A service mesh is able to do all these things because it observes, controls and secures all the traffic, both north-south and east-west. So there we have it: a traffic proxy plus a control plane. That's literally what a service mesh is. In fact, in the wise words of our good friend Zack Butcher, co-author of Istio: Up and Running, if it doesn't have a control plane, it ain't a service mesh. With that said, service mesh is not the preferred solution for all scenarios, and for that, I'm gonna hand this back over to Rob, who will help us dive into some of those complexities.

Most definitely, thanks, Kavya. I agree, a service mesh is a great thing that allows us to observe, control and secure the traffic. And with that, there are some downsides, some costs associated with it. On the left, we have our Kubernetes cluster.
We have the control plane with the API server, the controller manager, the scheduler. We have the nodes that have the kubelet, cAdvisor and kube-proxy. And then we have the work that we need to do, the pods that contain our containers. On the right, we have all of the details of our service mesh. Each of those services has another proxy, and we have the entire control plane for the service mesh. What we see is that we pretty much have double the container count in our cluster. We have the control plane for Kubernetes, and we have the control plane for our service mesh. We have all of our services doing the work, and we have all of the proxies that allow traffic between things. If we're gonna run a service mesh, we need to be comfortable that we're probably gonna double the number of containers in our cluster, and we'll probably significantly increase the computation in our cluster as well. We'll probably not hit double on compute, because the proxies are a lot lighter weight than, say, the Java Tomcat services that we have running in our cluster. But we're also doing TLS that we weren't doing before, mutual TLS between each service. So it's not unexpected for us to think of maybe doubling our container count and maybe 1.6 times or so the amount of compute in our cluster. This is a non-trivial cost. This creates additional spend in building out our cluster. We need a cluster that is roughly twice as big to be able to handle a Kubernetes cluster and a service mesh. That's not unexpected. If we're after the benefits of securing, controlling and observing our traffic, this is perfect. But if we're just reaching for a service mesh because we have a Kubernetes cluster and we just wanna throw a service mesh in to see what happens, we may be disappointed. A service mesh isn't the perfect solution for everything. If we're comfortable with that additional compute cost and we really need those features, then a service mesh can be our perfect solution.

And with that, I wanna talk about some of the benefits of service mesh. One of the key benefits is that we are able to observe all the traffic moving through the cluster, creating transparency. Naturally, we get to a more comfortable place where we can troubleshoot, because when all the request and response activity happens transparently, it's easy to track down calls which are failing and fix them, replacing the service with a new instance. On top of that, using a service mesh, debugging hundreds of microservices becomes easier and faster. A service mesh helps us gain control of the network through features like circuit breakers and splitting traffic through A/B tests. This essentially enables resiliency and enhances network robustness. When it comes to the secure part, we can get mutual TLS between containers without having to bake certificates into our containers, or tell the containers to flip to HTTPS, or validate trust chains, et cetera. Basically, any of the heavy lifting associated with certificates, we can now do all of that inside the service mesh. There can be downsides to reaching for a service mesh too quickly. Rob and I actually wrote a CNCF blog post about this whole topic, and as Rob just explained, I just wanna reiterate the conclusion that we both came to: we must remain cognizant of the cost of the additional resource requirements of a service mesh. You need a service mesh if you have any of these business needs: if you're running highly sensitive services like PKI, PCI, et cetera.
If you're running untrusted workloads, if you need security in depth, if you need A/B routing or a beta channel, or if you're running multi-tenant workloads, reach for a service mesh for observing, controlling or securing traffic in a Kubernetes cluster. Because the service mesh intercepts traffic into and out of each container, it's a great way to monitor and control traffic. Whether you're looking to secure this traffic with mutual TLS, or authorize inter-service communication, or monitor traffic between services, a service mesh can be a great choice to clean up the mess. And with that, I do wanna hand over our contact details, you guys. I am Kavya Pearlman on Twitter. You can find me on LinkedIn. You can also Google me. You can also reach out to me via wallarm.com, using requests@wallarm.com. And then there's my good friend, Rob Rich, who is available via Twitter at Rob-Rich and has this wonderful website where he puts out all of the slides that he uses and many, many other informational things: robrich.org. And so now we would love to take some of the questions that I see coming in. Rob, you ready for some questions?

I am. This is so much fun digging in with you, my good friend. I love it. It's been so wonderful. And we actually have a stack of questions. So let's begin. Oh, very nice. Let's dig in. Yep. And it was funny: when you were talking about API gateways, somebody actually had a question about API gateways, and you were literally answering it at the same time. Oh, that's perfect. So we are all kind of thinking alike. So the first question is: does a service mesh have built-in queues for queuing requests when a given service fails, so that the request can be retried when the failed service heals?

Most definitely. You can enable retries within your services inside the service mesh. There are some benefits and drawbacks to that, because I could automatically retry, but is the calling service gonna time out waiting for me to finish retrying? And so it's definitely possible, but it's one of those features that you want to consider carefully. Maybe I can retry once or twice, but retrying for five minutes, or a logarithmic backoff that may last a really long time, may not be a great use case. The service mesh definitely can do it, but you may want to steer away from that. Yeah, I remember you talked about it as we were discussing our webinar: it may actually compound the problem if you retry too soon. Right.
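In Istio, that kind of bounded retry, once or twice with a short per-try timeout, is a few lines on a virtual service. A minimal sketch, with a hypothetical service name:

```yaml
# A sketch of bounded retries on an Istio virtual service;
# "ratings" is a hypothetical service name.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
    retries:
      attempts: 2          # retry once or twice, never indefinitely
      perTryTimeout: 2s    # cap each attempt so callers don't time out
```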
All right, on to the next question: how does a service mesh help us in a situation where we make a lot of calls to external APIs which we don't control? Can we control traffic to those APIs too, like having a circuit breaker for external APIs? And I think you sort of talked about it, yeah. In the same way that you have a proxy between services, we'll have an egress proxy and an ingress proxy. And so as you're calling that external API, you'll go through the proxy, and the proxy can do things like automatic retries, circuit breakers and all of the features that we expect from a service mesh, but now we're using that to contact external services. Maybe they're on virtual machines, maybe they're even outside your cluster.

Yeah, and then: is there a way to control traffic across multiple K8s clusters? Yes, there is definitely a way to control this, and that's where service mesh comes in handy, is that right, Rob? Yeah, definitely. Controlling across clusters gets a little bit weird, cause it's like, well, which cluster gets to own this? But by all means, a service mesh can definitely help you there.

And we have the next question: is there any change required on the application pods in order to support canary upgrades? That is a good question, I like that. Is there any change involved in the pods themselves to support canary upgrades? Let's think through that a little bit, because that will really help drive home some of the principles of service mesh. We create, for example, these routing rules that tell us there's some traffic going to one service and some going to another service, and here's the rule that says what makes the traffic go between them. Do we see any changes in our code to make this happen? What's really cool is all of this lives completely in the service mesh. It doesn't need to be in our code at all. We just happen to run version two and version three of our service. We run those two sets of containers, and the service mesh takes care of everything else.

Hmm, wonderful. And I am gonna breeze through just a couple of questions. I do encourage everybody to reach out to us directly to get some of the answers. We're very active on Twitter, so please feel free, anytime, to tag us and ask these types of questions. But I do wanna take another question, which is: how does a service mesh relate to an ingress controller like Nginx? Can they coexist, or do you have to use only one exclusively?

That is a great question. Let's go back to this Istio diagram, and I'm gonna use Istio as an example, but the same occurs for others. Here in this bookstore app, we have this ingress proxy. This could be an Nginx ingress proxy; in this case, it's an Envoy proxy. And so our content comes into that proxy, and at that point it's now controlled inside our service mesh. Was there an Nginx ingress controller ahead of that? Maybe, probably not. Probably it hits that ingress Envoy proxy straight away. It's possible to do both. If you really, really want SSL termination in your Nginx ingress proxy, or you want really interesting rules there, you may choose to put the Nginx ingress proxy behind the service mesh ingress proxy. But I've generally found that the service mesh proxy, the service mesh ingress, is sufficient for the majority of my needs.

Cool. And then there is another question here: are Linkerd and Istio commercial off-the-shelf or open source? It is my understanding that they are open source, but there are commercially related tools available for protection and for other security services. One of the ones that I was part of was the Wallarm launch I talked about earlier: following many, many requests from customers, Wallarm extended its app and API security to work with distributed applications that use Envoy proxy. So it can not just protect north-south API traffic in applications that use Envoy as an alternative ingress controller at the front end of the Kubernetes cluster; it can also now protect the edge traffic and east-west Envoy API traffic for service meshes like Istio. So definitely, right, Rob? Istio and Linkerd are open source, but there are other commercially available tools as well. Right, exactly. Both Istio and Linkerd are free and open source, and there are other service meshes that you could consider, or other security products that you could choose to layer on top if you wanted to.
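To make that canary answer concrete: the "end-user Jason" rule from the demo is, again, pure mesh configuration. A sketch adapted from bookinfo's header-match sample to the versions described in the demo (the sample itself routes everyone else to v1):

```yaml
# A sketch adapted from Istio's bookinfo header-match sample:
# the "jason" user sees v2, everyone else sees v3. Only routing
# config changes; the application pods are untouched.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
```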
And I think with that, I just wanna take one last question, which we already went over, but it would be a nice recap of the session today: what are the strengths and weaknesses of Linkerd and Istio? And this is so funny, because we've always talked about it. We are gonna dig even deeper into those two, Linkerd and Istio. People came up to me during KubeCon as well, asking, hey, which one is which? And we're still on this journey, Rob and I, to really find out, in very micro, granular detail, what is really going on. But that's why, Rob, what do you think we have established? In what context should which one be used?

I think it's one of those questions like iPhone and Android, where there isn't a right answer, but there's a right answer for you. And so as we looked at it, we kind of looked at the methodology. With Istio, everything is in the box. And so if you don't want to have to pick features, you just wanna turn them on and off, Istio can be a great choice. It also includes the best of the open source packages for monitoring and traffic routing, and all of the pieces that we want to add are in the box. By extension, Linkerd focuses on very simple implementation, very simple installation. And so you can get up and running with Linkerd really fast, but you may hit a wall where it's like, but I want this advanced feature, and at that point you have to go pull in a third-party package. So for example, in Istio we saw the routing between A/B traffic, and that's just in the box. I can create a virtual service and I can route traffic across two things. With Linkerd, I need to pull in a third-party package that will monitor the metrics or the Prometheus sinks and be able to make intelligent decisions there, controlling the network service. I need to pull in that third-party package. So ultimately, would you rather have the erector set, or would you rather have the shiny box? Ultimately, you and your organization are gonna make the choice that is exactly perfect for you there. And I completely agree, that is the proper service mesh for you.

And I think this also kind of answers a little bit of the next question, which I wasn't planning to take, but it seems like this person really needs the guidance. They're trying to do the transition from EC2 to Kubernetes and are interested in features like service tracing and A/B testing. Both service meshes look interesting, but due to complexity, which one would you recommend to start with? And also, what kind of considerations apply if we decide to change from one to another? These are the types of scenarios where you really want to sit with an expert. I know we have tons of experts at Wallarm, and you can reach out to them at requests@wallarm.com, but I would encourage you to separately reach out to us directly, or to somebody in the Linkerd and Istio communities, and really truly engage, even if it's potentially under NDA with a commercial entity, to sit down and truly understand what exactly your use case is. Because after this, it really gets down to the nitty-gritty of what exactly you are. Are you a fintech? Are you a SaaS? What are you trying to achieve? How much heavy lifting do we need? And how flexible do you want to be? All of those things have to be considered before these types of decisions are made. So with that, thank you. And maybe into your scenario, just a smidge. Yep. Knowing that you're really trying to optimize to avoid complexity, you may not actually need a service mesh yet.
You may just be at the spot where the services operating in your cluster are sufficient and the additional computational cost isn't worth it. Yeah, that's very possible. And that's why- They can consult an expert like my good friend Kavya and figure out the things that you need to do there. Totally. We're always here to help out the community, and please feel free to reach out to us anytime. And thank you, Rob. This was awesome, as anticipated, and I loved it. And thank you everyone for attending the session. We look forward to continuing our journey together via CNCF. Most definitely. Thank you, everyone.