So, hello everyone. I hope you have enjoyed DevConf so far. This is the last talk in this session room, and it is about a journey into the world of service meshes and Meshery, by Nitish and Aditya. If you have any questions, please leave them in the Q&A section. And let's go into the talk.

Awesome. Thanks, Lucie. So, let me share my screen. Okay, is my screen visible? Nitish, can you confirm? Yes, yes, it is. Okay, nice.

Hello DevConf. How many of you are sitting in your chairs with a cup of coffee, wearing tops and bottoms that don't match each other? Don't worry, we are doing the same thing. It was mid-pandemic, in 2021, when we had no idea what a service mesh was. That's when we decided to explore it. I'm Aditya Krishna, and I'm an associate software engineer at Red Hat. I'm also a Meshery maintainer at Layer5. Along with me today, we have Nitish Kartik.

Hello everyone, I'm Nitish Kartik, and I'm from Chennai, India. I'm a maintainer of Meshery, and I've been contributing to most of the projects revolving around service meshes as well.

Thank you, Nitish. So, what can you expect from this talk? This is an introductory talk on service meshes and a couple of projects that revolve around them. The idea is to give you a taste of what a service mesh is and a glimpse of what is happening in the community around service meshes. So, let's get started.

Now, what is a service mesh? A service mesh is a way to control how the different parts of an application share data and communicate with each other. A service mesh can help you manage communication between services without compromising on security, and of course, your debugging process becomes a lot easier as well.

Coming to the service mesh architecture: this is what it looks like. We have two gateways; network traffic enters the service mesh through the ingress gateway and exits it through the egress gateway. Along with these, we have two more components, the data plane and the control plane. The data plane is responsible for all the magic around service-to-service traffic. And the control plane, which you can consider something like the brain of the service mesh, is responsible for service discovery, authentication, and a couple of other things.

So, why should you care? Your organization can benefit from a service mesh if you have large-scale applications composed of many microservices. As your application traffic grows, you might need complex routing capabilities to manage the flow of data between services. Service meshes are useful for managing transport layer security (TLS) connections between services. They also allow developers to focus on adding business value to the project rather than worrying about communication between the services. For DevOps teams that have an established CI/CD pipeline, a service mesh can be essential for deploying applications and application infrastructure, and for managing test automation suites using tools like Jenkins or Selenium. And it can help you manage network policies and security policies as well.
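To make that point about developers not worrying about inter-service communication concrete, here is a minimal Go sketch, with a hypothetical in-cluster service URL, of the kind of retry logic an application would otherwise have to hand-roll. With a mesh in place, the sidecar proxy applies retries, timeouts, and mutual TLS transparently, and code like this disappears from the application:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetries hand-rolls the retry behaviour that a service mesh
// sidecar proxy would otherwise apply transparently to every outbound
// call. With a mesh in place, application code is just http.Get(url),
// and the proxy handles retries, timeouts, and mTLS.
func getWithRetries(url string, attempts int, backoff time.Duration) (*http.Response, error) {
	var lastErr error
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a 4xx we should not retry
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(backoff * time.Duration(i+1)) // simple linear backoff
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// Hypothetical in-cluster service URL, purely for illustration.
	resp, err := getWithRetries("http://reviews.default.svc.cluster.local:9080/health", 3, 200*time.Millisecond)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```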
You might have heard about the many service meshes out there, like Linkerd, Istio, Kuma, VMware Tanzu Service Mesh, and a couple of others. So, why are there so many of them? As different teams within a company adopt different service meshes, the challenge of managing multiple service meshes becomes a reality. Instead of using one big service mesh, it often makes more sense to install many small service meshes in multiple clusters.

Now, to learn about the differences between them, you can visit vid.ly/devconf.sml, which directs you to a page that gives you a comprehensive comparison of the various service meshes. It covers non-functional aspects, like whether a mesh is open source and what language it is built in, and it also gives you thorough detail on functional aspects, like whether a mesh supports Prometheus integration or proxy injection, and a couple of others.

So, to help you manage these small service meshes, which you would hopefully deploy across multiple clusters, we have Meshery, which my friend Nitish will explain. Over to you, Nitish.

Thank you, Aditya. As Aditya explained, managing multiple service meshes is a real need of the hour: people are looking to deploy multiple meshes in their clusters, and they don't want to learn how each and every service mesh implements the functionality it provides. That is where Meshery comes in. Meshery is a service mesh management plane that sits on top of your service mesh architecture, and it enables the adoption, operation, and management of any service mesh and its workloads. You can learn more about it by going to meshery.io.

Now let's look at the architecture of Meshery. Meshery essentially has three major components. One is the Meshery Server, which you can relate to the Kubernetes API server, in that the client communicates with the server. Then we have the Meshery Operator, which is deployed inside the cluster and handles things like synchronizing the state of the cluster with Meshery, health checks, and so on. And we have a bunch of adapters, one for each of the service meshes that Meshery supports. These adapters are what allow you, as a Meshery user, to manage multiple service meshes at the same time. The communication between Meshery and the adapters is generally done via gRPC.

So, let's see what service mesh patterns are. Since service meshes have strong control over the networking aspect of your infrastructure, they allow people to do things like circuit breaking, retries, and so on, which slowly evolved into a set of good practices, or patterns, as we call them. Service mesh patterns are about enabling the use of repeatable architecture templates, and these are some of their characteristics. Service mesh patterns are still actively under development, and if you are interested, I highly encourage you to come join the community and contribute. There are many more patterns out there, some of which were created by our community members at Layer5, and you can check them out by visiting github.com/service-mesh-patterns and meshery.io/catalog.

We also have a plugin in Meshery called MeshMap, which allows you to visually edit and manipulate your patterns and also visualize your infrastructure.
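Since Meshery talks to its adapters over gRPC, one way to picture that relationship is a tiny health probe against an adapter endpoint. This is only a sketch: it assumes the adapter at localhost:10000 (the Istio adapter's default port in the demo that follows) exposes the standard gRPC health-checking service, whereas Meshery's real server speaks the adapters' own protobuf API.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Hypothetical adapter endpoint; in the demo the Meshery Istio
	// adapter listens on localhost:10000.
	conn, err := grpc.Dial("localhost:10000",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Standard gRPC health-checking protocol. We assume here that the
	// adapter registers this service; Meshery's actual client uses the
	// adapters' own MeshService protobuf API instead.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("adapter health:", resp.GetStatus())
}
```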
So, without wasting our time, let's go to the demo right now. I'm sharing my screen. Is my screen visible? Aditya, can you confirm? Yes, it is. Okay. So, we have Meshery over here, and as you can see, it is connected to a Docker Desktop Kubernetes cluster. I can check the connection between the Kubernetes cluster and Meshery by pinging it, which gives you its status.

As I discussed on the Meshery architecture slide, we have Meshery adapters, which allow you to manage multiple service meshes. So, let's actually try to configure one of the adapters. I have one of them, the Meshery Istio adapter, running at localhost:10000. Okay. Now that I have connected to the Meshery Istio adapter running at localhost:10000, I should be able to manage the lifecycle of the Istio service mesh. We have options like deploying a service mesh and automatic sidecar injection; this is basically where we manage the lifecycle of service meshes. These are the service meshes that Meshery currently supports, and the list is actually growing. It also has support for Grafana, Jaeger, Kiali, and some of the other add-ons that Istio natively supports, and the same goes for all the other service meshes as well.

And we actually have the ability to create patterns, edit patterns, and so on, and we can deploy patterns over here. Although I won't be able to cover all the aspects of what Meshery is trying to accomplish, I'll just give you a taste of the components that are available. We have a pattern configurator, which we can use to create, manipulate, and deploy patterns, and we have support for the multiple adapters that are currently deployed. Apart from that, we also have service mesh performance tests and performance profiles, so we can compare how service meshes contrast with each other and analyze the performance of whichever service mesh you want. So, yeah, that is a bit about what Meshery is and what we're trying to accomplish. If that's of interest to you, then you should join the community. I would now like to hand it over to Aditya Krishna to talk about what SMI and SMP are. Aditya, over to you.

Thank you, Nitish. This part is about SMI, the Service Mesh Interface. It is a specification that covers the most common service mesh capabilities, like traffic policy, traffic telemetry, traffic management, and a couple of other things. SMI is specified as a collection of Kubernetes custom resource definitions (CRDs) along with extension API servers. These APIs can be installed onto any Kubernetes cluster and manipulated using standard tools.

Coming to SMP, which stands for Service Mesh Performance: it is a standard for capturing and characterizing the details of infrastructure capacity, service mesh configuration, and workload metadata. It captures details like your environment and infrastructure, the number and size of your nodes, the service mesh and its configuration, and your workload and application, and it also does statistical analysis to characterize the performance. So, that's a brief of SMI and SMP.
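To give a feel for what "manipulated using standard tools" means, here is a small Go sketch that creates an SMI TrafficSplit with the Kubernetes dynamic client, shifting a quarter of the traffic for an imaginary reviews service to a v2 backend. The service names and weights are made up for illustration; any mesh implementing the SMI split API would enforce the resulting split.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load kubeconfig from the default location.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// SMI's TrafficSplit CRD lives in the split.smi-spec.io API group.
	gvr := schema.GroupVersionResource{
		Group:    "split.smi-spec.io",
		Version:  "v1alpha2",
		Resource: "trafficsplits",
	}

	// Hypothetical canary: send 25% of "reviews" traffic to reviews-v2.
	split := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "split.smi-spec.io/v1alpha2",
		"kind":       "TrafficSplit",
		"metadata":   map[string]interface{}{"name": "reviews-canary"},
		"spec": map[string]interface{}{
			"service": "reviews",
			"backends": []interface{}{
				map[string]interface{}{"service": "reviews-v1", "weight": int64(75)},
				map[string]interface{}{"service": "reviews-v2", "weight": int64(25)},
			},
		},
	}}

	created, err := client.Resource(gvr).Namespace("default").Create(
		context.TODO(), split, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created TrafficSplit:", created.GetName())
}
```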
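And as a toy illustration of the kind of statistical characterization SMP is concerned with (this is not SMP's actual schema), here is a short Go sketch that summarizes a set of hypothetical request latencies from a load test into p50 and p99 values:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0-100) of sorted latencies,
// rounding to the nearest rank.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(float64(len(sorted))*p/100.0+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(sorted) {
		rank = len(sorted) - 1
	}
	return sorted[rank]
}

func main() {
	// Hypothetical request latencies measured against a meshed service.
	latencies := []time.Duration{
		4 * time.Millisecond, 5 * time.Millisecond, 5 * time.Millisecond,
		6 * time.Millisecond, 7 * time.Millisecond, 9 * time.Millisecond,
		12 * time.Millisecond, 15 * time.Millisecond, 22 * time.Millisecond,
		40 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })

	fmt.Println("p50:", percentile(latencies, 50))
	fmt.Println("p99:", percentile(latencies, 99))
}
```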
If you have any doubts, you can reach out to us at discuss.layer5.io, and feel free to get in touch with us on our community Slack at slack.layer5.io. We are also welcoming new contributors interested in contributing to open source, for which you can visit github.com/layer5io. Thank you, everyone. That was our talk on service meshes and Meshery, and we are open to Q&A.

Thank you. Sorry, Hopin is playing with me a bit, but thank you for your talk. It was really interesting. There's no Q&A so far, so if anyone has questions, please reach out to Aditya and Nitish. If you're from Red Hat, you know where you can find them, or they will be on Hopin for a few minutes. So, thank you. Thank you for the session.