Hello everyone. Good to be here. My name is Anitai Human and I'm joining this call from Nigeria. Today I'll be speaking on solving the service mesh adopter's dilemma. In this session I'll be answering some of the questions that adopters frequently ask, as well as giving you some detail on service meshes. Well, let's get to it. During this session we're going to look at an introduction, then getting started with service meshes, then the functionalities of service meshes, why we should adopt service meshes (or why you could actually adopt them), the service mesh architecture, the service mesh abstractions, the adopter's dilemma, and finally the service mesh management plane called Meshery. A little bit about me: I am Anitai Human, a software developer, a MeshMate at the Layer5 community, a developer advocate at Cubano, a technical writer, and an open source advocate. Now let's get to the topic of today. What is a service mesh? That's the first question every single adopter will ask. In layman's terms, a service mesh is simply a way to control how the different parts of an application, or microservices, communicate and share data with one another. It's otherwise considered a microservice platform, which is true. And if you're familiar with microservices, you might be wondering: microservices on their own are able to deal with the service-to-service communication that goes on within them, so why exactly do we need a service mesh in the first place? Well, the answer is simple. It is very possible to code the logic that governs the communication between each of these services without the use of a service mesh layer.
But in a situation where that communication becomes more complex as time goes by, the service mesh starts becoming more valuable. What happens in a service mesh is that it takes the logic governing service-to-service communication out of the individual services and abstracts it into a layer of infrastructure. And so every service mesh is a dedicated layer for service-to-service communication. For collaborative applications built on a microservice architecture, service meshes are a way to compose a large number of discrete services into a functional application. And the good thing is you do not have to run a microservice application before you can actually use a service mesh, although you have a higher chance of benefiting from a service mesh when you run more services. Over 85% of applications that use microservices are currently using service meshes to manage their microservices, and that tells you how important service meshes are becoming as the days go by. Now let's look at the functionalities of service meshes. People adopt service meshes for any number of reasons, but it's often a result of the use cases they're looking to benefit from. Some of the capabilities that people frequently look out for in a service mesh, or deploy a service mesh for, are: one, observability. Using a service mesh, you can generate all kinds of traces, logs, and metrics, and you can ingest these into your monitoring system of choice to get value without instrumenting your applications. This is one of the very important capabilities that adopters look out for, because you get consistent metrics across the fleet, you can trace the flow of requests across services, and you can gather metrics without instrumenting your applications. Another important capability is security.
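Before we get to security, here is a toy sketch of that observability idea: a wrapper object stands in for the sidecar proxy, recording request counts, errors, and latency around a service call while the service function itself stays completely uninstrumented. All names here are hypothetical illustrations, not part of any real mesh's API.

```python
import time

class ObservingProxy:
    """Toy stand-in for a sidecar proxy: wraps calls to a service and
    records metrics, so the service code itself stays uninstrumented."""

    def __init__(self, service_fn):
        self.service_fn = service_fn
        self.metrics = {"requests": 0, "errors": 0, "total_latency_s": 0.0}

    def call(self, *args, **kwargs):
        start = time.perf_counter()
        self.metrics["requests"] += 1
        try:
            return self.service_fn(*args, **kwargs)
        except Exception:
            self.metrics["errors"] += 1
            raise
        finally:
            self.metrics["total_latency_s"] += time.perf_counter() - start


# The "application" knows nothing about metrics:
def greet(name):
    return f"hello, {name}"

proxy = ObservingProxy(greet)
print(proxy.call("world"))  # callers talk to the proxy, not the service directly
```

In a real mesh the proxy is a separate process (such as Envoy) injected next to each service, and the collected telemetry is shipped to your monitoring backend rather than kept in memory.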
Of course, central to the service mesh concept is identity. Every service in the mesh owns a unique identity, and you can use these identities to facilitate secure connections for the service-to-service communication that goes on within your application. Another very useful functionality to look out for is traffic control. When you lay down proxies and are able to control configuration and direct traffic, you realize there's so much you can actually achieve with service meshes, and this includes traffic steering, traffic splitting, ingress, and egress routing. Another very useful capability that people frequently look out for is resilience. Of course, you can do so much chaos engineering with the use of service meshes. You can configure the mesh to provide resilience to your services, and you can even add retries for your failed requests. With a service mesh, you can handle timeouts and systematic fault injection, you can control connection pool sizes as well as request load, and you can handle circuit breakers and health checks within your application. These are all capabilities that people frequently look out for when they use service meshes. Now let's look at why they actually adopt service meshes. The answer still comes down to the use cases they're looking for when adopting a service mesh, but it's often to avoid bloated service code; to avoid duplicating the work needed to make a service production-ready, which can be seen in load balancing, autoscaling, rate limiting, and traffic routing; and to avoid inconsistency across services, which includes retries, failovers, deadlines, cancellation, and so many others. It's also to diffuse the responsibility of service management.
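The resilience features just described, such as retries for failed requests and circuit breakers that stop hammering an unhealthy service, can be sketched in a few lines. This is a minimal illustration of the pattern, with made-up thresholds, not any particular mesh's implementation.

```python
class CircuitBreaker:
    """Toy sketch of resilience policies a mesh proxy applies on behalf of a
    service: retry failed requests, and after too many consecutive failures
    'open' the circuit and fail fast instead of calling the service at all."""

    def __init__(self, max_retries=2, failure_threshold=3):
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0

    @property
    def open(self):
        return self.consecutive_failures >= self.failure_threshold

    def call(self, service_fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        for _attempt in range(self.max_retries + 1):
            try:
                result = service_fn()
                self.consecutive_failures = 0  # success resets the breaker
                return result
            except ConnectionError:
                pass  # transient failure: retry
        self.consecutive_failures += 1
        raise RuntimeError("request failed after retries")


def always_fails():
    raise ConnectionError("upstream unreachable")

breaker = CircuitBreaker(max_retries=1, failure_threshold=2)
for _ in range(2):
    try:
        breaker.call(always_fails)
    except RuntimeError:
        pass
print(breaker.open)  # True: further calls now fail fast
```

The point of putting this in the mesh layer, rather than in each service, is exactly the consistency argument above: every service gets the same retry and breaker behavior from configuration, with no duplicated code.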
Another very important reason people adopt service meshes is to help with modernization. With a service mesh, you can modernize your IT inventory without rewriting your applications. You can adopt microservices alongside your regular services, and they work perfectly together. Like I said earlier, you can actually use a service mesh without running your application as microservices. You can also adopt new frameworks that arise while you build your application. And finally, you also get to move to the cloud. Of course, the introduction of the cloud has made the development process a lot easier and has sped everything up when it comes to production. All of this saves the developer so much time and improves developer speed, which is another very important reason why you would want to adopt a service mesh. Now let's move on to look at the service mesh architecture. The service mesh architecture is divided into three layers. The first is the data plane, and this layer is considered the workhorse of the service mesh. This is where all of the service proxies are logically grouped, and they are responsible for a lot of purposes such as executing traffic control, health checking, routing, load balancing, autoscaling, authorization, observability, and so on. Moving on to the second layer, the control plane: this is where an operator interfaces with the service mesh. It deals with speaking to the proxies and updating the configuration for a given service mesh. So in the control plane you should be able to see that your service mesh provides policies, configuration, as well as platform integration. It takes a set of isolated, stateless proxies and turns them into a service mesh, and it does not touch any packets or requests in the data path. And finally, the last layer, which is called the management plane.
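Before we look at the management plane, the relationship just described between the control plane and the data plane can be sketched like this: proxies forward requests according to their local configuration, while the control plane never touches a request itself and only pushes configuration out to every proxy. The class and field names are hypothetical, for illustration only.

```python
class Proxy:
    """Data-plane sketch: each service's proxy holds its own routing
    configuration and forwards requests according to it."""

    def __init__(self, name):
        self.name = name
        self.config = {}

    def route(self, request):
        target = self.config.get("route_to", "default-backend")
        return f"{self.name} -> {target}: {request}"


class ControlPlane:
    """Control-plane sketch: it handles no requests itself; it only pushes
    configuration to every registered proxy, turning a set of isolated
    proxies into one coordinated mesh."""

    def __init__(self):
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)

    def push_config(self, config):
        for proxy in self.proxies:
            proxy.config = dict(config)  # each proxy gets its own copy


control = ControlPlane()
a, b = Proxy("svc-a"), Proxy("svc-b")
control.register(a)
control.register(b)
control.push_config({"route_to": "reviews-v2"})
print(a.route("GET /reviews"))  # svc-a -> reviews-v2: GET /reviews
```

This separation is why one configuration change can redirect traffic across the whole fleet: the operator talks to the control plane once, and every data-plane proxy picks up the new behavior.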
This is the layer at the top, which helps to federate service mesh deployments. The management plane performs a lot of things that you would most likely not expect from the control plane, and this includes providing federation, backend system integration, expanded policies and governance, continuous delivery integration, workflow, chaos engineering, application performance, and so much more. A little later we're going to look at the service mesh management plane called Meshery, which enables operators, developers, and service mesh owners to realize the full potential of the service mesh. Of course, with service meshes come abstractions to the rescue. Some of the abstractions that exist in the service mesh space are the Service Mesh Interface, a standard interface for service meshes on Kubernetes; the Service Mesh Performance, a format for describing and capturing service mesh performance; and the Multi-Vendor Service Mesh Interoperation, a set of API standards for enabling service mesh federation. All three of these abstractions are projects that satellite the Meshery project. Most of these projects, like the Service Mesh Interface and Service Mesh Performance, are also projects that Meshery utilizes in the Layer5 community. Now you can see a quick view of what the Service Mesh Performance looks like. If you want to know more about Service Mesh Performance, I suggest you check out the links provided on this slide, which I'll share later, and you'll understand all about it. This is actually one of the projects recently adopted by the CNCF. And then you can also see a quick view of the Service Mesh Interface as well. Now moving forward, we're going to be looking at the adopter's dilemma.
As an adopter, or as someone who is new to service meshes, you must have tons of questions, which of course I did when I first heard of service meshes. I always asked: which service mesh is actually the best for an organization, since there are numerous service meshes out there? How does one get started with service meshes? What is the catch in using a service mesh? And all of these questions. There are over 20 different service meshes, and within the Layer5 community we have a project that creates a landscape of these different service meshes and related technologies. On this landscape you're going to see that it allows you to compare which service mesh is a better option for you or your organization, and which has more capabilities to help you in one way or another, whatever you're using your service mesh for. It also helps you understand which will be the best fit for running your microservice applications. If you want to find out more about this landscape project, you can look at the Layer5 website at layer5.io/landscape, and you'll get answers to most of the adopter questions you're probably asking right now. Moving on, let's look at the service mesh management plane called Meshery. But before we do that, let's look at the organization behind this project. Meshery is a project maintained by the Layer5 community. Layer5 is an organization, a community, that offers cloud native management software and explores the unique position service meshes have in changing how developers write applications, how operators run modern infrastructure, and how service mesh owners are able to manage their own service operations. And with tools spanning cloud native infrastructure and applications, Layer5 has empowered a lot of developers, operators, and service mesh owners.
The Layer5 community stewards three cloud native foundation projects, chairs networking and service mesh groups within the CNCF, and is one of my favorite open source communities, of course. One of the popular projects within the Layer5 community is Meshery, which is considered the service mesh management plane. Meshery is the largest open source project that exists within the Layer5 community, and within the Meshery project there are a few projects that satellite it; some of them are extensions, whereas others are actually standalone projects. Some of these projects have already been donated to the CNCF, as you can see, or like I mentioned earlier: the Service Mesh Patterns, Service Mesh Performance, Service Mesh Interface, and so on, as well as Meshery itself. Meshery has participated in quite a number of internship programs, some of which are Google Season of Docs, Google Summer of Code, the Linux Foundation mentorship program, and similar initiatives, and it is a project under the CNCF with satellite projects such as Service Mesh Performance and Service Mesh Interface. Meshery is simply the multi-mesh management plane that handles the lifecycle, workloads, performance, configuration, patterns, and practices within your application. This project already has quite a number of adopters and supporters, most of whom are looking to incorporate Meshery into their release processes in order to measure adherence to service mesh standards. Some of the adopters Meshery has are HashiCorp Consul, Network Service Mesh, Octarine, Linkerd, and so many others. This is a picture of what the Meshery architecture looks like.
You can see that it actually does so much for your application and handles all of the service mesh management that goes on, from the lifecycle down to the workloads and configurations, and deals with patterns and so many others. All of these make up the Meshery project. If you want to join the Meshery community, you are very welcome to jump on it. It is a warm, welcoming community, and the Meshery project, along with all its sub-projects, is built within the Layer5 community. The Layer5 community has engineers from different organizations such as Intel, Red Hat, Rackspace, HashiCorp, and so many others from different open source organizations. And the Layer5 community, particularly Meshery, runs the number one most popular Linux Foundation mentorship project. Layer5 is an open source community that looks out for sustaining open source governance and not just open source contributions. We have over 300 contributors with 15 maintainers across different organizations. So far we've had over 1,000 Meshery users, over 600 followers, and over 1,000 stars on our GitHub repositories. And of course our Slack community has over 2,000 members. Within the Layer5 community we have a program called the MeshMate program, which highlights members of the community who help mentor and onboard others coming into the project. And of course we have been highlighted as the number one most popular project in the Linux Foundation mentorship program. There's actually so much going on within this community. Aside from the service mesh projects that we run, we also have a community of support, where we support each other to grow. We grow together as one community. Thank you so much for your time and thank you for listening.
If you have any questions for me right now about service meshes and all that I've said in this session, please do go ahead and ask. You can also connect with me via Twitter, GitHub, or LinkedIn. And if you'd want to join the awesome Layer5 community that maintains Meshery, you can do that as well by jumping on the links provided, and you'll find out more about this particular project and community. I hope this session has enlightened you in one way or another about service meshes and how you can handle your challenges as an adopter who is new to the service mesh ecosystem. Thank you so much for your time and have a nice day.