So the next talk is going to be The Truth About Adopting a Service Mesh, by Lin Sun.

Welcome to my session, The Truth About Adopting a Service Mesh. Let me quickly introduce myself. My name is Lin Sun. I am the Director of Open Source at Solo.io. I've been working in the Istio project for four-plus years, I have 200-plus patents, and about a year ago I wrote a book, Istio Explained, to help users quickly get started with Istio.

A thought process for adopting a service mesh: there are many questions to ask yourself. What is a service mesh? Do you need a service mesh? What service mesh projects are available out there? Where do I start? What are the surprises and benefits? And what's the next step for you, if you decide a service mesh is right for you?

Challenges with microservices. As you all know, as services move to being cloud-native and Kubernetes-based, a set of new challenges arises. How do you observe interactions among your services? How do you secure the communication among your services? How do you increase the resilience of your services? And how do you control traffic as new versions come in? You want to be able to precisely control how traffic goes to the new version.

What is a service mesh? Fundamentally, a service mesh is a programmable framework that allows you to observe, connect, and secure your microservices. A service mesh solves application networking problems for you, so you don't have to solve those problems in your application and can focus on your application's business logic. A service mesh provides discovery of your services, secures the communication among your services, and gives you traffic control: shifting, shaping, mirroring, and many other functions. A service mesh also allows you to apply service-based access control to mesh services. The fact that the service mesh has application-level information allows it to do intelligent things.
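To make that traffic-control point concrete, here is a minimal, hedged sketch of how one mesh, Istio, expresses a weighted traffic split between two versions of a service. The `reviews` host and the `v1`/`v2` subset names are illustrative, and the subsets would be defined in a separate DestinationRule:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-shift          # illustrative name
spec:
  hosts:
  - reviews                    # illustrative in-mesh service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1             # subsets come from a DestinationRule
      weight: 90               # 90% of traffic stays on the old version
    - destination:
        host: reviews
        subset: v2
      weight: 10               # 10% shifts to the new version
```

The point is that the split is declarative: you adjust the weights, and the control plane reprograms the sidecar proxies, with no change to the application.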
A service mesh also provides telemetry collection and increases your service resilience, all without you needing to write any code in your application. Last but not least, a service mesh provides a programmable API, so you can program the service mesh, which in turn programs the sidecar proxies for you.

A quick architecture overview: what is a service mesh, and how does it work? There are two key components: the control plane on the top and the data plane at the bottom. The control plane is what you interact with to program the service mesh, and the service mesh in turn programs the sidecar proxies, shown in pink. The proxies start with a default configuration, which is a little dumb, and then through your programming the proxies can be customized for your specific applications. The proxy can, for instance, upgrade a connection to a secure connection, collect telemetry data for you, and control which endpoint the traffic is sent to. A lot of intelligence is built into the proxy to help you mediate that traffic.

So do you really need a service mesh? Only you can decide, because you know your organization really well. You know how many microservices you have, how many languages your teams are using, whether you are running on Kubernetes or VMs, how far along your cloud-native journey is, what protocols your applications are using, whether you are using any StatefulSets, what scale you are looking at for your services, and whether you need to consistently collect telemetry data. So only you can decide.

Do you have supporting infrastructure? Keep an eye out. Even if you're not using a service mesh today and don't think you have a need, I do want you to keep an eye out, because your organization might change: an explosion in the number of services, an increasing number of languages, complaints about maintaining frameworks, libraries, and binaries across multiple languages, and certificate rotation every 30 or 90 days.
How are you maintaining that, if you are doing in-house certificate management for your proxies or your application services yourself? Or do you ever need a better observability system, to be able to pinpoint which service has a problem? It's also important to check whether you have the supporting infrastructure. Do you have metrics, logging, and tracing systems? Are you using GitOps? Do you have CI/CD and a Git repository? Are you using Kubernetes, or VMs? There are a lot of things to consider. At the end of the day, a service mesh is a critical piece of your infrastructure. We want you to make sure you think wisely and pick wisely, because if the proxy goes down, it will take down your data plane, and it will take down your services.

Now let's get to the next topic: how do you select a service mesh, if you do decide your organization really needs one? The first thing I want you to think through: remember that the sidecar proxy is your data plane. Are you going with Envoy or something else? The industry has largely settled on Envoy as the default proxy for a service mesh, maybe because Envoy is really battle-tested in production environments like Lyft, and also because Envoy has a very mature, very diverse community. As you can see, Istio, Consul Connect, App Mesh, Kuma, and Open Service Mesh all landed on Envoy. There are certain exceptions, though, so you want to check that first.

You also want to think about maturity and production deployments: how many users have deployed the service mesh in production? Because that's battle testing; somebody else has done the testing for you. Istio, Linkerd, and Consul Connect are, I think, the three most-adopted service meshes in production today. Production support and multi-vendor backing are also extremely important: if one vendor fails, do you have somebody else to go to? In this case, Istio is multi-vendor. It was founded by three big companies and now has a growing ecosystem.
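Coming back to the certificate-management pain mentioned above: a concrete illustration of what a mesh takes over for you is mutual TLS. In Istio, for example, a single PeerAuthentication resource can require mTLS, with workload certificates issued and rotated automatically by the mesh. A minimal sketch, assuming `istio-system` is the mesh's root namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # applied in the root namespace, this is mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars accept only mutual-TLS traffic
```

Compare that one resource with hand-rolling certificate issuance and 30- or 90-day rotation across every service yourself.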
More vendors are coming into Istio and providing production support for their Istio solutions. Certainly some of your workloads are probably on Kubernetes, so you want to watch out for Kubernetes support; Istio and Linkerd are certainly ahead of the game in this scenario. VM support: if you have workloads on VMs, you want to make sure the mesh can continue to support them. Maybe you don't have any plan to move them to Kubernetes, or maybe you want to enjoy the benefits of the service mesh while they are running on VMs. So check out a service mesh that has decent VM support. CNCF did a survey last year; if you haven't seen it, Istio was the most dominant service mesh in production, at about 45 percent, toward the end of 2020.

Sorry, my slides are not quite turning. Okay, there we go. Where do I start? If you do decide Istio is the service mesh you want to go with, I would say definitely start at the edge, because that's the easiest way to adopt a service mesh. You can have an Envoy proxy at the edge and then gradually add your services to the mesh. You don't have to add them on day one; in fact, in most common cases, just don't add them yet. After you move to an Envoy-based API gateway, which is the Istio ingress gateway or Gloo Edge, you can then start adding other services to the service mesh.

I want you to pick a specific use case and iterate. There's no magic. Look at your business needs, not the buzzwords. A lot of the users I work with adopt Istio for mutual TLS: they want to use Istio to upgrade the connections among their services to mutual TLS, and they want to do authorization and authentication among their services and at the edge. That's the most common case, in my opinion. We also have users adopting Istio for better telemetry collection and better resilience, so they can do global failover, and do it securely and with better performance. You can default to local, and when local fails, the traffic automatically routes to the remote cluster, or to a cluster in a different data center running the same service.

A canary release for a specific service is also a very common case, but I think it's less likely as a starting point, because you could potentially work around it in other ways. Still, doing it with Istio is a lot easier: you don't have to do the actual canary deployments and everything yourself; you can just use Istio resources to shift that traffic automatically for you.

The next thing I want you to think about is how your teams are going to integrate with Istio. What are your teams, who owns which resources and functions, and how much does each team need to learn? Ideally you want to minimize the learning for the service teams: your platform team should understand Istio and its resources and be able to hide that complexity from the service teams, so that your service teams can leverage the mesh without much learning at all, just injecting the Istio sidecar and maybe using the default policies from the platform team. What consumption pattern are you going to provide to your service teams? Ideally you want them to be self-service, so you can enable them easily. And how do you enforce policy at the global level, and maybe at the team and namespace level, while still allowing teams to override as needed?

What are the surprises? Of course, as with every journey, there are surprises with a service mesh, and I think some of these are actually general to all service meshes. The proxy and application container startup order can be a surprise: the proxy may actually start after your application container starts. This is a problem because the traffic is not secured, and traffic may not be able to reach any services outside of Kubernetes, so your application container is not going to function until the proxy gets fully started.
There is a similar problem at shutdown time: your proxy could terminate first, which would disconnect your application container, and in-flight traffic would get 502s sent back to your users. These outcomes are not desired. Fortunately, there are workarounds for both. For the first one, there's an annotation you can apply at the pod level, holdApplicationUntilProxyStarts. For the second one, there's a preStop hook, which we highly recommend all users apply, especially on the Istio ingress gateway, so make sure you use it. I was once pulled into a production debugging call, and it turned out there were no preStop hooks.

If you're using StatefulSets, I want to encourage you to use Istio 1.10. The reason is that the container networking behavior changed: prior to 1.10 it was different from Kubernetes container networking behavior, which was super confusing if you were running StatefulSets. So check out 1.10.

And the last one on the list: many of our users find the defaults for timeouts and retries a huge shock. For example, the default is two retries and actually no timeout, and sometimes that's not what the user wanted. Thankfully, Istio provides an easy way for you to override that, either globally or at the route level. You just have to know these surprises ahead of time, to help with your troubleshooting and debugging.

The Istio API is very rich, and it may be a little hard to navigate, so here are some of the key resources. At the edge, I want you to think through Gateway and VirtualService. If you're adopting Istio for security purposes, check out the PeerAuthentication policy. If you're using Istio for authorization or request-based authentication, check out AuthorizationPolicy and RequestAuthentication. If you're using Istio for routing traffic, traffic shifting, or resiliency, check out VirtualService and DestinationRule. If you are importing external services into the mesh, or have workloads running on VMs, check out WorkloadEntry and ServiceEntry.

What are the benefits? For this, the most important part, I actually reused slides from our friend at T-Mobile, the reason being that it's much easier for a user to state the benefits than for me, who works on the project and has an interest in it. As you can see, he reported a 50 percent reduction in MTTR and, most importantly, savings in engineering hours worth around $100K. That is huge. Thanks to the T-Mobile team for being willing to try Istio; I believe they have 100 clusters with Istio in production.

What are the next steps, if you are convinced you need a service mesh and Istio is the right way to go? I highly recommend you check out Solo.io and sign up for a workshop. I wrote the first workshop myself and co-authored the second workshop with Christian Posta. The first workshop, on Istio fundamentals, allows you to get started with Istio easily; the second workshop teaches you how to deploy Istio to a production environment. Very helpful, and there are badges as well. Also check out Gloo Mesh if you need Istio production support or an opinionated layer on top of Istio or another service mesh. With Gloo Mesh we support Istio long-term, N minus 4 now (it was N minus 3 when I first made the charts), and we have FIPS builds and different upstream builds available too, so definitely check it out.

That's all I have. Thank you so much for coming to my session, and let me know if you have any questions. Follow me on Twitter, and follow Istio and Solo.io as well. Thank you.

Okay, that was the end of the presentation. Drop any questions in the Q&A for Lin. How are you doing today? I'm doing great. Let me know if you guys have any questions; I'll be here. Is there anything in addition you want to say, beyond the presentation? So, if you're really interested in Istio, in 60 minutes I'm actually going to give a workshop on how to get started with Istio. You can find my workshop in the tracks: just go to Tracks, then Workshops, and you should be able to get to it. I think pre-registration may be required, so if there's space you should be able to get in. All right, great. We'll just wait for a couple of minutes to see if there are any questions. Okay, I don't think anyone has any questions. That was a very comprehensive presentation, thank you so much. Well, thank you. I appreciate all the feedback so far, really appreciate it. Thanks for having me. Take care, bye.