All right, let's go ahead and get started. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Intro to Service Mesh Interface, also known as SMI. I'm Karen Chu, Community Manager at Microsoft and CNCF Ambassador, and I'll be moderating today's webinar. I'd like to welcome our presenters today: we have Lachie Evenson, Principal Program Manager at Microsoft, Thomas Rampelberg, Software Engineer at Buoyant, and Stefan Prodan, DX Engineer at Weaveworks. And before we get started, just a few housekeeping items. During the webinar, you will not be able to talk as an attendee. There is a Q&A box at the bottom of your screen. Please feel free to drop in your questions and we'll get through as many as we can throughout the webinar. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, just be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Lachie, Thomas and Stefan to kick off today's presentations. Okay, thank you very much, Karen. And welcome, everyone. I'm gonna stop the share because we're gonna be doing this fireside chat style today. So very excited for you all to join us and to learn a little bit more about Service Mesh Interface. I think we should start at the beginning when we talk about service meshes. So the first question we wanna ask and answer is: what is a service mesh and why are they important? Thomas? Oh man, you're asking me the tough questions. So what is a service mesh and why is it important? Well, proxies are pretty awesome. Why bother only putting proxies at the edge? Put them next to all your applications.
So the basic idea is that with Kubernetes, because you have sidecars, we can start to put proxies right next to your application, inside your pods. That's the deployment model, but what does that actually get you? Well, especially in a microservices world, it becomes difficult to keep three things consistent across your organization. One is observability. You want metrics that are standard and consistent across all of your applications, whether they're written in Go or JavaScript or Ruby. Number two, you wanna be able to have mTLS. As all of us are moving into this cloud native ecosystem, most of the networks are zero trust, and so you wanna have all of the encryption automatically taken care of for you, and a service mesh does that for you. Then the third major part of service meshes is reliability. Again, once you split a monolith into microservices, you end up with quite a few problems around reliability, like retries and timeouts. And the service mesh makes it so that instead of someone having to implement that in every language for every microservice you have, it gets implemented once and then gets managed globally across your whole cluster. So it really is a separation of concerns. Let the app developers focus on what they do really, really well and let the network focus on what it does really well, and keep those concerns separate. What do you think, Lachie? Does that do a good job? I think you did a fantastic job. Do you have anything to add, Stefan? Yeah, I think it's important to say that service mesh is an evolution on top of CNI. CNI does all the layer four networking for you, magically, behind Kubernetes. And CNI doesn't quite understand things like HTTP, like retries, like identity, all this stuff. So a service mesh, being mostly a layer seven network interface on top of CNI, adds all these nice capabilities and of course some observability.
It's not the magic solution for observability, though. It adds more visibility, but you still need to do a lot of things with your applications to be at that level where traces actually mean something. It's a great step forward in that direction, though. I've heard it referred to on the observability front as bubbling up the golden metrics as defined by the Google SRE book. So, you know, your P99s, your error rates and things like that, so you can, at a glance, get a look at the health of a service. I'm actually going to add a quote, and this is my favorite quote on service meshes today, on what a service mesh is. It's from William Morgan at Buoyant. I'm going to read it, otherwise I'm going to butcher it, and it's a TLDR because I like the TLDR version of what a service mesh is. A service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast and reliable. If you're building a cloud native application, you need a service mesh. Do you agree with the final part of that statement, that you need a service mesh if you're in the business of building cloud native applications? I can actually answer this one. No, I don't think you need a service mesh. You should be really careful. Stefan, I really liked your point about the observability, in that a service mesh has been sold through marketing a lot as a silver bullet, and it's not a silver bullet. It's a tool in the tool chest to help you build new applications. If you're doing microservices and you need the observability, you need the encryption, you should use a service mesh, but it's not a foregone conclusion. You should pick it and put it into your stack only if it actually provides you the benefits that you need for your specific types of workloads.
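To make the reliability point from earlier concrete: the retries and timeouts Thomas mentioned are logic every service would otherwise have to implement itself, in its own language. A minimal sketch of that per-service code (illustrative only; the function name is an assumption, and a mesh sidecar applies an equivalent policy outside the application):

```python
import time

def call_with_retries(request_fn, max_retries=3, base_delay_s=0.1):
    # Retry transient failures with exponential backoff -- the kind of
    # logic a service mesh proxy applies uniformly so that application
    # code, in any language, doesn't have to.
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retry budget exhausted: surface the failure
            time.sleep(base_delay_s * (2 ** attempt))
```

With a mesh, this policy lives in the proxy configuration instead, so changing the retry budget doesn't mean redeploying every service.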
I think you will need a service mesh when the service mesh implementations are stable and it becomes a commodity, but we are not there yet in my opinion. So back when I was building apps on top of Kubernetes, I would always say, from your first application to your 10th application, putting a service mesh out there at the get-go is certainly a lot of work, but once you're going from 10 to 50 microservices, then the value proposition of a service mesh really becomes apparent for a myriad of reasons. But until you have a foundation of building apps on top of things like Kubernetes, it doesn't really become apparent at the get-go. That was my take on it, but in our evolution, as we started adding more apps, it was definitely, hey, how do we see what apps are connecting to other apps? How do we define dependencies? And a service mesh would have allowed that to actually happen. Okay, moving on. So we've identified what service meshes are and why they're important. We have lots of service mesh implementations out there in the ecosystem. Thomas works at Buoyant, who is delivering Linkerd, and he's a maintainer of Linkerd, and Stefan's actually building tools like Flagger on top of service meshes. So there's a lot of choice out there in the world for different service mesh implementations, but in the world of SMI, why do you think it's important to have a common specification between meshes? Go for it, Stefan. I can talk from the developer point of view, like someone that wants to write automation on top of service meshes. So the more service meshes there are out there, the more people use them, and with all these different APIs, for someone that wants to build automation on top of them, it becomes harder and harder. I'll give you an example. I started Flagger like two years ago, and it was basically only Istio back then that had the traffic shifting capabilities that you could programmatically control.
Then, with the rise of Envoy, there are lots of ingress controllers that got the same capability, and people asked, okay, maybe we don't want to run a service mesh, we just want to run some things at the edge. So all these ingress controllers have their own API. Then Amazon launched App Mesh, which has its own API, so it adds up. And I think it's important to have some kind of standard, so as a developer, if I understand that standard and everybody tries to implement it, it would be really, really easy to migrate from one implementation to another if that's what you are looking for, to improve on the current implementation, to compare implementations with each other, and so on. So I think there are a lot of advantages here. Stefan, I'm gonna go back to your point about CNI again, in that I remember the dark days before CNI, where every container orchestrator had special configuration options for every single networking plugin under the sun. And so you had to go and implement some special thing to get Weave to work and something else to get Calico to work, and it was awful. And once CNI became stable, it's now something everybody relies on. There's a single interface to get your networking up. And I really see SMI as that, to your point, moving up from layer four to layer seven. We've now got an interface so that everybody's got a common point to integrate with, both the service meshes and the integrators on top. Yeah, so in SMI we have something called access control that tries to let the user define what services can call another service, what user can call what other service and so on. To go back to the CNI parallel, it's like network policies. Before there were network policies inside Kubernetes, how would you implement those kinds of restrictions for all the CNI plugins out there? Well, through custom resources, of course. So when you switch from one CNI to another, well, you have to rewrite all your firewall rules.
Network policies put a stop to that, so you can easily migrate from one to another. So let's say traffic access control, like it is today in SMI, was adopted by all the service meshes and probably the ingress controllers that are doing ingress or maybe egress. That would be really awesome, because from a security point of view, from a security expert's point of view, it would be really easy to understand and evaluate the system, because all these systems would use the same language, the same API. Thank you very much. I just want to acknowledge, I see lots of great questions coming in in the Q&A and we will answer them shortly. I would like to move on to SMI specifically. What does the spec cover? So SMI is a specification, not an implementation. I want to defer the implementation question to afterwards, but what does SMI cover at the moment in the specification? Thomas, can you walk us through? Yeah, so actually it's kind of fun to go back to the point I made earlier about the three value props that you get out of service meshes. We've got three major blocks of functionality that you get out of SMI. I think this will answer Duffy's question, but if not, please poke us and add some follow-ups. So one of them is something called traffic access control, which is the same idea as network policy, but for SMI at layer seven. It uses the identities in your cluster and allows you to manage access to your services. We have traffic metrics, which is where you get all of the observability. The really cool thing about traffic metrics from my perspective is that it's native to Kubernetes, and so RBAC works natively out of the box. You can provide visibility and multi-tenancy, which is not something that most of the service meshes actually provide today, at least on the read side of things. And then the third bit that we provide is traffic shifting, which is where Stefan and Flagger come in. So we allow you to do canary rollouts and shift traffic between services.
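The traffic shifting piece is expressed as a TrafficSplit resource. A rough sketch of its shape as a plain dict (field names follow the split.smi-spec.io drafts, but treat the exact schema and version here as illustrative, not authoritative):

```python
def traffic_split(name, apex_service, backends):
    # Clients keep calling the apex service; the mesh divides the
    # traffic behind it among the backends according to their weights.
    return {
        "apiVersion": "split.smi-spec.io/v1alpha2",  # version assumed for illustration
        "kind": "TrafficSplit",
        "metadata": {"name": name},
        "spec": {
            "service": apex_service,
            "backends": [{"service": s, "weight": w} for s, w in backends],
        },
    }

# e.g. send 10% of checkout traffic to a canary
split = traffic_split("checkout-rollout", "checkout",
                      [("checkout-primary", 90), ("checkout-canary", 10)])
```

A controller like Flagger automates a rollout by repeatedly updating the weights in a resource like this and watching the metrics.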
So you can implement all kinds of interesting patterns. With Linkerd, we've actually got error injection and shadow traffic working built off of traffic shifting, which is pretty cool in my book. Stefan, did you have anything to add? Yeah, about traffic metrics, besides the fact that you can have RBAC for them, which is really great, you can also power other machinery, like a horizontal pod autoscaler, for example. Out of the box, you don't have to write your own adapters. You don't have to do much yourself. You just say, hey, I want to scale based on latency, or I want to scale based on requests per second. And for layer seven applications, it makes a lot of sense to scale on latency rather than just, you know, CPU or memory usage, which, like layer four networking, are at so low a level that they don't quite fit with what the application is doing. The application is processing requests, and maybe you want to scale based on that, based on that specific metric. And I'll talk about traffic split, maybe. Yeah, the auto-scaling is a great one. I actually did a talk on that at KubeCon last year about why you want to scale on something like latency or requests instead of CPU and memory. I'd recommend anyone interested in auto-scaling to check that out. Thank you both. I think the only thing I would add, and you've already taken the words out of my mouth, is that SMI was designed to be Kubernetes native. So the specification is defined as a set of CRDs, and then you can interact with these using the Kubernetes tools that you already know and love, like kubectl. So having access to traffic metrics for your specific service doesn't require any additional tooling, and you can bubble that straight up into the tooling that you already know and are familiar with. So I think that's a big value prop. We're not introducing another layer of things that you have to learn about.
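Stefan's autoscaling point can be sketched with the standard horizontal pod autoscaler formula, just fed with a layer seven metric (requests per second, or latency) instead of CPU. A simplified sketch; the real HPA also applies tolerances and stabilization windows:

```python
import math

def desired_replicas(current_replicas, observed_metric, target_metric):
    # Proportional scaling: if each replica should handle target_metric
    # (e.g. 300 req/s) and replicas currently observe observed_metric
    # each, scale the replica count by the ratio observed/target.
    return max(1, math.ceil(current_replicas * observed_metric / target_metric))

# 4 replicas each seeing 900 req/s against a 300 req/s target -> 12 replicas;
# the same formula works with a latency target.
```

The point of SMI traffic metrics here is that the req/s or latency numbers come from a standard API, so this machinery works the same on any conforming mesh.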
We just integrate directly into Kubernetes. Okay, given that we have you both, and you're both implementers and maintainers of projects that have implemented SMI, I would like to get your perspective on things here. What is the value for you as an implementer, for you, Thomas, with Linkerd, and for you, Stefan, with Flagger, of implementing something like SMI? Stefan, do you want to? I'll take it. So the biggest benefit for us, honestly, is a definition of the feature set. Before SMI, you could go and implement any feature under the sun and call it a service mesh, which is awesome, but it makes it very difficult for implementers of service meshes to get the basics together, and they have to go and solve the problems in their own way each and every time. With a common set of functionality, it gives us a baseline that's really easy to go take a look at and implement right on top of, with Linkerd at least. So for us at Weaveworks, we are not in the service mesh market, we don't develop a service mesh of our own. Our interest is from a different perspective. We write tools that are focused around observability, things like Cortex, which allows you to scale up Prometheus, and things like Flux and Flagger, which allow you to drive continuous deployment in a Kubernetes native way with very lightweight controllers that are running in your cluster, and they take the CD out of the CI systems. About Flagger: Flagger does a couple of things for you. It allows you to do more than just a rolling deployment, the default Kubernetes way of shipping new releases in production. It allows you to do canary deployments, which are based on shifting traffic, slowly shifting traffic from one version to another and looking at metrics to decide if the new version is okay or not. It also allows you to do A/B testing for front-end applications.
What that means is you can segment your users and say, hey, a user that has this cookie or this HTTP header will be routed to the new version, and I'm just testing my version on a specific set. And it also allows you to do blue-green deployments with mirroring. So you basically mirror the traffic that goes to your production pods to the new version, and that mirrored traffic doesn't return to the user. So all these three capabilities are impossible to achieve without a service mesh, right? You need a layer seven proxy out there that actually allows control over layer seven. And the TrafficSplit API, the first version, was about traffic weight. So you can start with 1% and go up to 100% between a set of pods. This was the TrafficSplit API v1alpha1. And when that was implemented in Linkerd, it took me about a couple of hours to have it running, and I was really impressed. And the advantage for a tool like Flagger is, once it works with Linkerd, when Consul Connect implements TrafficSplit, it will work with Consul Connect without any kind of changes to the Flagger source code, or Maesh, or whatever other service mesh implements TrafficSplit tomorrow. For the next iteration of TrafficSplit, what we did was implement HTTP header matches, so it can do A/B testing, and soon we'll be doing mirroring, I guess. And then I'll be really happy when we have all these three use cases for continuous deployment inside SMI. Excellent. Thank you, Stefan. I'm actually gonna send a link to the chat with all the ecosystem implementations for those interested. Karen, go ahead. Oops, I'm muted. Okay, so I think there are a few questions. To start, someone asked: do I need a service mesh, and what is the difference between a service mesh and the service mesh interface?
The way I understand it is, a service mesh is a concept which standardizes cross-cutting concerns like observability, security, monitoring, et cetera, and Consul, Linkerd, et cetera, are implementations of the service mesh concept. What is the service mesh interface? Since no one else has jumped in and answered this one, I will give the pitch and go use my favorite CNI example yet again. SMI is just the interface that we as a community have agreed on for service mesh implementations to go and build their functionality on top of. So you can think of it just like CNI or network policy, or even Ingress is a really great example. In Kubernetes, you have Ingress and Ingress controllers. Your Ingress resources are read by all of the Ingress controllers, and they give you choice. You can go use NGINX or Traefik or Ambassador or Gloo or all of the really awesome solutions there, and that works great for Ingress. SMI is the same, just for service meshes. Anyone wanna add on top of that, or is that a good answer? Thomas's answers are always great. I'll speak for myself: I don't think I have anything to add, but Stefan, feel free. Yeah, the service mesh interface, for me it's a process more than a thing, because there are so many parties involved. It's an interesting process of how you can define a standard API without making it super verbose or adding every single feature of a proxy into it. Like, we could take the whole Envoy API and say, hey, this is the SMI API. It would not be fun, because with a standard that's so verbose, at some point it will be like, how can I implement it? It will take me years. So it's a fine balance: we don't want to make it super verbose, but we also don't want to make the API too abstract. We don't want to say, hey, we'll go with some very high-level concepts that every implementation can go wild with. So it's a hard thing to balance. Can someone help clarify the semantics of interface in the name?
Maybe that's what's confusing some folks. To us? Yeah, I can absolutely answer this. I think interface is just a standard set of abstractions that define the most commonly used features and functionality of all the service meshes. So that's what, when I think interface, I think like that. It's kind of the I in CNI, and the I in CRI and CSI. So it's just a common abstraction that implementers can take and adhere to as a standard contract point. Do we have any others, Karen? Yes, let's see. Okay, I think this is what they meant: how do service-mesh-applied rules, for example Istio's, relate to network policies applied by the overlay network, for example Cilium? So I can take this one. Network policy is gonna be layer four. You don't have identity, and you'll have to work with IP addresses and service names to go and get your authentication set up. But more importantly, because you don't have identity, it's a little hard to audit what's going on there. With service meshes, SMI and traffic access control in particular, identity is a big part of it. And by identity, what I mean there is that both the client and the server know who they're talking to and can then give you policy around that, not just that a pod can talk to another pod, but actually that your production environment identity can only talk to the production environment identity. The way that actually gets implemented inside of SMI depends on the service mesh. With Istio, what happens is there's an operator that takes the SMI resources and converts those into Istio policy. Great, okay, we'll do the next question. Implementation question: is the intent to have an operator or other per-service-mesh provider that takes SMI-defined resources and spits out Istio or Linkerd resources? Can you answer that? Yeah. Oh, no, Stefan, it's you. Go. So we have in the SMI project an example of this. We call them adapters.
Yeah, there are operators that just do transformations from one API to another, and we have the Istio adapter that does just that: it transforms the SMI specification into the Istio specification. And afterwards the Istio controller, Pilot, takes that and does its magic. We call them adapters, and I think it's a good way of telling what's going on. It's just a translation. So if you... Oh, anyways, I was just gonna say that, that said, in Linkerd we don't have an adapter. We've just integrated directly with SMI, and I believe that the Maesh folks have done the same thing. They've just gone and implemented directly against the API. So it really just depends on how you want to build it out. Yeah, I know that HashiCorp has a Consul Connect adapter for SMI in their own organization. So it's up to the implementer. Maybe if your service mesh provider doesn't implement SMI and you want to do it, you can write your own adapter, and that's an extension point. Okay, fantastic. We will answer more questions. I see lots of great questions in the Q&A box. Please keep them coming and we'll make time for them. I wanna move on: SMI has been around, it was announced at KubeCon EU in Barcelona last year. How has it evolved over time? And I think in the context of that question is: what's the most exciting addition you've seen since launch? I'll start with you, Stefan. Yeah, for me, as I said before, it's the fact that we are evolving the traffic shifting capability, and it's not just, you know, routing random requests based on a percentage. You can now do user segmentation, and that's an important aspect if you want to use a service mesh for front-end applications or applications that need session affinity. That's the point after all. If two services are talking to each other and they need to maintain some kind of session, then that session should be encoded somewhere in the HTTP request or the gRPC request.
And we now have a specification for how we can look at that request, how we can extract metadata from it, and based on that metadata do the routing logic. Stefan read my mind. I agree with that one. That's my favorite improvement that we've had so far. We've been having a bunch of really great conversations, and we'd love folks to join in with us on SMI, around identity and multi-cluster and mesh expansion lately; that has all been really interesting conversation. But as you can imagine, it's a tough problem, and so we've been going slowly there. Yeah, I think for me, just seeing the metrics being built out has been really beneficial. So we started with an idea of how we could do it, and the way that it's implemented, and the work that Tarun has done and been spearheading as a part of the community, so that anybody can ship standard metrics in any service mesh using SMI, I think is really beneficial. And yeah, this moves me to the next point. What do you think are the biggest problems or challenges that we have yet to face coming down the line in the next year or so, that you think we should tackle as part of SMI? So for me, it's ingress and egress at the edge. We have different opinions here. I'm for the idea that a service mesh should define ingress and egress, not only how the traffic is shifted and how the traffic is routed, but also identity, also policies and so on. So I've been looking at the Kubernetes Ingress V2 specification, which no one has actually implemented yet, but the specification is there and it looks awesome. It's close in spirit to what SMI is doing. For example, the traffic split specification looks quite the same. So I'm for adopting Ingress V2 and making something for egress as well, because egress is also an important concept; even if we are talking about putting two Kubernetes clusters in the same mesh, egress has an important role there.
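The header-based routing Stefan described, extracting metadata from a request and routing on it, boils down to a predicate like the following. This is a hypothetical helper; the spec expresses these matches declaratively, roughly as a regex per header, in an HTTPRouteGroup-style resource:

```python
import re

def route_to_new_version(headers, header_patterns):
    # A request is segmented into the new version only if every
    # configured header matches its pattern (e.g. a canary cookie).
    return all(
        re.fullmatch(pattern, headers.get(name, "")) is not None
        for name, pattern in header_patterns.items()
    )

# Users carrying a canary cookie get the new version; everyone else
# stays on the old one, which is what gives you session affinity.
rules = {"cookie": ".*canary=always.*"}
```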
But if we talk about expanding the mesh to other compute resources, like serverless, like VMs, like bare metal servers and so on, then yeah, we have to figure out how to achieve that. Yeah, I think there are kind of two categories for me. One is getting more folks to implement on top of it. The one that I'm especially excited about, and I'll continue to bring up, is Kiali, which is just a fantastic tool. I'd love it if they were built on top of SMI, so that all of the service meshes could really get the benefit of the observability that they provide there. But from the spec itself, I think that the access policy is going to need to go through a pretty big revision over the next 12 months. We kind of put it together as a POC, and now it's time to do a couple of iterations on it and address a lot of the problems that Stefan brought up. It's not just about doing auth inside a single Kubernetes cluster; policy spans quite a bit more than just that. Yeah, plus one to all those things. I really would love to see the ecosystem tooling grow. Having a tool like Flagger that makes service meshes more accessible to developers trying to leverage a service mesh is great. And Kiali is another tool; Kiali is a service mesh visualization, or your-services visualization, tool. Go check it out. The only other thing I would add is, as usage of service meshes has picked up, the expectations of what a service mesh can and should provide have also grown; the scope's been expanded. And I think there are three tangible things. I know there was a question about multicluster. I think multicluster is a big area and there are some great blogs out there. Thomas put pen to paper on a blog around rationalizing a multicluster approach to service meshes. So I think coming up with a solution that's portable across service meshes in the SMI realm would be interesting.
And then the rise of circuit breaking and traffic rate limiting, I think, are also interesting as to how they may play out, but definitely not definitive. I also think different runtimes: we started with Kubernetes, but we know that there are people using VMs and even functions, and how can we rationalize creating a mesh that spans all those things? They're all aspirational. We'll certainly pick up a few of them, but I think as the community develops around service mesh, I hope SMI is a great place for us to all come together and say, this is what we need from service meshes and here's how we define it. Okay, yeah, go ahead. I wanted to say that in terms of what we deliver as the SMI organization, besides just fancy markdown, we also have a client-go SDK. So in a way you can think about it like Kubernetes client-go; it's all Kubernetes CRDs in there. So you just add that as a dependency to your app, and it's very easy to create, alter, watch, and interact with SMI from a controller, a CRD operator and so on. Another thing that you can use is the metrics library. So we have two SDKs, so to say, that are there for you to try out. And finally on this topic, what are the cons of implementing something like SMI? I think it's worth at least answering that question for people. What are the cons of using something like SMI? I cannot say that. Please go ahead, Stefan. So we have versions, right? We have a version of TrafficSplit, then we release another version and another version, but all these APIs are not independent. For example, the TrafficSplit API depends on the HTTP route spec API. You define your specification, for example what kind of headers your application exposes, and then you can use that object, that HTTP route, inside your TrafficSplit. And that creates a dependency between versions. You can use this version of the HTTP route with this version of the TrafficSplit and so on.
And it's kind of hard right now, if you look at the specification, to understand what version works with what version and so on. So it's a learning process. If you want to go and do that, we will be trying to improve that experience, and also for implementers. For example, when you switch from one version to another, then you will probably break some other integration with your own service mesh. How we are trying to solve it is using Kubernetes mechanisms like a conversion webhook that can transform the objects from one API version to another, without having to encode this transformation inside your own implementation. But the conversion webhook is a new thing in Kubernetes. We have to be careful: if we ship that, then people shouldn't use Kubernetes 1.14, for example, because it doesn't support that type of webhook. So it's a challenge to keep that compatibility and also offer these automated conversions between versions. We will try to improve that experience in the future. "Problems" is a strong word, but one of the things that we've noticed on the Linkerd side is that when we own everything, it's very easy to just roll out new versions of the spec, but because SMI is a community effort, we spend a bunch of time talking about the specs and making sure that they agree with what we want to do with the project and that they make sense for all of our users. And so, at least from the Linkerd side, some of our functionality has slowed down, because most of what we're doing right now we actually want to be part of SMI instead of just extending it. And so we spend a lot of time being very thoughtful, and slow down our velocity to make sure that the community benefits from all of that. And it's definitely a trade-off. Being part of a community conversation can potentially slow things like APIs down, but I think that all of the advantages you get out of SMI far outweigh the disadvantages on that velocity side of things. Thanks, Thomas.
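The conversion-webhook idea Stefan describes can be pictured as a pure function over the stored object. Purely for illustration, suppose one spec version expressed backend weights as milli-style quantities ("500m") and a later one as plain integers; the webhook's job is just this translation (the version names and field shapes here are assumptions, not the actual SMI schemas):

```python
def convert_split(old):
    # Translate a TrafficSplit-like object from a (hypothetical) older
    # version with "500m"-style weights to a newer one with integers,
    # the way a CRD conversion webhook would, outside the mesh itself.
    def weight_to_int(w):
        return int(w[:-1]) // 10 if isinstance(w, str) and w.endswith("m") else int(w)

    return {
        "apiVersion": "split.smi-spec.io/v1alpha2",  # target version (illustrative)
        "kind": old["kind"],
        "metadata": old["metadata"],
        "spec": {
            "service": old["spec"]["service"],
            "backends": [
                {"service": b["service"], "weight": weight_to_int(b["weight"])}
                for b in old["spec"]["backends"]
            ],
        },
    }
```

Keeping the translation in one webhook means no implementation has to carry version-conversion logic of its own.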
I think, you know, from my perspective, the question that I would like to answer is: there are a lot of APIs associated with all these service meshes, and they do have really in-depth tweakable attributes in very detailed APIs. And certainly the intent was never to build SMI to be something that covers all aspects of every service mesh. To go back to what Stefan said at the start, let's think about the developer persona here and what they're trying to get out of a service mesh. It could be that they do want to tweak every aspect of the load balancing and traffic shifting that they want to do. But typically what we find is most people are happy with, hey, I just want to weight some traffic here, roll out a new version, and can you take care of that on my behalf? So something like Flagger can do a fantastic job of doing that on your behalf. So, you know, the intricacies of all the implementation-specific APIs are quite a burden: not only understanding what you want to do, but how you express what you want to do. And the idea with SMI was to make that a lot simpler, at a higher level. But we did expect that people who really want to dive down into implementation-specific APIs would continue to do that anyway. So, yeah, that was my two cents on a con. There's no intention to describe every implementation-specific API in SMI. So I want to move to, yeah, go ahead. Yeah, I wanted to say that SMI is additive to Kubernetes, in a way; it doesn't get in the way of Kubernetes. So we don't have to define another set of service-like objects on top of Kubernetes services. We default to them. We default to the, let's say, CoreDNS records and so on. We don't run our own discovery in parallel with that.
And that lets the API stay light and not immensely verbose. But when you look at SMI from outside Kubernetes, then maybe we'll have to define those aspects that Kubernetes takes care of; we just shouldn't impose them if you only use Kubernetes. So that's yet another thing to consider from an SMI perspective. That's a really great point. In fact, we kind of bend over backwards to use the Kubernetes primitives that folks are used to as much as humanly possible, because we don't wanna go implement them ourselves. And more importantly, failover needs to work no matter what. So I wanna turn to: how do people get involved with SMI? What are some steps folks can take to get connected? I'll jump in here. You can go to smi-spec.io; everything is documented on the website there. All the links are there, and you can learn more about the specification and the ecosystem implementations. You can also dive down into GitHub from there, where you will see the things Stefan mentioned: if you're an implementer, there's the SDK you can utilize, and the metrics API server you can use in your implementation. And if you just wanna simply use SMI, you can check out tools like Flagger to take care of that. Finally, we invite everybody to the community calls. They're all documented on the GitHub page, and you're welcome to join. If you're interested in learning more about SMI or contributing to SMI, we would welcome your collaboration there. And finally, to wrap up as we're getting to the top of our time: what do you wanna leave the audience with? We've had a lot of great opinions come out in the last 40 minutes. How do you want the audience to think about SMI moving forward? Just read the spec and open an issue if you feel that something is wrong. It's the right time to do it, because we are still at v1alpha-something; it's still alpha.
The more we evolve the specification and the more implementers we get, the harder it will be to change it. So if you think something is off, if we missed something, if you have a great idea and you feel like, hey, it should be here in the specification: open an issue, join our calls, make your idea happen. And what surprised you, just to tag off the back of that? What surprised you most about the SMI community and contributing to SMI? For me, it was the fact that it took me like five months to get the TrafficSplit HTTP matching in. At the beginning I felt really stressed, but after having so many conversations and understanding the different aspects of how people see this from their own implementation perspective, it made me aware of a lot of things that, at the beginning, I didn't quite consider. So even if it's a long process and it takes time, I think in the end we try to do something that is not custom-made for a specific use case and tries to cover more than just one view of the problem. And yeah, don't be scared of that. It will take time to get something into SMI, but if it gets there, then it's a solid solution, because so many people are involved. It's also an experience in how to negotiate with different parties, which is interesting. Like Thomas said, it's not just: hey, I have this idea, in two days it's in the spec, in three days I've implemented it, and that's it, and afterwards I have like 1,000 GitHub issues and I have to change it and so on. It's a different approach. Something I really appreciate about the community is having folks like Stefan and Locky around to bounce ideas off of. When I open an issue, it's version zero for me, and it's really great to have other folks who are super knowledgeable about the space, and have their own viewpoint on how all of this should work, be able to jump in and give immediate feedback on that.
It's just amazingly helpful to get that style of collaborating and kind of brainstorming together on issues like this. Yeah, I think the most rewarding thing for me personally has just been helping folks understand what service meshes provide, through something like SMI, and helping drive the community forward on the things that are important to them. I would hope that SMI is a place where they can bring things in. We've had questions about multi-cluster: why can't we define an interop level for multi-cluster in SMI? I don't see that as out of the realm of possibility. Come and help us build that. Hoping that we can build some opinions about what service meshes can do and how users interact with them is what excites me the most about SMI, because it's a really deep and broad space. Service meshes do a lot of things, but having a way to rationalize the things that you need has been incredibly rewarding. Any final thoughts? Should we do more questions? Yeah, I think we could field some more questions. Yeah, go ahead. Okay. So, if you're selecting a service mesh, what should you look for? What does SMI compatibility bring to the table? I would say ease of use. The API is not huge, and you don't have to spend days and days understanding how you can drive the mesh to do something for you. I think that's one of the most appealing things right now: it's easy to understand. So why should meshes implement the SMI APIs? The most obvious, easy answer is because we want Flagger support. I mean, I'm being totally serious. Flagger's one of my favorite ecosystem projects for Kubernetes, and it's something that Linkerd users were asking for again and again and again, so it was fantastic to be able to get that integration just by implementing an API. Yeah, also think of things like horizontal pod autoscaling, which needs some kind of common observability.
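As a taste of how small the surface area is, here is a sketch of SMI's traffic access control resources. All names are invented for illustration, and the apiVersions vary across spec revisions, so treat this as indicative rather than copy-pasteable.

```yaml
# Hypothetical sketch of SMI traffic access control: allow Prometheus's
# service account to call the metrics route of an API workload, and,
# by omission, deny other callers. All names here are invented.
apiVersion: specs.smi-spec.io/v1alpha3   # varies by spec revision
kind: HTTPRouteGroup
metadata:
  name: api-routes
  namespace: default
spec:
  matches:
  - name: metrics
    pathRegex: /metrics
    methods: ["GET"]
---
apiVersion: access.smi-spec.io/v1alpha2  # varies by spec revision
kind: TrafficTarget
metadata:
  name: prometheus-scrape
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: api-service      # hypothetical workload identity
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: api-routes
    matches: ["metrics"]
  sources:
  - kind: ServiceAccount
    name: prometheus       # hypothetical caller identity
    namespace: monitoring
```

Two small resources express "who may call what, on which routes," which is roughly the level of abstraction the panel is describing: enough to be useful, without pulling in every knob an individual mesh exposes.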
And I think there's a high cognitive load in every implementation having its own broad API, which means that as people approach service meshes and want to start using them, they have to understand all the intricacies and the tooling. Some implementations require you to use a completely separate interface from Kubernetes, which is okay, but it's something else you need to learn on your journey. Implementing SMI means: hey, I'm comfortable with the Kubernetes API, and now I can control my service mesh from the same point of entry. That allows me to get on board and understand what service meshes do for me. As I said, you may not need a service mesh for your first 10 microservices, but from 11 to 50 you're certainly gonna be looking at one, and during that period of evaluating what they do and how they operate, you don't wanna be hamstrung into a specific implementation without taking a look at all of them. Service Mesh Interface allows you to take a look at different service meshes and determine the best one for your specific needs. Great, another question. Is SMI supported by all the major providers, or is it on the roadmap? So, we cannot put an implementation on the roadmap; that's one thing. SMI is a standard, not an implementation, so we cannot impose an implementation. That was my angle on it. Yeah, I was just gonna point to the ecosystem page: if you go to the website and click on ecosystem, you will see who's there. I will say that most of the major service meshes out there in the community have shown up and are interested in SMI or are implementing SMI. So there is pretty broad adoption across the CNCF ecosystem in the service mesh space. So, Anasas, that was a comment instead of a question. All right. All good. All right, well, thank you, Locky, Thomas and Stefan for a great presentation. That's all the time we have for today. Thanks for joining us.
The webinar recording will be online later today and we are looking forward to seeing you all at a future CNCF webinar. Thanks, have a great day. Thank you. Thanks for joining us. Thanks, Thomas and Stefan.