Hello, KubeCon. Welcome. My name is Lee Calcote. I'm at Layer5, and I'm joined by Mr. Ken Owens, who's at Fiserv. I keep interrupting Ken, but he's used to that. Apologies. So we are here to take you all through some of the initiatives within the Special Interest Group, SIG Network. SIG Network has a couple of working groups, one of which will be our focus today, and that one is the Service Mesh Working Group. So we'll do a bit of an intro and a deep dive, and we'll see if Ken will take us through what SIG Network is. Yeah, thanks, Lee. So our mission statement in SIG Network is: with an ever-steady eye to the needs of workloads, the developers who create them, and the operators who run them, SIG Network's mission is to enable widespread and successful development, deployment, and operation of resilient and intelligent network systems in cloud native environments. It's really important, in my mind and in my view of this SIG, that we represent not only the network engineers but also the developers, both in the application and business teams and in the networking and software-defined networking areas. So we're really looking at a broad scope within the SIG. With this endeavor, our first goal is to inform and clarify. That's important because "network" can be a very broad statement and can mean a lot of different things in a lot of different contexts. We're very collaborative, and we want to work with different areas, different SIGs, and different groups within our community to help bring some of these cloud native networking aspects into clear focus. We also assist and attract projects. As we'll go through in a few minutes, we've brought in really interesting projects, and we're really excited about how much we've grown in the last couple of years.
We also want, as part of the CNCF charter, to be impartial toward the different projects. We're not kingmakers here, so it's definitely fine to have different service mesh technologies and different network technologies represented together in the SIG. So, just a little bit about what we've done over the last several years. At KubeCon North America 2019, we had kind of the beginning projects. CNI was the first thing we worked on as a group, back before we were a SIG, when we were just a networking work group; Lee probably has some fond memories of the work group times together, the six of us. Since CNI, we've worked with CoreDNS, Envoy, gRPC, Linkerd, and NATS, and friends from Cisco brought in some discussions of Network Service Mesh, the open source project they have going on. That really helped kick things off at the beginning of 2020 at KubeCon Europe, with some additional projects like CNI-Genie and Contour, and the Service Mesh Interface, which you'll hear more about today. Then we ended the year working with Chaos Mesh and Open Service Mesh, and today we're excited that we're looking at Ambassador's Emissary-ingress project. We're looking at k8gb, and we're working with Meshery and with Service Mesh Performance, which we'll get into a little later in the presentation. On the horizon, Submariner is the main thing we want to look at. So we have a couple of working groups. As Lee mentioned earlier, the biggest one we're working on right now is the Service Mesh Working Group, but we also have a very interesting working group around the Universal Data Plane API (UDPA).
Both of these working groups are active, and we invite your participation; you'll hear more about that as we go through the presentation today. We do have a couple of white papers out there, and several presentations, but the main one I wanted to point you to is "Moving Beyond HTTP," surveying the state of layer 7 protocols in the cloud native ecosystem. So there's a lot to get involved in and a lot to do, and hopefully what you see today, as we go into the deep dive and I turn this over to Lee, is that there are a lot of interesting topics and conversations we want to have in the community, that you'll want to get engaged, and we'll make sure you know how to do that. With that, I'll turn it over to Lee. You know, Ken, that last presentation you were just highlighting, while the group is large and networking is vast, might be familiar to some, as it was subsequently presented as a keynote at last KubeCon as well, so I was just recalling that. Speaking of recalling things, some late-breaking news from just a few hours earlier today: Ambassador's Emissary-ingress, a project that has been under review in the SIG for a while (let's not talk about exactly how long), has been under diligence and underwent a name change, so congrats to that team. Like Ken was saying, the Service Mesh Working Group has a few different initiatives going on, and the initiatives are interrelated. The focus is on service meshes; there's a lot that a service mesh provides and a lot of ways that you can use one. Part of the focus of the group is to identify patterns in the way users are using service meshes and to curate a collection of those. At the same time, we're also taking advantage of the CNCF's labs.
A not-infrequent question people ask is about the overhead of a mesh or some of its performance characteristics, and one of the initiatives within the group here is to leverage the labs to help answer some of those questions. Some of those answers are point-in-time answers about a particular mesh or a particular type of workload. We've been fortunate, as Ken said, in those that have come to participate. One collection of participants recently has been from Intel, and historically that organization has a long-standing focus on performance, so they're bringing some of their skills to practice here, which is nice. As a matter of fact, I think they have a talk as well at KubeCon, and you can find some of those results there. If you're in chat right now and ask for the link, we'll send you the link. Of those service mesh patterns, though, a collection of about 60 has been identified, and they fall into different categories. There's a fair bit of work that goes into really refining these in detail, and that work is far from complete; a first segment of 30 is being iterated on and discussed. There's also tooling being worked on in the context of the Service Mesh Working Group. We're going to talk about some of that tooling, and some of it helps people run those patterns, so if you're reading through a pattern and saying, hey, that's of interest, and you're wanting to use a tool to test it out, we'll talk about that tool here in just a bit. Part of what makes the working group fairly interesting is that there are a number of projects, the ones that Ken listed off.
Some of them are service meshes themselves. Some of them are emergent standards or specifications that help you interface with service meshes in a standard way, and the first specification here is the Service Mesh Interface (SMI). It's kind of interesting to look at the statement here, the very brief blurb about what SMI is, and reflect on some recent conversations we've been having. Those conversations have been about that last bit, "on Kubernetes": when SMI was first announced as a project, it was very Kubernetes-centric and -focused, and there's been an expansion of that. I don't think we have formally concluded, but there's clearly much interest toward helping SMI serve services that are not running in Kubernetes but are running on the service mesh. So SMI is focused on being a standard for how to interface with service mesh functionality in a uniform fashion. Similar to this, as an adjoining specification, there's SMP, Service Mesh Performance. It's a specification that provides a standard way of describing the performance of a service mesh. Some of that is just to be able to articulate that performance in a concise, uniform way, and each of the service mesh projects is beginning to engage with the specification. It's our hope that, with the assistance of the Service Mesh Working Group, each of the service meshes will be able to report on their performance with each release, under a set of different scenarios, using the same standard to do it. So there's a little bit in there.
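To make that concrete, here is a minimal sketch of the kind of structured, uniform performance report SMP is driving at: a mesh identified by name and version, the environment it ran in, and latency percentiles captured the same way every time. The field names and shape below are illustrative assumptions for this sketch, not SMP's actual schema.

```python
import json
import statistics

def build_performance_report(mesh_name, mesh_version, latencies_ms, env):
    """Assemble an SMP-style performance report.

    Field names are illustrative only, not the real SMP schema.
    """
    latencies = sorted(latencies_ms)

    def percentile(p):
        # Nearest-rank percentile over the sorted sample.
        idx = max(0, int(round(p / 100 * len(latencies))) - 1)
        return latencies[idx]

    return {
        "mesh": {"name": mesh_name, "version": mesh_version},
        "environment": env,  # e.g. node count, request rate, workload shape
        "latency_ms": {
            "p50": percentile(50),
            "p90": percentile(90),
            "p99": percentile(99),
            "mean": statistics.mean(latencies),
        },
    }

report = build_performance_report(
    "example-mesh", "0.1.0",
    latencies_ms=[2.1, 2.3, 2.4, 2.7, 3.0, 3.4, 4.1, 5.0, 7.5, 12.0],
    env={"nodes": 3, "rps": 500},
)
print(json.dumps(report, indent=2))
```

The point of a shared format like this is that a report produced for one mesh release can be diffed against the next, or against another mesh entirely, without re-deciphering each project's home-grown benchmark output.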
This third specification is Hamlet; reflecting on the last two weeks, let me first explain Hamlet and then explain some recent news. Hamlet was generated and put forth largely by VMware, in collaboration with HashiCorp and Google, and, to put it concisely in my own words, it begins to define a set of interoperable service catalogs. To the extent that you're running multiple meshes, whether homogeneous or heterogeneous types of service meshes, inevitably you're going to want those workloads to be able to interact across the different meshes, so you're going to want to be able to federate them, and that's the crux of this specification's focus. Recently there have been about three areas where these multi-cluster federation discussions are happening: some of that is going on within Kubernetes itself (I forget the name of the SIG, but multi-cluster is being discussed there as a new API), a little bit of it is being discussed in Service Mesh Interface, and a little bit here in Hamlet. And Ken will quickly tell you that this is a great example of what SIG Network is about: it's from this vantage point that we're able to identify some of those — I don't know that "duplicities" is exactly the correct word — but to make sure that individual efforts are at least aware of one another and can collaborate. So if you're listening right now, this is a good place to come in and work those through. Given that there are many meshes out there, a number of them in the CNCF, there are those, whether in the CNCF or not, that have chosen to implement the Service Mesh Interface. And just like any specification,
you need some tooling to verify compliance with that spec. In this case, you'll need tooling that works with each of the eight different service meshes that implement SMI, and that flexes each of the four specifications SMI currently has (a fifth is being discussed now). If you're familiar with Sonobuoy — I wouldn't say it's part of the Kubernetes project, but what Sonobuoy is to Kubernetes, this SMI conformance initiative is to SMI. There are some early reports; actually, I think this is the first time these reports are being shown. They cover about five different service meshes and their compliance with respect to, right now, just three of the SMI specs. We're seeing a lot of red, and there's good reason for that: a new version of the SMI spec was released about a week ago, and it contained a breaking change, hence part of the red here, just for the moment. It's our hope to get a few more of the implementations up here and get them participating in the tests even more directly. The tool being used is Meshery, one of the tools Ken mentioned a moment ago; it's up for adoption into the CNCF. Ken, if you'll oblige me and click on that Meshery logo: Meshery implements these SMI conformance tests, it also implements the Service Mesh Performance specification, and it does so across any number of service meshes. As for that Service Mesh Performance specification, on the next slide we'll describe it a little more; this specification is also up for review and donation to the CNCF.
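As an aside, the conformance reports just described reduce to a simple shape: for each mesh, run each SMI spec area's test cases and roll them up to pass or fail. This toy sketch shows that roll-up; the mesh names, spec area names, and results here are made up for illustration, not taken from the real reports.

```python
# Build a conformance matrix: {mesh: {spec: [case results]}} in,
# {mesh: {spec: "pass"/"fail"}} out. A spec area passes only if
# every one of its test cases passed.

def conformance_matrix(results):
    matrix = {}
    for mesh, specs in results.items():
        matrix[mesh] = {
            spec: "pass" if all(cases) else "fail"
            for spec, cases in specs.items()
        }
    return matrix

# Hypothetical raw test-case outcomes for two fictional meshes.
raw = {
    "mesh-a": {"traffic-access": [True, True], "traffic-split": [True, False]},
    "mesh-b": {"traffic-access": [True, True], "traffic-split": [True, True]},
}
matrix = conformance_matrix(raw)
print(matrix)
```

A breaking change in the spec, like the one mentioned above, flips whole columns of this matrix to "fail" at once until implementations catch up — which is exactly the wall of red the early reports show.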
We articulated SMP earlier as a standard way of capturing and describing — characterizing — your service mesh performance. The spec directly does that, and it also facilitates some other interesting things: benchmarking a given mesh over time, to see how well it's doing from release to release, and comparisons between service meshes, whether that's apples to apples or apples to oranges, to the extent those are comparable. It also potentially facilitates a new performance index, maybe a new, concise way of articulating how well, how fast, or how efficiently your service mesh is running. I think it's this next slide that talks about MeshMark, which is an emergent ruler or yardstick — an emergent index by which you would, like I was just saying, concisely convey how well your system is running. It's really quite complex, maybe harder than some of the folks that got into it had initially considered. What's been fortunate here over the last month or so is that some new people have joined the initiative, a couple from Red Hat, some from Intel, and they are hopefully helping define a new way of describing service mesh performance. So maybe I'll leave it at that and leave you with the cliffhanger of what that ends up looking like. You're seeing each of these initiatives overlap a little bit, and it has long been the intention of the working group to get some of these projects that were generated inside the working group into the CNCF. I don't know that that's the intention for the maintainers working on GetNighthawk, but this particular initiative becomes kind of necessary to the extent that Nighthawk is quite a capable piece of software.
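Since MeshMark is, as just said, still being defined, here is only a hypothetical illustration of the kind of trade-off such a composite index could weigh: value delivered (requests served within a latency budget) against the resources the mesh consumes. The formula, weights, and numbers below are invented for this sketch and are not anything the working group has settled on.

```python
# A made-up composite performance index: throughput, discounted when
# p99 latency exceeds a budget, normalized by resource consumption.

def performance_index(rps, p99_latency_ms, cpu_cores, mem_gib,
                      latency_budget_ms=10.0):
    # Within budget -> full credit; beyond it -> proportional penalty.
    latency_factor = min(1.0, latency_budget_ms / max(p99_latency_ms, 0.001))
    # Crude equal weighting of CPU cores and memory GiB as "cost".
    resource_cost = cpu_cores + mem_gib
    return (rps * latency_factor) / max(resource_cost, 0.001)

# Same throughput, but one configuration is cheaper and faster.
lean = performance_index(rps=1000, p99_latency_ms=8, cpu_cores=2, mem_gib=2)
heavy = performance_index(rps=1000, p99_latency_ms=20, cpu_cores=4, mem_gib=4)
print(round(lean, 1), round(heavy, 1))
```

Even this toy version shows why the real thing is hard: the score is only as meaningful as the choice of inputs, budgets, and weights, which is precisely what the MeshMark discussions are wrestling with.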
If you're saying "Night-what?", I'll quickly tell you that Nighthawk is more or less a subproject of Envoy — maybe at some point it won't be a subproject — but anyway, it's a performance characterization tool, a load generator. It's quite capable: it's written in C++ and builds alongside the same build toolchain as Envoy. To facilitate some of the tests and benchmarks being performed within SMP, this project, GetNighthawk, is making it easier for people to get Nighthawk into the hands of the masses, so to speak. The collaboration between Meshery and Nighthawk will advance that even more, but it will also hopefully advance a little bit of the state of the art around some of the research being done within the working group. We've got a couple of universities that have participated in these discussions; I think our most recent participant was a professor from NYU, interested in trying to help advance the studies that are going on, and researchers really need tooling like this to be able to do that. It's pretty painful to see researchers try to pull together scripts and various things, spending half their time just getting the environment working, when there's easier-to-use tooling they can simply take and run their tests with. That's in large part what GetNighthawk is about. There's an adaptive load controller in Nighthawk itself that may change the way those running service meshes think about optimizing and tuning their mesh. So, exciting things in the project. That was an awesome job of describing so much of the great effort we have going on in the SIG, so thank you for that. We kind of wanted to end the presentation by asking for engagement and ensuring that you know how to engage with us.
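For the curious, one simplified way to picture what an adaptive load controller does is as a search for the highest request rate a system can sustain within a latency objective. The sketch below fakes the measurement step with a toy function; in practice a load generator such as Nighthawk would drive real traffic, and its actual controller is more sophisticated than this plain binary search.

```python
# Binary-search the request rate for the highest sustainable load:
# push harder while the latency SLO holds, back off when it breaks.

def find_max_sustainable_rps(measure, slo_ms, low=0, high=10_000, tol=50):
    """measure(rps) -> observed p99 latency (ms) at that request rate."""
    while high - low > tol:
        mid = (low + high) // 2
        if measure(mid) <= slo_ms:
            low = mid   # SLO met: try a higher rate
        else:
            high = mid  # SLO violated: retreat
    return low

# Simulated system: latency is flat until ~4000 rps, then climbs steeply.
def fake_measure(rps):
    return 5.0 if rps <= 4000 else 5.0 + (rps - 4000) * 0.1

best = find_max_sustainable_rps(fake_measure, slo_ms=10.0)
print(best)
```

The appeal for mesh operators is that, instead of hand-picking load levels and eyeballing graphs, the controller converges on the operating point itself, which is what makes it interesting for tuning.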
We always hear that if we just asked people to be engaged with us, we'd have more engagement. So here is our call for engagement, our call for participation. We have meetings twice a month, on the first and third Thursdays at 11am Pacific time. We keep meeting minutes, and you're welcome to catch up there on some of the things we've talked about; for everything Lee mentioned, there are links in this deck, and there are also links in the meeting minutes to the projects directly. I'd definitely love for you to connect with us on Slack: sig-network is our Slack channel, and within SIG Network we have the Service Mesh Working Group as well. We also have a mailing list on lists.cncf.io. With that, I thank you very much for your time, and I thank you, Lee, for co-presenting with me; I think this has been a really useful and helpful description of what we have going on in the SIG. That's a lot. While Ken and I are sitting here fielding questions in chat, I've got a question. Ken, you were just saying there's a call for participation, that people can jump into the meeting minutes and jump into the meeting. Do those that show up need to belong to a CNCF member company? Do they need to be a platinum sponsor? What's the entry price to come in — a Swiss bank account? Yeah, that's a great question, because we do hear a lot of people worried about that when getting engaged.
We are first and foremost an open community, and we welcome your engagement and your involvement without your being a member company or paying any sort of fee to get involved. I guess one of the concerns people have is, if you don't pay for entry, how good is the outcome? I think our outcome is awesome, so I don't think you have to worry: the effort and the desire we have in the work group will definitely drive the right outcomes for the community. That's a great question, Lee. I appreciate that. Thank you so much for taking us through this, Ken. I guess we'll see everyone at the next KubeCon — that's the hope — and hopefully in person this time.