I see the red button. Hello and welcome. Today is Wednesday, August 5th, 2020. What a wonderful day. It's a nice, shiny day here, and I hope everybody's having a great day. I'll be your host for today's SMI community meeting. Hello and welcome to everybody who's here. I've placed the agenda link in the chat; we'd love for you to go and add your name to the attendees there, along with where you're coming from. Redbeard gets a call-out in the chat immediately. Hey Redbeard, nice to see you.

We have a pretty light agenda here, so let's go ahead and get into it, because this is a very tight meeting. First of all, we have a review of a community blog post, "Building an Alternative Service Mesh: A Novel Approach Using the SMI Spec," by Kevin Crawley. So what needs to be done here? I'm really excited, because this is the first contributed blog post. Yay, thank you so much. Everyone else should write one too for all the stuff you're doing, but Kevin has the honor of being the first person to put one in. What we wrote in our guidelines is that it would be great to get two people who don't work at the same employer as Kevin, whichever employer that is, to read it and upvote it, as in, "Yes, we would like this on our blog," or to say, "Hmm, this seems to be a vendor pitch." It didn't seem that way to me, but I figure you experts should read it, assess it with your own opinions, and decide whether it's a great piece about what this particular person does with SMI. Anyway, we're trialing a new process here. So if everyone who has opinions about blog posts wants to go read it and put their comments on it, then we can hopefully get it out on the blog, and that'll be exciting. That was all for that one. Excellent. Oh, and also: write your own blog post.
Yes, so if you have your own blog posts, write them; there are some guidelines. I want to thank Kevin for being the first one to have a community-contributed blog post. I love the transparency of folks being able to do this, and obviously Bridget set up some guidelines there, and we will run the blog by committee, so you all have the power to plus-one this and get it out there on the blog. Excellent. Thank you, Kevin, for getting that in. If anybody has a few moments, it's not a big read, and we'll get it out there on the blog site. Okay, anything else we need to discuss there, Bridget? All right, SMI conformance tooling with Lee. Passing it to you, Lee. Hey, Lee.

All right, great. Well, this is a topic we've discussed for maybe too long, and we're finally at a point where some people have weighed in. Geez, there's more to talk about than we have time for, which is both good and bad. What we've been working toward is issue number 70, I think. It's been a little while, and I recognize there are a few folks on the call who weren't here when we initially raised this. The short version is that SMI, like any other specification, is in need of some tooling to validate conformance. For the service meshes that are participating, we need to define, as a group, a set of tests that say: hey, if the mesh does this and it looks like this, then it's conformant. If you're familiar with Sonobuoy, it's a similar principle. In this case, things get a little more hairy, in that you need tooling to provision the mesh, provision a sample app, provision the SMI CRDs, and then, in some cases, generate load for traffic metrics, to validate that the load that was presented is what's being reported and that the statistics are accurate. So there's actually quite a big test harness around that.
In setting out to do that, and given that there are N service meshes and counting that claim SMI conformance, we went off and grabbed Meshery, a service mesh management plane, to facilitate much of this, which it already does. When you think about this project, the tooling needs things like the ability to verify the provenance of the tests that are performed, so that someone can't cheat the exam and then claim compatibility. That's also accounted for in the design spec. The issue linked in the meeting minutes has a link to the design spec we've put up. To date there have been only light comments and suggestions, I think from the maintainers who are here. What I was hoping to do today is demo the progress that the set of contributors working on this have made. They're busily fixing a bug on a different call right now, so, sorry, I'm going to poke them and ask them to come on, but there's quite a lot to talk about outside of a demo. Part of the goal, and actually, here, let me bring up and share the spec, because I think there's merit in talking about the project goals. Part of those goals is to... what's going on with Zoom?

Is there something else? We should definitely link that issue so people can review it, but I think you wanted to show us something else, sorry. Yeah, totally, I want to make sure I capture it. If you click on the issue here, it has a link to a design spec, and here is that link to the design spec too. I thought I had it up. Yeah, please, please do bring that up. I'm having an issue with... here we go. Here's the design spec. It goes on for a while, talks about the goals and the notion that there would be a public-facing report that sort of sanctifies conformance. There are a couple of concepts that are probably worth verbally chatting through.
So one of those is: well, what is conformant? Clearly there's a set of tests to perform, and those tests are in their infancy. I think there are about 30 of them, but they need to be fully reviewed by the folks who are here, with many more put in or some taken away. What we've tried to do is make sure we've got the right vehicle for execution and reporting. In defining conformance, the last time we gave this an earnest discussion on this call, what we ended up talking about was the notion that not all of the meshes necessarily intend to fully implement all of what SMI defines, and that's by their design. So does that mean they should always carry a red mark, that they could never be SMI compliant? We're trying to put some particulars around the terms that are used. What we're suggesting is that conformance is a combination of the capabilities of a given mesh with respect to what the SMI spec says: this mesh fully meets this spec and has full capability, or maybe it only partially meets it, or maybe it doesn't at all. The difference is just whether or not they've implemented it, and then whether or not they comply: if the capability is present, does it comply with the spec itself, the interface? That's worth highlighting. Another thing worth highlighting is that there's a number of tests that have been defined, broken down into four different test sets, one test set for each spec. Within those test sets there are two categories of tests: one that very simply runs an assertion for the presence of an SMI custom resource, and others that are more about the capability we were just referring to, that is, does the implementation respond functionally according to the spec?
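The two test categories described above could be sketched roughly like this. This is a purely hypothetical YAML shape for illustration; the field names, file names, and schema here are invented, and the real definitions live in the conformance utility's own repository:

```yaml
# Hypothetical sketch only; the actual schema used by the tool will differ.
testSet: traffic-access
tests:
  - name: traffictarget-crd-present
    type: presence                       # category 1: assert the CRD exists
    assert:
      customResourceDefinition: traffictargets.access.smi-spec.io
  - name: deny-unlisted-source
    type: capability                     # category 2: functional behavior
    setup:
      apply: traffic-target-sample.yaml  # allows only a named service account
    assert:
      request:
        from: curl-client                # a source not listed in the target
        to: api-service
      expect:
        status: denied
```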
These tests are written out here in prose, but they're defined in YAML in the utility in the project. Other key concepts, I think, include the notion that after those tests are run, collected, and presented, Meshery facilitates an SMI-owned repository of those result sets. The intention is for each of the participating service meshes to run Meshery, to run this utility, in their build and release process, and have a sanctified... yeah?

Quick clarifying question: you just said SMI repository, but this sounds generally available, like people can run this against any repository where they have a mesh they want to test, right? Yeah. Or it doesn't have to be in the SMI repo, okay, awesome. Yeah, that's right. I used, boy, that's a loaded word. What I meant to say is, to what Bridget just said, folks can go run the tests and put up their results on their own. If this group is desirous of having an SMI-published version of those test results, stored centrally, and I used the word repository just to mean a central location, then each of the participating service meshes would send their results there, with the provenance of those results guaranteed. There's been consideration for that: each of the projects would identify a service account, or a robot account, if you will, and say, hey, for us, we use this service account, we will authenticate with this one, and we will send tests under that account.
Now, to take this concept one step further, that doesn't mean they couldn't fork this open source project and manipulate the bits so that everything shows passing results. To mitigate that, and I don't think it's really a concern, but just to mitigate it, there's a shared secret inside the CI, the build process for this utility, which is only known in that CI process, and it too will be validated as part of receiving those results. So, and then, yeah.

Lee, can I just dive in here? Because we're short on time, I want to drive to: what do you think the best next steps are for everybody? Is it reviewing this document or running the tool? What's the call to action you'd like to make? Oh, very good, yeah, thanks for that. The place where the collective brain trust here would serve this best is this section here, the conformance test definitions. Are these the things that need to be tested? Do we have full coverage of SMI's specified functionality? Reviewing the whole doc is great, but that section specifically.

So the way you're operating with the tool, you're basically pseudo-coding the desired behavior, making assertions about the state, and building that into the tool as a set of tests? Correct, and the tests are defined in YAML, and then the tool says, well, this is what I expect to see, do I see that? Okay. And so... Hey, Lee, how much of this tool is already built? Does it exist? Yeah, well, we were going to demo today, and actually, I don't know if those folks are on. Okay, can you see the folks who can provide a demo? Otherwise, we can go to the next agenda item and then come back to this, Lee, and give five minutes at the end if that person... Sure, yeah.
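The shared-secret provenance check Lee describes can be illustrated in miniature: the utility's CI build signs the result payload with a secret only it knows, and the central results service recomputes the signature before accepting a submission, so a forked build with manipulated results can't produce a valid signature. This is a minimal sketch of the general technique, assuming HMAC-SHA256 over canonical JSON; all names and the payload shape here are illustrative, not taken from the actual tool:

```python
import hashlib
import hmac
import json

# Illustrative only: in the real setup this would be injected into the CI
# build, never committed to source.
SHARED_SECRET = b"known-only-to-the-ci-build"


def sign_results(results: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical JSON encoding."""
    payload = json.dumps(results, sort_keys=True).encode("utf-8")
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()


def verify_results(results: dict, signature: str) -> bool:
    """Reject result sets whose signature doesn't match, e.g. from a fork."""
    return hmac.compare_digest(sign_results(results), signature)


results = {"mesh": "example-mesh", "passed": 28, "failed": 2}
sig = sign_results(results)
assert verify_results(results, sig)

# A tampered payload (say, flipping failures to passes) fails verification.
tampered = {"mesh": "example-mesh", "passed": 30, "failed": 0}
assert not verify_results(tampered, sig)
```

The service-account identity answers "who sent this," while the shared secret answers "was this produced by an unmodified build," which is why both checks appear in the design.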
And I was just wondering, where can we look at the code? Where are the tool, the test definitions, and the demo app? Totally, let me drop it in there. Yeah, if you throw it in chat or in the agenda, that would be great. Very good, I'll drop it into the agenda.

This is excellent. It's great to see this come along, and I know tireless efforts have been made on your front to get this working. I think conformance is going to become a really big stepping stone for all the service meshes out there as the APIs develop, so I just want to thank you and the team of folks who have been working on it to get it this far, because I think it's going to be very pivotal. I'd like to put some effort into helping you build out the test suites, and then even running them and, on behalf of the community, publishing conformance results, just like Kubernetes, for example, has a conformance directory for specific APIs and things like that. Beautiful. Any other comments for Lee? Okay, I will leave it to you to get us the links and everything in the agenda. Put those in. Yeah. Okay, and let us know in chat if you've got folks who want to do a demo; I'll give you five minutes at the end. You know what, I saw a couple of them on, and then when you mentioned their names for the demo, they dropped out. When they heard they were going to demo, they left. Yeah. No, it's okay, it's okay. Well, we can add it to the agenda for next time if that makes sense too. Just let us know. Okay. Thanks, Lee.

Michelle, the next item is Michelle's updates. Hey, I cut a quick bug-fix release of the SDK, and I'm using the latest release and haven't run into any issues, so you may want to update if you haven't already. Also, today we are announcing a new service mesh implementation called Open Service Mesh. It is Envoy-based and SMI-native, and we're excited to add that to the SMI community.
I'll drop a link in the chat. A lot of the Azure networking folks who have been on this call for the last several months have been working on that, as well as the upstream team at Microsoft that contributes to open source projects in the Kubernetes and container space. I've been helping. So yeah, that's a thing, and the plan is to submit it to the CNCF as a sandbox project ASAP, so we can work on it as a vendor-neutral thing in a vendor-neutral space. And yeah, I think that's it. Anybody else from the team want to add anything? Okay, feel free to ping me with any questions or anything like that. Happy to answer. Thanks for all the congrats.

Hey, Michelle, I've seen your proposal on renaming the traffic access custom resource. Yeah. From TrafficTarget to TrafficAccess. Yeah. I think it's a great idea. Okay. Less confusion. Yeah, it's the access API group with the TrafficTarget resource, and it's just very confusing. We should rename specs too, but one thing at a time. Okay, cool. You all responded on that issue; I can help kick off more discussion and consensus around that. Yeah, I'll comment on it. I think it's a great idea. Okay, cool. Anybody else have any thoughts around renaming TrafficTarget to TrafficAccess? Okay, cool. And for anybody who's new or doesn't remember, TrafficTarget is the resource that defines which sources can talk to which destination under which set of rules, like HTTP routes matching specific headers, things like that. So it kind of makes sense to call it a traffic access resource. But if anybody has any thoughts, there's an open issue about it; I'll drop a link to that as well. Excellent. Thank you, Michelle. That is the end of the stated agenda. I'll open it up for any other business. Quick question on the Open Service Mesh implementation. Go ahead. Congrats.
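For reference, the TrafficTarget resource Michelle describes binds sources, a destination, and rules together. Here's a hedged example of what one looks like in the SMI traffic access API; the names are made up, and the exact `apiVersion` depends on which revision of the spec a mesh implements:

```yaml
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: web-to-api
  namespace: default
spec:
  destination:            # who may be called
    kind: ServiceAccount
    name: api-service
    namespace: default
  sources:                # who may call it
  - kind: ServiceAccount
    name: web
    namespace: default
  rules:                  # under which routes, e.g. HTTP routes with header matches
  - kind: HTTPRouteGroup
    name: api-routes
    matches:
    - metrics
```

The renaming proposal is about the mismatch visible right here: the API group is `access`, but the resource is called `TrafficTarget`.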
I'm just looking at the docs, and I've seen that even though it's Envoy-based, the ingress implementation relies on the NGINX ingress. Any plans on making an Envoy ingress work, something like Contour, which is a CNCF project? Yes, we also have instructions for AGIC, and AGIC, I believe, is Envoy-based. Correct me if I'm wrong, Zellian? Yeah, so good question, Stefan. The service mesh itself, the implementation right now, is kind of agnostic to ingress. You bring your own, and we observe the ingress resource and poke holes in the appropriate Envoys, so you can bring whichever you want. If we mentioned NGINX, it's probably just an example. I hope that makes sense. Yeah, for me, it's more about integrating with an existing Envoy ingress implementation there, so you can use the ingress to inject the certificates. Yeah, from my perspective, absolutely. From my perspective, we didn't want to lead with another hacked-up ingress implementation that was purpose-built for OSM. Rather, I know Solo has done a lot of work in this area, there are learnings in Istio for certain, and you probably have a lot of feedback. So it would be more interesting to us to use OSM to try and feel out what the right ingress solution, or set of ingress solutions, would be, because we didn't want to throw another one into the pot without using SMI as the place to discuss it, since we do know that's come up on multiple occasions. So if you're interested, we could start feeling that out, but we didn't want to go and say, well, here's Contour and here's a completely new arbitrary API that we're just introducing. Rather, let's figure it out in SMI and then bring it back down into OSM. Cool, thank you.

I'll jokingly say, man, you had to pick the same exact acronym as OpenShift Service Mesh. Oh my gosh, I didn't actually consider that. So you've got Open Service Mesh and OpenShift Service Mesh.
Did OpenShift Service Mesh exist when we ran this through the naming council? It's existed. Kind of a question for the Microsoft people. I'm not sure when that came about. To be honest, we were on a call and somebody just said, let's call it Open Service Mesh, and we were really busy working on the project, so we were like, we don't care, it's fine, just call it whatever you want. Yeah, if you want the how-the-sausage-is-made version: as it turns out, when you work at a large company, sometimes people who are not you will veto every hilarious name. Oh. You can get non-hilarious names through pretty easily. To be quite fair, that's part of the reason why, a couple of years ago, I said, let's just call it OpenShift Service Mesh. There's nobody who can really argue with that. Yep. I will also admit, I secretly pronounce OSM "awesome."

Yeah, really want to get that Kiali integration. Speaking of that, Meltron, who is the product manager for Kiali, is actually on the call here. So it's a good moment for him to talk a little about some of Kiali's thoughts on this, as well as for introductions, so that everybody knows who to bug and who to ask for things. Meltron, do you want to introduce yourself? Yeah, go ahead. Go ahead. Do you want me to say something about Kiali? What was the question, sorry? Michelle was pointing out that they're excited for Kiali's involvement with SMI in relation to Open Service Mesh, and I was merely pointing out that you were on the call and saying hello. So hello, everybody. I think I talked to Michelle a long time ago, and we started evaluating SMI, and it sounds to me like a really prominent project that we should be looking into. Of course, there are some other priorities we have to work on, but SMI is one of the things we're still looking at, to understand exactly how we can fit it into the Kiali plans. As of now, there are no hard commitments.
Well, hello, Meltron, and welcome. I think I've heard on this call several times that the Linkerd folks are extremely interested in figuring out how to get Kiali to work on Linkerd, and they have done a lot of work in shipping the metrics infrastructure that supports SMI in Kubernetes. So however we can help you get the information you need to actually make Kiali useful by leveraging SMI, we're all here for you, as implementers, to meet you in the middle. There seems to be broad agreement that Kiali is a very useful tool for service meshes, and we want to enable that ecosystem, much like what Stefan is doing with Flagger. We want tools built on top of SMI, because I think that's more important than SMI in itself; you need ecosystem tools that actually leverage it and make it useful. So don't take what's in the metrics spec as done. If there are things where you say, hey, we need these five other things, let's start poking holes, because you have an implementation, and we need to figure out what you need to get out of these service meshes so you can actually build the dashboard. Whatever's there is not complete, let me tell you that. And nobody's implemented a tool like what Kiali has on top of the metrics at this point, so I imagine we're going to need to add a lot of surface area.

Right, let me tell you one particular thing that's important to us. Ever since we started evaluating SMI, there's been one particular missing part, which is the Istio part. We haven't seen many contributions to the Istio adapter, except from the folks here on this call, and we kind of miss engagement from Google or even IBM on that. That would make a significant difference for us, especially around contribution. Okay, so what I'm hearing is that more support for the Istio adapter directly would help Kiali plug into this. Okay, that's great feedback.
All right, we're at the top of our time. Our next meeting is at this time in two weeks. If you would love to moderate it, please feel free to throw something in chat in the next 30 seconds. It's very easy to moderate; even a trained monkey like me can do it. They trained me well. Don't worry, they feed me bananas. But it's really helpful, and it gives the community meetings a lot of different personalities, and I would really appreciate it if folks stepped up there. There's nothing secret about it, and I'm happy to help if people are interested. Other than that, have a wonderful day, and it was lovely to see you all. Thanks for all the agenda items, and we'll see you at the next call. Have a great day. Bye. Bye.