All right, very good, thank you, Kevin. Welcome, everybody. It's November 25th, and this is the SMI community meeting, an extended version, so hopefully it'll be a little interactive. Today's meeting has a single topic: SMI conformance testing. If you have access to this doc, please drop your name in; it's linked in the chat. We're going to briefly spend time in the meeting minutes, and then we're going to run over to the document linked here, which everyone should have access to; it should be wide open.

Today's agenda is to talk about SMI conformance, with two specific items that I'm hoping we'll really have a call to action on. The first is to talk about what SMI conformance testing is. Some of you have worked on it, others have heard us talk about it on this call a few times, and for some of you it's brand new. So let's start with that, and then let's see if we can achieve these two things. I appreciate that there are representatives here from a number of different implementers; it's important that you're here, so welcome. If we don't know each other, my name is Lee, one of a number of SMI maintainers, here to try to help advance the spec a little.

It's been kind of a beautiful thing, from SMI's perspective at least, that there have been a couple or more implementations of SMI, and some new service meshes have been announced; as near as I can tell, the last three or so significant service mesh announcements have claimed compatibility with SMI. From a specification perspective, that's great: it's being adopted. At some point, to help advance the spec, to instill user confidence, and to help all of us understand what it means to support Traffic Metrics or any of the specifications, we want to have a conformance program.

To introduce the genesis of why we're talking about this and why we think it's important: the last time I counted, which was about two and a half years ago, there were 86 distributions of Kubernetes, and I'm sure some have gone away and more have sprouted up, but there are a lot. So if you're familiar with Sonobuoy from the Kubernetes project, we're more or less doing something similar here. We're working on conformance testing to say that if a service mesh implementation of SMI behaves in these ways, then it's compatible with this set of the specifications. I'll speak for myself: I'm not here to say what it means to be compliant. Rather, that's part of the engagement we're looking for from each of you: to weigh in on what you think the tests should be, what you consider passing or failing, and for us to collectively define a couple of terms.
Some of those terms: what it means to be conformant, which makes sense; what it means to be compliant, which sounds like a synonym for conformant; and what it means for a mesh to be capable of a given spec. I think we can wrap some definitions around these, and that might be helpful, particularly to the implementations that don't cover all the specifications and perpetually intend not to cover some of them. So let's dig into this doc. I'm going to pause here: does anybody have comments before we go into the doc, just on the surface of what we're attempting to do? Does anyone disagree?

So I have a question. My name is Daishan; I work for Rancher on the Rio project. In Rio we're actually a layer on top of SMI: we don't really implement the SMI specification, we use SMI to program different adapters, like the Istio adapter or the Linkerd adapter. Does the conformance testing still apply to Rio?

Yeah, it's a good question. From my perspective it should; the tooling that's been created, or there's work to do in the tooling, can account for Rio's use case. When you take a step back, the direction, the vector, from which the assertions are written and applied is: when you touch the SMI spec like this and configure it like this, do you see a behavior that conforms? That's how the test cases are written. Part of the goal of this particular project is to encourage those that are compatible to put an SMI logo on, to wear the SMI badge, so to speak, and wear it proudly on their project. And I'll make this statement, but also pose it as a question for feedback from others: from my perspective, really any of the projects you see here could. I say that with some hesitancy because technically Rio is flexing the spec, and the user experience in Rio would suffer if the service mesh it's using doesn't behave in a conformant way. So it would be nice, with some level of authority or validation, for all the projects here to carry the SMI badge and say: hey, we've implemented these things. Let's walk through the project and keep that question in the back of our minds. Thank you for that clarification.

Okay, I'm having all kinds of challenges with the tools today. Good, all right. When we take a look at the initiative, there are a couple of things to dig into; we don't have to read the whole spec together today. The design spec, essentially the mechanics by which the tests are executed, is something like a final draft: this is a request for comment from all of you on the approach and what's being done. To facilitate this testing, a service mesh management plane, Meshery, is being used to run a gamut of integration tests: to automate the provisioning of each participating service mesh, to deploy a sample app on that mesh, and to deploy the same sample app consistently across each service mesh, generating load where it's needed. For Traffic Metrics, for example, you might consider that more than a simple GET request is quite helpful for verifying the accuracy of the metrics coming back and whether that traffic is being accurately accounted.
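For reference, this is roughly the kind of resource whose accuracy those load tests are checking: a sketch of a TrafficMetrics resource, assuming the v1alpha1 shape of the Traffic Metrics API; the workload names, timestamp, and values are illustrative.

```yaml
# Metrics describing traffic from one workload to another,
# aggregated over the stated window (illustrative values).
apiVersion: metrics.smi-spec.io/v1alpha1
kind: TrafficMetrics
resource:
  name: sample-app-775b9cbd88-ntxsl
  namespace: default
  kind: Pod
edge:
  direction: to
  resource:
    name: backend-577db7d977-lsk2q
    namespace: default
    kind: Pod
timestamp: "2020-11-25T17:00:00Z"
window: 30s
metrics:
- name: p99_response_latency
  unit: ms
  value: 10m
- name: success_count
  value: 100
- name: failure_count
  value: 0
```

Generated load gives counters like these something non-trivial to verify, which is why a load generator matters more than a single request.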
Meshery has a load generator to facilitate that, it has adapters for each of the participating service meshes, and it also has APIs. Part of the goal here is for Meshery to be incorporated into the release process of each participating service mesh, so that, probably not for every release, but as and when there's a major release, or as and when each of the projects represented here determines it's ready to qualify its compliance, it can do so conveniently as part of its CI process.

As you look at the needs of a conformance tool: if anyone lived through OpenStack, and I'm hoping maybe some of you didn't have to go through that experience, OpenStack has a similar challenge: lots of distributions, and for a given distribution, how do you know it's actually OpenStack? Does it adhere to the OpenStack APIs? The same goes for Kubernetes, the same for SMI, the same for any spec. There's another related spec, Service Mesh Performance, which we talk about in the service mesh working group, and Meshery is a tool that helps with that spec as well. So Meshery was an ideal choice to the extent that it's an agnostic tool. The goal of the community of contributors who work on Meshery, some of whom are on the call today, is, I think, to have everyone pass with flying colors; part of the goal of Meshery as a tool is to help and encourage people to adopt service meshes. Lastly, what's needed in this project, and in a tool like Meshery, is the ability to guarantee the provenance of the test report: once the tests are run and there's a report saying certain tests passed or failed, it'd be nice if there were machinery or mechanisms in place to ensure those results aren't tampered with. Meshery has that facility today; it does that for SMP, a different spec.

Okay, hopefully that introduces a few concepts. Here's a concept that I'm going to introduce, and then I'm hoping others will weigh in with your opinions. I'm not sure there's a specific example that works very well here, but suffice to say, not every service mesh implementation intends to implement support for all of SMI's specs. I also don't think SMI will only ever have four specs; I'd expect a fifth at some point, maybe around identity or security, and some projects will desire to implement it and some won't. So the question really is: if there's a service mesh that doesn't want to implement Traffic Access Control, either the entire spec or a portion of it, because it just isn't applicable to them, then when there's a report saying these are the service meshes that are compatible, this is their state of compliance with that version of SMI and that version of the service mesh,
and their compliance with each of the specs and various aspects of the specs; you can imagine there's a matrix we're looking at here. The question is, and I don't know if "fair" is the right way of phrasing it, but is it fair, or should it be the case, that a given service mesh that isn't going to implement a portion of the spec, say it passes three fourths of the tests but isn't going to do that other spec, should perpetually be at 75 percent passing and out of conformance? That's where the terms I was referring to come in: conformance, capability, and compliance. It's been suggested in this spec that it's not as black and white, not as red and green, as you might think: if a mesh doesn't intend to have a capability, or doesn't currently have it, then failing those tests doesn't count negatively toward its overall compliance. What do you all think? Is that overthinking it? Or is it just: hey, they're only going to do three fourths, so they're out of compliance, and that's how it should be reported? Or should it be a little more granular than that? Think about it. It's one of those things where, depending on which way it went, it could make some implementations look good and some not, and at least for my part, part of my goal is to make them look good, or to highlight what implementations are doing well.

My perspective is that I think it's fine. I know that right now Linkerd accomplishes partial to none on some of them and full on the others, and I think the transparency is what's important for any service mesh. I think it says a lot that you want to make all the service meshes look good, and at some point the onus is on the service meshes themselves to fulfill these things if they intend to. And actually, if we do it like this, if we talk about compliance in terms of capability, not fulfilling one isn't necessarily a black eye, not necessarily a red mark, because you're informing users up front: don't expect this from this mesh.

I think it's helpful to the people who need it. If somebody is looking to use a service mesh for a very specific thing, like traffic split, they can go down the list of service meshes and see whether that capability is fulfilled. So it makes sense to me. I guess we'd need clarity around what "partial" means, and what happens when one service mesh partially implements something more than another service mesh; then you get into a literal gray area. But full/none, I think, is a good place to start, and then we can figure out what partial means.

Good callout. Just recollecting, I had similar thoughts, or that exact question, and my hope is that you all, and a couple of others who weren't able to come today, are more or less the ones defining that.
My hope is that the effort undertaken here, while I'd acknowledge it's probably one of the non-sexy parts of the project, sort of a burden to do, is very necessary and helpful to SMI and to people adopting in general: to be able to say, okay, if we use this interface, then we get the benefits of being agnostic and all the other benefits of SMI.

That actually leads us, Charles, to another set of questions, intended to be thought on by yourself and the others. There are four specs, and any number of statements, assertions, you could make such that if this is true, then this service mesh implementation is compliant. These are incomplete; I don't know how complete the universe of them is, and that's something I'm hoping you all will determine. The way the tests go: some of these are very simple black-and-white tests. If you deploy the service mesh, then under Traffic Access Control, is this particular custom resource present? Okay, then that passes. And some of these tests are defined in a sequential way, which we're intending to indicate by, for example, the first set of tests having two assertions that are evaluated sequentially; the second set of tests then comes through and actually flexes that capability. We're looking for feedback on that.

So I have a question here, can I? Yeah, please. Do we have dedicated tests for each API version? Because, for example, with TrafficSplit it's very important which version you support, or whether you support them all; there are breaking changes between versions. For Flagger users, that's super important.

Yeah, totally, you're right, Stefan. The answer is yes: the tooling is cognizant of that, and it tracks what service mesh version, what implementation version, and then what SMI spec version is being tested. And actually, Stefan, you probably know this better than me; a point of clarification for you: I don't recall where we landed on the individual specs. The individual specs each carry their own version number, correct?

Yes.

Okay. And overall there is no singular version, no "what SMI version are you running"; the answer would always be this version for Metrics, this version for Access Control. Yeah, we have separate API groups, and the group is the subdomain, like split.smi-spec.io.

It might help inspire confidence if there were a demo of the tooling, or just a screenshot. Actually, at the last two KubeCons, last week and the KubeCon before, we discussed this initiative and a couple of others in CNCF SIG Network and the CNCF service mesh working group, so this has been on display, on demo if you will. At KubeCons it's caveated: people shouldn't read much into passing or failing tests, because the initiative is mid-flight. But in those demos it does account for what you're saying, like what version of TrafficSplit it is.
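As an illustration of that first, black-and-white kind of assertion, and of how each spec lives in its own API group: a sketch of a minimal TrafficTarget, assuming the v1alpha2 shape of the Traffic Access Control API; the account and route names are illustrative. A presence test is essentially asking whether the cluster serves this CRD at all.

```yaml
# Traffic Access Control lives in its own API group (access.smi-spec.io);
# the simplest conformance assertion is just that this kind is served.
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: allow-prometheus-scrape
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: sample-app
    namespace: default
  rules:
  - kind: HTTPRouteGroup
    name: sample-routes
    matches:
    - metrics
  sources:
  - kind: ServiceAccount
    name: prometheus
    namespace: default
```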
And I think the actual tests should be different, right? Because you're dealing with different structures and different data types and everything.

Yeah, that's a great point, and one other people probably considered, whereas I didn't: not only does the tooling need to track SMI spec versions and service mesh versions, but the tests themselves have to be versioned. That's a great point, and it's not accounted for yet.

Good practice says you should bump the major version of your CRDs when you make a breaking change, so with that in mind I would structure the tests to have dedicated tests per custom resource definition version.

Makes sense. So with the initial structure we have here, where there are dedicated tests per spec, if versioning is added to these tests, does that hit the mark, do you think?

Yep. We should mention it here and also publish it in the final result. Linkerd, for example, only supports TrafficSplit v1alpha1; Open Service Mesh only supports v1alpha2; and v1alpha1 and v1alpha2 are not backwards compatible. Each service mesh uses a different API version, which in fact has a different structure.

Totally agree. The community of contributors has been around the track a couple of times on what that table, that matrix, looks like. Dhruv, do you consider it at a good enough point that we should show what it looks like? I think the intention is for those reports to be captured and ultimately displayed on smi-spec.io. Inside the Meshery tool itself, you'll have the ability to look over the tests and their results and look back in time, so those that are implementing, or those that are just running tests, can review them, which is great for their environment. But when those test results are published, we need to guarantee that they are in fact coming from the project itself: if Charles and Tina are running conformance tests, that the results in fact came from Charles.

If you want, I ran some SMI tests before this meeting; I can show you the details in the Meshery tool.

Okay, yeah, I think that'd be helpful for people to see.

This was the latest one, I believe, something like this. This is at a growing stage, nothing is final yet, but this is how we envision showing the details panel while you run a particular test in the Meshery tool. And to add to your point, we do have a sort of SMI table ready, which is currently over here in the landscape, but we'll probably add it to the SMI spec site. The main idea is that the Meshery cloud backend would have a GitHub app linked to the accounts each mesh specifies; whenever there's an update, they run this particular test in their CI process itself, we store the data from that test run in Meshery cloud, and later someone can use that same JSON to populate the table shown on the SMI spec site, if that makes sense.
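To make the v1alpha1-versus-v1alpha2 break Stefan mentioned concrete: a sketch of the same split expressed in both versions, assuming my recollection of the shapes is right (v1alpha1 expressed backend weights as Kubernetes quantities, such as millesimal values, while v1alpha2 switched to plain integers); the service names are illustrative.

```yaml
# TrafficSplit v1alpha1 (the version Linkerd consumes): weights are quantities.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: website-rollout
spec:
  service: website            # apex service that clients address
  backends:
  - service: website-v1
    weight: 900m              # 90% of traffic
  - service: website-v2
    weight: 100m              # 10% of traffic
---
# The same intent in v1alpha2 (the version OSM consumes): weights are integers.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: website-rollout
spec:
  service: website
  backends:
  - service: website-v1
    weight: 90
  - service: website-v2
    weight: 10
```

A test written against one shape won't even parse against the other, which is exactly why dedicated tests per CRD version, and per-version results in the published table, matter.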
The version there looks strange: alpha one slash one. Why? For TrafficSplit, for example, what does that mean?

That is the SMI spec version we're using. Currently we're just using one of the versions we define; we haven't yet been able to offer a choice of which particular SMI version to run the test against, so we're running only one version, and that's why I'll call it that for now. But we'll probably update the versions as and when they're updated in the SMI spec.

Thank you. Other questions, comments, thoughts?

So, the OSM group has probably been the most hot to trot on having conformance tests run and becoming validated in that way. There have been a couple of service mesh teams that are desirous of participating, but it's just a low priority for them, so we'll be persistent in their ear about it. Let me ask you this: does it make sense to try to make it easy for service mesh implementers, the service mesh teams, to run these conformance tests as part of their CI process? Do you feel like that's invasive to your processes? Is that the wrong place? Would you rather just run them ad hoc, or would you rather they were centrally run for you, with the SMI project taking that on? The thinking was that that wouldn't be the case: that each team is empowered with the tool, each team is helping define what the tests are, and when a team takes the tool and wants to report its test results, it builds that into its CI process and identifies a service account, a robot account, as the one that's allowed to send in test results. Because anyone can download the tool and run tests, and anyone is capable of sending test results back to the project, but we'd only want the test results that came from the service mesh team themselves to count.

So, Charles, you're the lucky winner: this is most squarely aimed your way. And Stefan and Daishan, I'm hoping there's utility and value in here; I'm hoping Rio might benefit in a similar way, though I need to give a little more thought to that myself. And Stefan, you wear a few different hats: a Flagger hat and an SMI maintainer hat. If you think about Flagger for a moment: Flagger only cares about TrafficSplit, nothing else. It doesn't use the SMI metrics API yet, because Flagger allows you to write custom metrics as well, and the two metrics SMI offers are, let's say, the minimum; people want to do a bunch of custom stuff. So from a Flagger perspective it's only TrafficSplit, and as for the exact version, Flagger doesn't implement anything but the first version, the one that works with Linkerd.

Got you. And for those that are not implementing Service Mesh Interface as a provider, but as a tool that automates things on top of the API rather than as the service mesh itself, this kind of testing counts a lot; I think it's very important. I find it very important to have such an insight: when I look at the table, I know, okay, TrafficSplit is supported at this version by these providers and at that version by those providers.
That would be awesome information. And when you're troubleshooting a bug, you can immediately, hopefully immediately and confidently, figure out where you should spend your time: is Flagger having an issue, or is the service mesh implementation having an issue?

Yeah, and there are many things. First of all: should I implement v1alpha3, with headers? Okay, the API is there, but if no one has actually implemented it, why should I implement it in Flagger? There'd be no such capability in the underlying infrastructure, and for Flagger that's the service mesh itself.

Right, so it also helps in deciding when to implement and what specific version; where to invest. Fair enough.

So, the calls to action today. Daishan, hopefully part of what Stefan was just saying helps your thinking about whether this set of tooling is valuable to Rio.

Yeah, my thought is the same as Stefan's, because in Rio we also only use TrafficSplit, and we're looking forward to seeing whether SMI supports routing, since we also have a routing portion. Right now, to do routing in Rio, we have to program against specific things, like the Istio or other service mesh CRDs. If SMI supported traffic routing, we could just program against the SMI spec and wouldn't have to program against different implementations; and if we see the conformance tests pass for those providers, we can just program against the SMI spec and not worry about different CRDs on different implementations. Right now we're just using TrafficSplit, the same as Flagger, but as more implementations pass, we can see the test results, see that it's working, and just program to SMI without worrying about each implementation.

Makes sense. Okay, so the two calls to action are, one, to weigh in on the test cases, on the assertions themselves.

Lee, I do have a question about the test cases, the assertions we want to describe, and this question is probably more specific to Stefan, because he's the maintainer for SMI. We define a lot of assertions for every spec, and these assertions define the best practices and a lot of validation cases we apply to that particular spec. Does a set of assertions apply to one particular version, or is it common across all versions? And should it be common across all versions, or should we have a separate set of assertions for each and every version of that particular spec that gets released?

Wait, it's even more complicated than that, because some features depend on two different APIs at specific versions. For example, if you want to define HTTP routing based on cookies, HTTP cookies, then you need more than just Traffic Split: you need Traffic Spec as well, and Traffic Spec at a certain version, because the spec didn't have headers matching rules back then. So a test for routing based on headers needs to match two APIs, each at its own version.

Understood. So we would need to account for all of the combinations, and not just the independent versions of each spec?

I don't know; I would do it only when a new feature is introduced.

Understood, yes. Thanks for that.

You're welcome.
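A sketch of the two-API case Stefan described, assuming the v1alpha3 shapes of Traffic Spec and Traffic Split (where, as I understand it, TrafficSplit gained a matches field referencing an HTTPRouteGroup); the route names and cookie pattern are illustrative.

```yaml
# Traffic Spec: a route group matching requests that carry a session cookie.
apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: session-routes
spec:
  matches:
  - name: with-session-cookie
    pathRegex: ".*"
    methods: ["*"]
    headers:
      cookie: ".*session=.*"
---
# Traffic Split: only traffic matching the route group above is split.
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: session-canary
spec:
  service: website
  matches:
  - kind: HTTPRouteGroup
    name: session-routes
  backends:
  - service: website-v1
    weight: 90
  - service: website-v2
    weight: 10
```

A conformance test for header-based routing would therefore have to pin both API versions at once, which is the combinatorial wrinkle being raised here.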
So, my guess is that the OSM team and the NGINX team are probably the most ready and willing to engage, and Charles, I'm not saying Linkerd only because you're standing right here, to define some tests and get them over the wall. There are a few open source contributors in Kuma land who are willing to engage, so I think Kuma will probably come along. I got crickets in the Istio community about this.

Yeah, I'd say for us, we want to implement these; they're on the roadmap, but further down, not on the immediate part of the roadmap. I know it's the desire of the team to get support for all of SMI, and it's really interesting to see what OSM and NGINX are doing with their implementations. So I could see that this conformance would maybe encourage us, motivate us, to move things around on the roadmap, but again, I know there are quite a few other items considered higher priority at the moment.

Makes sense. If I were in any of the other shoes, I think that would perpetually be the case: I would perpetually be focused on features and functionality up to the point that there are users actively consuming a spec and complaining, hey, we're trying to use the spec and your mesh doesn't work with it; then it would bubble up. To your point, when a project like this shines a light on it, it helps some in the priority ranking.

If it makes sense to those on the call: the community has been working on this for a long time, and it's a lot more challenging than I personally had hoped, so I'm eager to claim some small amount of victory and get something done and out of the way. I've heard TrafficSplit a couple of times now; it strikes a chord with Flagger, it strikes a chord with Linkerd, so maybe that's the right spec to make sure we've gone through a few times and that the assertions make sense to folks. My suggestion is that we don't make it any harder on everyone than it needs to be; the tests should be valid, but also, the more tests you have, the longer they can take to execute. The tests themselves are defined in YAML. The same sample app is being used throughout; it's lightweight and essentially custom-written for this use case. I think it's just a small Go program with an HTTP interface, instrumented with Prometheus so that it can help with some reporting, but other than that it's pretty lightweight.

Okay, good. Well, here's my suggestion.

I can help out, if you want, with writing TrafficSplit tests. In Flagger there are end-to-end tests for every single implementation: SMI, Istio, App Mesh, Contour, NGINX, and so on, mostly testing traffic and metrics. So if you want me to look over those, I can.

I'm about to come out of my seat, I'm so excited. Yes, please, that would be lovely.
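To give a feel for those YAML-defined tests, here is a hypothetical sketch of what a sequential TrafficSplit assertion could look like; the format is invented for illustration and is not the actual schema in the conformance repo.

```yaml
# Hypothetical test definition (illustrative format, not the repo's schema).
name: traffic-split-basic
spec: split.smi-spec.io
specVersion: v1alpha2
assertions:
# Evaluated sequentially: presence first, then behavior.
- name: crd-present
  description: The TrafficSplit CRD is registered and served.
- name: weights-respected
  description: >
    With a 90/10 TrafficSplit applied and load generated against the
    apex service, observed traffic to each backend is within tolerance.
  tolerancePercent: 5
```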
By the way, in case this isn't obvious to everyone: I'd like to wash my hands of this project fairly soon. We've got so much invested; I want to see it be successful and hopefully help people. I want those who are participating, like Charles, or Stefan, or Daishan, to find it advantageous that they put in the time: they can wear a badge on their project, and it advances us all collectively. What I was trying to say, and I want to make this clear, is that I have no personal investment in what any of the tests are: whatever you think they should be, whatever you consider appropriate. So, Stefan, it should have occurred to me earlier that of course the tests should be versioned, if for nothing else than to say, hey, at one point we'd all agreed they look like this; even if SMI were static, we learn things, and so we'd move to v2 of the conformance suite. And SMI isn't static, and service meshes themselves aren't static.

Yeah, and the compatibility report needs to be something of a pivot table in some respects, or we could have a table per API type: a TrafficSplit table with all the service meshes and all the versions. I don't know; an easier way to read it than listing it all out.

Yeah. So without other comment, mission accomplished, for me anyway. Stefan, this doc is the source of truth, if you will; well, it's between this and the YAML representation of these. I think the repo is here; it contains both the sample app, the one that's being used, as well as the individual conformance tests for each of the specs. A couple of GSoC and CommunityBridge interns have come through to help build the tooling, and one of the tools used under the covers here is KUDO, although it's causing some needless pain. The point of saying that is that these are written out in much the same way you find in that doc; this is the realization of it. Some action items for those helping advance this: examples of how to use the REST API of Meshery, or mesheryctl, the CLI, to invoke conformance tests for a given service mesh would be helpful for Charles and others, I'm sure.

Fair enough. I also think it would be super useful to have something like a GitHub Action that generates, I don't know, a table with the tests and yes/no results, so anyone can put it in their own CI and run it. I'm not a service mesh developer, so it's just an idea.

Yeah. Charles, I assume that's a happy smile about that. I'm going to put that suggestion in here, because for those using GitHub Actions it would be convenient. Charles, do you use GitHub Actions?

Yeah, we do.
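A sketch of what Stefan's suggestion might look like in a mesh project's CI; the conformance action name and its inputs are hypothetical placeholders, not a published GitHub Action.

```yaml
# Hypothetical workflow: the conformance action and its inputs are
# placeholders for illustration only.
name: smi-conformance
on:
  release:
    types: [published]
jobs:
  conformance:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    # Hypothetical action wrapping the conformance suite: runs the SMI
    # tests against this mesh and emits a pass/fail table.
    - name: Run SMI conformance tests
      uses: example-org/smi-conformance-action@v1   # placeholder
      with:
        service-mesh: my-mesh                        # placeholder input
        report-path: smi-results.yaml
    # Surface the results table in the job log.
    - name: Show results
      run: cat smi-results.yaml
```

Results sent back to the project would then come from the repository's own robot account, which lines up with the provenance requirement discussed earlier.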
Okay, anybody have any other comments? Good comments. So the plan here is that this is a continued thread inside the standing SMI community meetings; those are only half an hour long, so we'll just give updates about the progress. We'll try to make it as easy as possible, Charles, for you and others to pick it up and run it, and then we'll be asking, well, we're asking now, but I'll do it even more vocally, about the assertions and whether or not you think they're right. I'm still trying to round up specific contacts for all of the meshes that are participating; I think there were a couple who wanted to be on today's call, but it's the day before Thanksgiving, so people need to rest up before they go stuff their faces. Anybody have anything else? Mr. Connors, curious for your feedback.

Oh, I'm just listening at the moment, trying to catch up with where things are on SMI. We've been exploring this a bit. As you may know, we're heavily in the Istio camp at the moment, and we have a product based on that, but SMI is definitely something we're tracking.

From that vantage point, do you consider this effort helpful, or just an aside?

This is very useful, and it's something we've been asking for within the Istio community for quite some time; there are no TCKs or anything like that there, no compliance or conformance tests. We have a fork of the Istio code base that tailors it for OpenShift and adds other capabilities we think are important to us, and obviously we'd like to have some TCK we could run against that, but it doesn't exist. So yes, this is very much of interest. I've come through lots of the Java standards bodies and W3C standards bodies and the like, so I'm used to TCKs and such existing.

Is the conformance testing you're referring to with respect to Istio in regard to SMI, or do you mean in regard to Istio's APIs?

That was a general statement about Istio rather than SMI itself, but from a company perspective we are certainly interested in tracking SMI and what's going on with it, especially with its involvement in the CNCF.

Nice. Last comments? Happy Thanksgiving, thanks for coming, see you after the holidays. Thank you, bye. All right, thanks a lot.