Welcome everyone. We usually start about five minutes after the hour. Let me go ahead and stick the link in the chat so people can record their attendance there. If you could add yourself to the attendee list, that would be fantastic. Do be aware that these calls are recorded and will be posted up. Also feel free to go ahead and add things to the agenda; we run a pretty open agenda. Most of the stuff we have on there currently is actually sort of our stock stuff, and it doesn't take very long to get through. Hello, we'll give it a few more minutes and then we'll get started. Sounds great.

Okay, two things: can somebody post the meeting notes, and can somebody share the meeting notes? Okay, let's get started. So welcome to the next Network Service Mesh meeting. We have this particular meeting every Tuesday at 8am Pacific. We also have an Asia-friendly meeting, which occurs, I believe, at 3am Pacific time every other week; we should have had one this week, so the next one should be in two weeks. We also participate in the CNCF Telecom User Group, which occurs every first Monday at 8am Pacific and every third Monday at 3am Pacific; the next call will be on the first Monday of next month. We also participate in CNCF SIG Network, which occurs every first and third Thursday of the month at 11am Pacific.

We have a couple of major events coming up. We have Introduction to Network Service Mesh at the Cloud Native Austin meetup, which is occurring on the 20th, on Thursday. On March 18th, at the Go San Francisco meetup, we have Cloud Native Zero Trust, which I will be delivering. We have KubeCon + CloudNativeCon Europe coming up March 30th through April 2nd, which will be at the RAI Amsterdam; the schedule has been announced. We also have NSMCon coming up; the CFP for that closed on the 14th, and we will have the schedule out shortly. There are still some sponsorship opportunities available, so please consider sponsoring if you have the ability to, or if you're able to refer somebody who can. We also have Open Networking & Edge Summit North America in Los Angeles; the CFP for that is already closed, the schedule will be announced early next month, and it will occur April 20th through 21st. We have KubeCon + CloudNativeCon China; the CFP will close February 21st at 11:59pm Pacific time, CFP notifications will be out May 11th, and the schedule will be announced shortly after. And we have ONES Europe on September 29th and 30th; the CFP will close June 7th, with the schedule announced shortly afterwards. We have KubeCon + CloudNativeCon North America; the CFP for that will open on April 22nd and close June 12th. We should also be having an NSMCon at that time as well, but nothing is announced yet, so make sure you keep that in mind.

A couple of announcements: we moved the agenda item at the very top, a quick update on the NSM operator, down into the main agenda, so that'll be good. And as a reminder, we have a new project page, so if you're looking for things to do or want to see what's going on, the project page is very informative.

Do we have anyone from the social media community team on? Yes, I am on the call. Hi everybody. So it's been a busy week as far as social media goes. Now that the KubeCon schedule has been announced, and with the deadline for NSMCon CFPs last week Friday, it was really busy.
With that being said, we gained 11 followers on Twitter, followed four accounts, and had a total of 34 tweets and retweets. As mentioned, a lot of that was CFP deadline reminders; there was a tweet that went out trying to gather some people to sponsor NSMCon, as well as a tweet thanking everyone that did submit CFPs, and that same tweet announced that the schedule will be out on Friday. There were also individual tweets for each Network Service Mesh session that will be presented at KubeCon. There were some general call reminders, meeting recap videos that went out, some CNCF news as far as the weekly webinars that were happening last week, some events coming up (OSS in Austin and ONES in LA), and we also tweeted about the cloud native meetup happening in Austin this week. That'll get some more attention over the next few days, just to further promote it and get as many people to attend as possible. And just some general retweets about open source, service mesh, etc. On LinkedIn, we are back to gaining 10 followers this last week, so that was exciting to see, and we posted the same content on LinkedIn as the original content that went out on Twitter. On Twitter we are at 696 followers, so hopefully within the next few days we will reach the next goal of 700 followers. And we will just continue to promote as we have been: NSMCon-related and KubeCon-related events and anything else that comes up. So that's it for me. Thank you.

That is awesome; we do appreciate all that you do. I know for me personally, and I suspect this is true for a lot of the other folks who are involved with the project, all of the social media stuff is black magic, and you guys make it look so easy. So it is much appreciated. You're welcome. Cool. Awesome.

Is there an update on the Asia call? Someone who was there, can you give us a rundown and pass on any questions? Just a few people, Nikolai and team, but mostly nobody from Asia itself this time. So we just had some requests and a bit of discussion. Yeah, we were mostly just using the time to talk developer stuff. There were a couple of topics which we discussed, so here they are. So is there anything that needs some action in this meeting, or are you all good at this time? No, nothing really; at least, I don't think so. I don't know, Andre? Yeah, what's the question? Sorry, I missed it. I was just asking if there was anything that popped up in the last meeting that would be better answered with this particular group, since we have a different audience. But it sounds like there might not be. Yeah, it's cool. No worries.

We have an update on the NSM operator. There's no name on that, so I assume... Yeah, it was me, Alexander. You are the usual suspect for this topic, so yes. Yeah, I know. I saw there was an Alexander on there, but the last name looked different. Okay, no problem, no problem. Yeah.

So hello everyone. I'm actually getting close to hitting the PR on both what we call the community operators repo and the upstream community operators repo, so that the NSM operator will be kind of shipped by default with Red Hat OpenShift, and you can also install it automatically using OperatorHub.io. So I'm almost there. I'm changing a few things, and I should have documentation on that by the end of the week. And it will be really, really easy to install everything.
So, yeah, basically to say that the PR is to come any day this week, and to say also that that animated GIF you guys provided me gets giant when I try to convert it to base64; it makes quite a file. I can convert it to whatever at this point, so just ping me on Slack and we should be able to sort that out, and also let me know in Slack what size you would like it to be. Okay, okay. Yeah, I don't know if it will work on the operator hub, the embedded OperatorHub inside OpenShift, because the icons are pretty small there. I don't know, I don't really know. I can even share my screen and show how it is. Do you want to see that? Yeah, go for it. We will have fun seeing it. Okay, let me try to share, just a second; so many desktops open. It's this one. Yeah.

So basically, can you see my screen? Yes. Which one are you seeing? Are you seeing OpenShift, or... We're seeing a web browser that says "Installed Operators." Okay, cool. So when you log into OpenShift, you get this home page with dashboards and everything. If you go into Operators and go into OperatorHub, you can see that we have this kind of app-store-like experience where you have a lot of operators here, installing many applications by default using the Operator Lifecycle Manager. And now, if I type NSM here, I find the Network Service Mesh operator. So with that, see, the icon is pretty small here. I tried to put a giant thing in; I don't know what is going to happen, but it goes from 56 kilobytes to 7.5 megabytes. So if you can give me a size, I can jump to whatever size you'd like. I can try that.

I had been thinking that you were dealing with sort of a web-pagey thing, where a web browser will scale it to whatever size, but give me an icon size and it's literally two minutes to go and export to that icon size. I don't think this is the problem, though, because we have a YAML file, the ClusterServiceVersion, that we use to implement this whole infrastructure. And in that YAML file we have what we call spec descriptors; with the spec descriptors we are describing the fields that we have under any operator. So for example, here, I would install the Network Service Mesh operator just using this screen. We have some instructions, we have the repository, and a lot of other information. It's already installed because I'm testing now. And when I come into Installed Operators, I will see the operator running and I can see the provided APIs here. So here, when I click on Network Service Mesh, which is the one I would like to implement and run, I can click Create NSM, and then I see the file. What the x-descriptors do is something like this: I can transform this YAML into a form, a web form. This is broken, by the way; this is why I am messing with it right now, and why I'm not able to fully demo the installation here, because some of those types are not exactly accurate with the code underneath. But yeah, those x-descriptors and the images, they have a field inside with, I think, PNG and GIF formats; those two formats are the ones that will be allowed.
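(For context on the icon problem being described: the CSV embeds the icon inline as base64 text, so the image's size lands directly in the YAML file. As a rough, hypothetical sketch of checking what a given image will cost before pasting it into the CSV — the file name and output are just examples, not from the project — something like this little Go program would do:)

```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("icon.gif") // hypothetical input file
	if err != nil {
		fmt.Fprintln(os.Stderr, "read icon:", err)
		os.Exit(1)
	}
	encoded := base64.StdEncoding.EncodeToString(raw)
	// base64 inflates the payload by roughly 4/3, and the result sits
	// inline in the ClusterServiceVersion YAML, which is why a large
	// animated GIF makes the CSV file (and your editor) crawl.
	fmt.Printf("raw: %d bytes, base64: %d bytes\n", len(raw), len(encoded))
	preview := encoded
	if len(preview) > 16 {
		preview = preview[:16]
	}
	fmt.Printf("csv icon field would start:\n  icon:\n  - base64data: %s...\n    mediatype: image/gif\n", preview)
}
```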
So I don't know if I can put an animated GIF there, but I can transform it to base64, change it to PNG, and try to deploy and see what happens. And that's it; this is the final result on the screen. What could possibly go wrong? Maybe it won't render, I don't know, or it gets too heavy, too slow. I actually don't know; I need to test. Yeah, but the file... sorry, go ahead. I was going to say it was optimized for 8K full screen. Wow. Yeah, I don't know what would happen. What I can say is that my VS Code gets really, really slow with a base64-converted image like that; not because VS Code is processing the image, but because it becomes a text file of 7.5 megabytes. So that's my main concern when I saw it. Okay. It will be a little bit weird working with this file, because I have content above the image and below the image, and in between there are tons and tons of base64 text, and I'm writing in the middle of the file. Yeah, it's kind of hard to manage, but I can try. Even people on my team were curious about it; they wanted to know what happens if we put it there, because no other operator is trying to do that. So, among the other firsts for Network Service Mesh. Well, let's try it. Let's try it.

But first I need to fix a few types on those fields and finally run the ultimate testing tool, the scorecard. That will probably put me into a place where I need to put a status field on the NSM object as a whole. So I'll put a very simple one, because in order to be in OpenShift, I need a status field. It's the one that will report back when I have something installed: whether it succeeded, what status things are in. But that's not here in the operator; it's in Network Service Mesh itself, when I run this guy here. So that's it. We should have the PR really soon. That's my update for today.

Nice. So one thing that we would like, if and only if you are comfortable, once all this stuff is working: we'd like to see about creating a new repo for this stuff to live in, in the NSM organization, and setting it up so that you and a couple of others can have access, to help us with it and also to give it a place to live. One of the things we will want to do is make sure that this stuff also drives through CI, so that as we make changes, we find the things that break the operator and can make sure they get fixed over time. Is this something you'd be interested in helping with? Yeah, yeah, sure, sure. I would be glad to. I'd just need a fork; if you put it under the NSM org, it's not a problem, and we can transfer everything there. That will be a little bit of a pain too, because there is the Go path and some other things we'd need to change, but it will work. We can change everything and put it there, and then I just need to fork; that's enough. I can contribute from there, put PRs there, and have other people reviewing everything. And it's cool, because if we can put more people to work on it, that's the goal. Fantastic. And that means that we have resources from the CNCF and Packet and a few others, who have been very generous in giving us resources to CI these types of things. So I think landing it in the repo and then getting those things wired in is high value, and we'll start working towards that. Sure, sure. No problem. I hope next week...
...maybe even without having the PR, I think next week I may be able to do a full demonstration of it. That will be cool. Cool. We definitely look forward to seeing it. Okay, cool. Thanks. Sure, my pleasure.

Okay, so was there anything else, on the agenda or not, that people would like to discuss? Yep. There was this thing that came up in our previous conversation, and I think it's probably good to discuss it here, in front of the group. It's more or less: how are we moving, and what are the actual plans with this refactoring that is going on? I think it's worth it that people hear about our plans and where we are. I mean, we went through the initial presentation about the process, but maybe as a reminder, and as a kind of intermediate report of what's going on, I guess it would be good to discuss this a little bit too. I don't know how people feel about talking through this. Sure, sure. I apologize, I was briefly distracted by something. I think you were probably wanting to discuss the refactoring stuff and timing. Yeah, I mean, where we are, how we are moving. Shall I share the presentation, or do you want to share? How do you want to proceed here? Yeah, totally. You mean the presentation on the repo pipelining and stuff, or on the... I think they're somehow interleaving, if not the same. Give me a second, hang on. We can talk through what's going on there. One second; Google Docs is being slow today.

I'll go through kind of fast, so do ask questions if you have them. We've talked about this in the community before, but I'm very much against the notion that you should have to attend the community call, or that if you miss a community call you miss out. I hate it when people say "oh, we already discussed this." No, no, we can keep discussing this. Right.

So, the repo pipelining. This is stuff that we have already started undertaking. We've got our mono repo, networkservicemesh/networkservicemesh, and it's gotten to be very large and complex. The CI for the mono repo is very long, which encourages larger changes: since it takes an hour and a half to run the CI, people don't do smaller things. It also, I think, discourages contribution, and it slows development velocity. If you look at our DevStats reports, and I can bring them up, you can see that.

When we originally put the stack together, I'd done an initial proof of concept with some of this stuff. So you had the mono repo, which took about an hour and 20 minutes to run CI. And then you had a pipeline of repos. You had api, which we've effectively set up with GitHub Actions, to not only run CI on these repos but to auto-push PRs to update the downstreams. So api takes about a minute and 20 seconds to run its CI; about 30 seconds later, a PR turns up in sdk, basically updating it to where api currently is, and that takes about a minute and 20 seconds to run its CI. Once you merge that, then about 30 seconds later stuff pops up in sdk-vppagent and sdk-kernel, and they can run their CI. So the total end to end, not counting the human review time, ends up being very, very quick. And so it becomes very, very doable to go through and do rapid development.
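(For flavor: the auto-propagation being described is conceptually just "bump the upstream dependency and open a PR." Here is a hedged sketch of that step in Go; the repo names match the ones mentioned above, but the bot logic is illustrative, not the project's actual workflow.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command in the downstream checkout, failing loudly.
func run(dir, name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	downstream := "./sdk" // a local checkout of the downstream repo
	upstream := "github.com/networkservicemesh/api"

	// Bump the downstream's dependency to the upstream's new head...
	run(downstream, "go", "get", upstream+"@main")
	run(downstream, "go", "mod", "tidy")
	// ...and commit on a branch. Actually opening the PR would use the
	// GitHub API or the gh CLI; that part is omitted here.
	run(downstream, "git", "checkout", "-b", "update-api")
	run(downstream, "git", "commit", "-am", "Update "+upstream+" to latest")
	fmt.Println("the pushed branch becomes the auto-generated downstream PR")
}
```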
And so the proposal was that we go to a pipelining scheme sort of like this, where api is at the top level. API changes get auto-propagated to sdk as PRs; sdk can then get auto-propagated to the various platform things, like sdk-vppagent, sdk-kernel, sdk-sriov, whatever. And those can all be propagated to a series of command repos, where each command repo builds a single Docker container for a command (and actually, I just talked to some of this on the last slide). Then, when things merge into the commands, they can auto-propagate to update a repo with helm charts, or to update an operator repo, which can run their own CIs on those things. At the level of helm and operator, when you're talking about integration tests, you're talking about having per-platform integration repos, like integration-k8s-packet or integration-k8s-aws, that feed off of integration. And so things propagate through the system. If I want to go fix a bug in sdk, that ends up being a very quick CI cycle that propagates through the system. Now, you may discover that you have a problem downstream, and so we do want to talk about failure detection and remediation.

One of the things that's actually true of this model is that it encourages stronger unit testing. Let's say, for the sake of argument, we merge a PR in an sdk platform repo. That propagates to an sdk command, it passes, and it gets propagated to the helm charts and the operators; they go up and hit the integration repos, and some platform has a failure. You chase back the failure and discover that it was actually this change in sdk that caused it. So you fix it in sdk, you add unit tests to make sure we don't have quite that failure again in sdk, the PR with the fix merges, it propagates through the system, and integration gets a PR that can actually be merged and brings it up to date. And please note that at each step, we can choose to merge these things if and only if they actually pass the local CI.

This has advantages: it gives us a clean roadmap to introduce new platforms (you just stand up the repo), it's a much faster CI experience for users, it biases towards catching things early rather than late, and it allows for the formation of sub-communities. We've already got some of this going on in an early form around the SR-IOV stuff; Alex, I think as soon as you can find friends, you could probably form one around the operator, that sort of thing. So that's the repo pipelining. Do folks have any questions on that? Is this sort of what you were looking for, Nikolai? Oh, yes, yes, yes.

And where we are right now: we're still getting the pieces put in place for api, sdk, sdk-vppagent, and sdk-kernel, so we haven't quite gotten to the commands yet, but I'm hopeful we'll get to our first commands this week.

So, the other two things around refactors. We were talking about moving the forwarder to being just another cross-connect NSE. This was talking about the current state, where we have the network service API, where you go through and say, okay, I'd like to request the network service, or close the network service; we've got the registry API; but then we also have this cross-connect API and this forwarder registration API, where the cross-connect API is just bringing two connections together. And as we gained experience, we've realized that this makes things very complicated.
So, you know, the current sequence diagram is essentially: a client comes in, the manager makes a request, the network service endpoint gets back its connection, the manager then sends a cross-connect request to the forwarder, gets that back, and sends the connection back to the NSC. One of the problems with this is that we've got no really good way for the forwarders to indicate, a priori, things like "no, I actually can't do that SR-IOV VF for you," for example.

So the proposal going forward was to keep the network service and registry APIs, and have the sequence diagram basically run as a chain. You go to the manager; the manager makes a request to the forwarder (by the way, these are color coded, so when colors match, that's a request and its return); the forwarder puts the mechanisms it's willing to do for that particular connection into the network service request; that gets to the network service endpoint; the network service endpoint responds with its selection; that gets sent back to the forwarder; the forwarder then, having gotten the piece that goes towards the NSC, makes its selection of where it wants to drop in on the NSC side based on its preferences; and then it comes back.

This has the advantage that it's a simplification: there are fewer APIs, and the forwarder just becomes another passthrough that offers cross-connect as a service. It allows forwarders to do resource reservation: if I get an incoming request, I can reserve the resource when that request comes in; I can hold that resource when I send the outgoing request to the network service manager; and when the network service manager comes back and tells me what the far end wants, I can assign or release that resource. It also massively simplifies healing, because instead of having a bunch of different things we have to do with a bunch of different timers, it basically becomes "what to do at the next hop" as the chain goes down. You also get no special cases for forwarder versus any other NSE, which means you can use common SDK elements for both. So if I'm writing, say, a virtual router, or I'm writing a cross-connect NSE as a forwarder, they're both going to use, for example, the mechanism SDK pieces that are in common. Multi-forwarder simply becomes iterating through the locally available forwarders, so you can have local forwarders specific to particular nodes; they don't have to be a DaemonSet. This is particularly important for SR-IOV, where one node may need a forwarder that can program the SR-IOV NIC and another may not, or one node may need one that can program the particular SmartNIC specific to that node, and another may not. And again, as I mentioned, you can use the same SDK for NSEs and forwarders. I won't walk through the activity diagram; there's a link on the slides to it, and I'll stick the link to the slides into the chat for folks. Everybody okay so far?

I feel like this is a little bit of a monologue right now. It's not a monologue. Yeah. So this is sort of just setting the stage, and then there's the path stuff. Our healing is complex; as we refactor from the forwarder to a cross-connect NSE, we need to rethink the healing, because the current healing, with lots of timers, is rooted in the cross-connect API. And so path emerges from this rethink.
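(To make the "forwarder as just another chain element" idea concrete, here is a minimal sketch in Go. The types are simplified stand-ins, not the real networkservicemesh API; only the shape of the pattern matters: the forwarder element appends the mechanisms it can actually support on the way in, and commits or releases its reservation on the way back.)

```go
package main

import "fmt"

// Simplified stand-ins for the real API types.
type Mechanism struct{ Type string }
type Request struct {
	MechanismPreferences []Mechanism // what the forwarder offers toward the NSE
	Mechanism            *Mechanism  // what was actually selected
}

// Server is the chain-element interface: handle the request, call the next
// element in the chain.
type Server interface {
	Request(req *Request, next Server) (*Request, error)
}

// forwarderServer is a pass-through element that negotiates mechanisms,
// instead of being driven by a separate cross-connect API.
type forwarderServer struct {
	supported []Mechanism // e.g. only nodes with an SR-IOV NIC offer SRIOV
}

func (f *forwarderServer) Request(req *Request, next Server) (*Request, error) {
	// On the way in: advertise what this forwarder can actually do, so the
	// endpoint can select a priori rather than failing later.
	req.MechanismPreferences = append(req.MechanismPreferences, f.supported...)
	conn, err := next.Request(req, nil)
	if err != nil {
		return nil, err // a resource reserved here would be released on error
	}
	// On the way back: the endpoint's selection is in conn.Mechanism; the
	// forwarder can now commit (or release) any reserved resource.
	return conn, nil
}

// terminalNSE plays the endpoint: it picks the first offered mechanism.
type terminalNSE struct{}

func (terminalNSE) Request(req *Request, _ Server) (*Request, error) {
	if len(req.MechanismPreferences) == 0 {
		return nil, fmt.Errorf("no usable mechanism offered")
	}
	req.Mechanism = &req.MechanismPreferences[0]
	return req, nil
}

func main() {
	fwd := &forwarderServer{supported: []Mechanism{{Type: "KERNEL"}, {Type: "VXLAN"}}}
	conn, err := fwd.Request(&Request{}, terminalNSE{})
	if err != nil {
		panic(err)
	}
	fmt.Println("selected mechanism:", conn.Mechanism.Type) // KERNEL
}
```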
So essentially, the path proposal just says: well, we keep network service mesh as it is. We've got network service and connection in the API; the proposal is to keep that basically as is, and then introduce a path into the connection, where the path is a list of path segments. Those path segments have tokens, as well as the name and ID of what you're passing through, and we're also looking at adding metrics to them. The net result is that you can authenticate at every step, you can authenticate the entire chain, you can do a lot of policy about the entire chain, and healing becomes very, very straightforward to do in a very localized way.

I've got an example here on restart. The client is talking to the endpoint. (By the way, there's an activity diagram with the whole thing back here, but I don't want to spend all the time walking through it.) So the client talks to the endpoint; let's say there's an existing connection, and the endpoint restarts. The endpoint restarts; the client, through monitoring, gets its connection to the endpoint back, gets its initial state transfer, and discovers that a connection it believes it has isn't in the endpoint. So it simply re-requests it. It's fairly straightforward.

On the client restarting: the client restarts, and the endpoint still believes it has a connection. Each path segment has an expiration timer on it, so when that expires, the endpoint basically says, okay, we're done. This also means, by the way, that clients are constantly refreshing themselves, and refreshing their credentials and policy. So if, for example, you decide to change your policy, the worst-case exposure for someone being in violation of that policy is the expiration timer; after that, everything fixes itself up.

So client restart ends up being... I'm sorry, this is network service manager restart. For network service manager restart, it ends up being much the same way. If your network service manager restarts, your client discovers that it wants a connection that isn't there, so it asks for the connection back, and since it's got the path, the network service manager knows which forwarder to send it to; the forwarder may not even know there's a problem yet, right? So the request goes via the network service manager, who sends it on to the NSE, because again, that's in the path, and you end up healed. You could also get the case where the forwarder initiates the healing, but I suspect that would be less likely.

So the advantages here: it's a simplification, because you've got a single behavioral flow everywhere. Robust auto-healing becomes a property of the system: you can heal if all components but the leaf client restart, which is kind of cool; we sometimes call this "chaos gorilla." Healing only flows forwards, not backwards. This is actually really important, because if you try to make it flow backwards, you get all kinds of crazy proliferation of timers, and timers are really hard to manage, so we try to keep timers very localized and simple. Healing is indistinguishable from refreshing your authentication token, right? So what you do routinely works. And if the question about healing becomes "well, does healing work?" — well, we're doing this behavior all the time. It's also more secure: connections expire unless they're refreshed, so if policy changes or authentication expires, the connection goes away.
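(A rough sketch of the path idea, again with illustrative types rather than the real API: a connection carries a list of path segments, each with a token and an expiration, and an element simply drops state whose segment has lapsed instead of coordinating distributed timers. Refresh and heal are the same operation.)

```go
package main

import (
	"fmt"
	"time"
)

// PathSegment is an illustrative stand-in: each hop in the chain records
// who it is and a token with an expiration time.
type PathSegment struct {
	Name    string
	ID      string
	Token   string
	Expires time.Time
}

type Path struct {
	Index    int           // which segment "I" am in the chain
	Segments []PathSegment // client -> mgr -> forwarder -> endpoint, etc.
}

// expired reports whether the segment pointing at us has lapsed. An endpoint
// that sees this simply forgets the connection -- no cross-component timers.
func (p *Path) expired(now time.Time) bool {
	return now.After(p.Segments[p.Index].Expires)
}

// refresh is what a client does routinely: re-request with a fresh token and
// expiry. Healing after a restart is indistinguishable from this refresh,
// which is why "what you do routinely works."
func (p *Path) refresh(now time.Time, ttl time.Duration, token string) {
	seg := &p.Segments[p.Index]
	seg.Token = token
	seg.Expires = now.Add(ttl)
}

func main() {
	now := time.Now()
	p := &Path{
		Index: 1,
		Segments: []PathSegment{
			{Name: "nsc", ID: "client-1", Expires: now.Add(30 * time.Second)},
			{Name: "nsmgr", ID: "mgr-1", Expires: now.Add(30 * time.Second)},
		},
	}
	fmt.Println("expired now?", p.expired(now)) // false
	// If the client never refreshes (say it restarted and forgot the
	// connection), the segment lapses and state is dropped locally.
	fmt.Println("expired later?", p.expired(now.Add(time.Minute))) // true
	p.refresh(now.Add(time.Minute), 30*time.Second, "new-token")
	fmt.Println("after refresh?", p.expired(now.Add(time.Minute))) // false
}
```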
The client can always come back, and whenever it gets around to coming back, we're happy to re-plumb it to where it needs to be in the connection. And those are sort of the pieces we've been talking about here. So, you were asking where we are with this, Nikolai? Yes, I mean, I think it would probably be worth talking a little bit about how these things chain into one another, because it seems like we're moving all of them at the same time.

Yeah, effectively: if you move to the forwarder refactor, well, we have an extraordinarily complicated healing mechanism right now that uses the cross-connect APIs, so you need a way to heal that doesn't require the cross-connect API, and it turns out that the path approach appears to be both robust and simple, which is good. So to move the cross-connect into the forwarder, you need the path piece. And it turns out that all the mono repo problems we discussed become even worse as you try to do this refactor, because if it takes an hour and a half to run CI for every little thing, which is what it currently does, then you have a serious problem; we've been wanting to break up the mono repo anyway. And that's how these are interrelated. Does that make sense?

Yes, but what we're saying here is that at some point in time we're going to have more or less all of these merged against the main repo, or at least whatever is left of it in the end? Well, I guess the question is: do we want to continue having a large mono repo with long CI? Okay. So, the stuff that continues to go on in the mono repo while we're making the transition: that's still there, and it's functional, and I think that's actually very, very good while this new stuff is coming up. But my guess is that we will eventually get to a point where the commands are all coming out of their command repos, the integration testing is coming out of the integration testing repos, etc. So I guess that's where we're going to end up at the end of the day. Does that make sense? Yeah, and then the question is: which day? The end of which day? I know, I understand. A lot of people are typing as fast as they can. I'm hoping the command repos are going this week. And we're getting good unit test coverage coming up in the SDK stuff right now; we're starting to test some of the pieces, which is good, because that means when you land in a command, you're much more likely to land with a functional thing. Perfect. Cool.

Okay, so one question, perhaps I missed this. Based on how things are currently going, is it that you're either using the old stuff or the new stuff, or is there some form of transition compatibility? Like, maybe I write a new forwarder in the new SDK; is it that easy at this point to integrate with the current monolithic NSM repo that's there? No, because the monolithic repo is still using the cross-connect API and the healing that was built for the cross-connect API. Okay, that makes sense. Yeah, but it's not like we're saying stop everything, because quite frankly, if you want to do a new forwarder, and we've had a couple of cases of this go by already, you can learn a lot about the process poking at the monolithic repo. And we do have folks, and I see some of them have actually turned up now for the call, which is awesome,
who are looking, for example, at building the SR-IOV forwarder stuff, and that's goodness. And also migrating over and building a kernel forwarder. So that's basically where we're at. Part of the reason this came about was the realization that we have this thing that's working, we have ongoing work where people are doing things to learn how to do, for example, new forwarders, and there's no point in halting that learning process, because it becomes fairly straightforward to bring that back over here. So for example, the way the SDK is written: if I wanted to write, say, a new mechanism, I don't know, a mechanism for WireGuard, I'm going to have to have figured out a lot about how WireGuard works already, and I can do that either in the monolithic repo or in the new repo. But once I've figured out how WireGuard works, all I have to do is write an SDK chain element for WireGuard and drop it into the command repo, and suddenly I've got a forwarder that supports WireGuard. Does that make sense?

Yeah, and the reason I ask that particular question is to set up the next thing. One of the concerns people may have: there was quite a bit of work put into creating your forwarder in NSM, and then there's the work to get it migrated into the SDKs. People may be thinking of it as a similar quantity of effort, but in reality, as one of the people reviewing most of the PRs coming in, I can tell you it seems to be the exact opposite: people are having a very easy time actually implementing stuff in the new SDK. And I suspect that with the shift, like, once we get SR-IOV working with the monolithic repo, then getting it to work in the SDK means we'll already have done the hard work of getting it working, so we already have that advantage. And the second thing is that the new API is incredibly simple and very easy to test and keep modular. So I want to make sure that people's fears around this kind of thing are reduced. I know they're not going to go away until you see it all work, but I definitely feel confident with the current path. And I think as we get more people wrapped up in the SDK, we'll see a lot more momentum, just because of the simplicity of the APIs and getting them wired in. Sorry, am I talking over you? No, you are not. That was a great explanation. Thanks. Cool.

So the other thing I did want to point out: part of this is also that we've got people who want to build network service mesh into other platforms than just Kubernetes. And if we have these composable pieces that make it very, very easy to do, where you don't have to figure out 90% of everything in order to do it, that should, I'm hoping, make it much easier for people to build the pieces of things. So for example, one of the things I'm providing in the SDK is a thing that simply says: okay, I want to do an endpoint. You create a new endpoint, you pass its name, and you pass the piece that implements the thing that's actually the work your endpoint does. And all the machinery around timing out, all the machinery around authentication and authorization, all of those things are not things you have to think about.
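(A minimal sketch of that "new endpoint" idea. The names NewEndpoint and Handler are illustrative, not the actual SDK API; the point is the shape: the constructor wires in the boilerplate — the chain elements for auth, token expiry, and so on — and you supply only the element that does your service's actual work.)

```go
package main

import "fmt"

// Handler is an illustrative interface: the one piece you write yourself,
// the work your particular network service does.
type Handler interface {
	Handle(request string) (string, error)
}

// Endpoint wraps your handler with the machinery you don't have to think
// about. Here it's just a named wrapper, but in practice authentication,
// authorization, and token refresh/timeout would be chained in front.
type Endpoint struct {
	name    string
	handler Handler
}

// NewEndpoint is the hypothetical constructor: a name plus your handler.
func NewEndpoint(name string, h Handler) *Endpoint {
	return &Endpoint{name: name, handler: h}
}

func (e *Endpoint) Request(request string) (string, error) {
	// ...auth checks, path/token validation, and expiry bookkeeping would
	// run here, supplied by the SDK rather than by your code...
	return e.handler.Handle(request)
}

// myService is the only part specific to this network service.
type myService struct{}

func (myService) Handle(request string) (string, error) {
	return "handled: " + request, nil
}

func main() {
	ep := NewEndpoint("my-network-service", myService{})
	out, err := ep.Request("connect")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```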
You just have to think about the piece that is what your particular network service does. Anything else on this topic? Otherwise, I'm inclined to go back to the agenda; it was pretty much open after this. Cool. Do folks have questions, comments, opinions? All of which are welcome here. If not, we will yield back nine minutes of time, and thank you everyone for attending. Have a good day. Thank you. Thank you. Thank you. Have a nice day. Cheers.