So as always, let's start with agenda bashing. If there is something you would like the opportunity to talk about in the meeting, please add it to the agenda. Okay, so today. So we have our event December 10th through 13th, KubeCon Seattle. We have a couple of talks that have been accepted into KubeCon Seattle, so a lot of the work leading up to that is going to go into being prepared for what we want to show off there. There is a network service mesh demo that Ed is going to talk about that we also want to prepare. And simultaneously, we also want to see about getting some type of podcast and the blog set up. And we also have a submission to the FD.io Mini Summit that was put in by Tom Herbert.

And so an announcement, a very important announcement: we are having a meeting change. The new meeting slot will be every week on Tuesday from 8 to 9 a.m. Pacific time. So if you show up here on Friday, you'll have missed the meeting next week. And I will re-announce this again just in case we get someone new. So today is our last Friday meeting. The primary driver for this is to get people in from Europe. A lot of people in Europe are skipping out because of the very bad time for them. There are also some complaints from Israel. So... Yes, it is, my understanding is it's their weekend. It's even more aggressively bad for them than it is for Europe. I mean, it's one thing that we're asking the Europeans to basically come spend their Friday evenings with us. It's even worse that we're asking the Israelis to. So, cool.

Okay, so let's jump straight into the agenda. KubeCon demo brainstorming. So, you have the floor, Ed.

Awesome. So thank you for following the link. So there's a lot of conversation going on around KubeCon demos for network service mesh. And I'll sort of start this by saying, by the way, that we've got a bunch of different companies who've piped up who are interested in showing network service mesh demos in their booths. So if you represent a company that's going to have a booth at KubeCon and you want to show a network service mesh demo, we would love to make sure that happens. And so, you know, we wanted to make sure we got broad feedback from the community as to sort of what this demo would look like, because it takes effort to put together a demo and it's good to pull together toward one. And so somebody earlier this week basically said we should write down what we think we want to do for the demo. And so I took a first swag at drawing some pretty pictures for it, so we would have something to discuss. So don't consider this the "this is what we're doing"; consider it more of a conversation starter on the kinds of things that we would hope to be able to do, right?

So at the high level, we're sort of looking at a simple chaining demo, right? Where you've got some client pod that is going to consume a network service, and that gets cross connected remotely to some network service endpoint that then chains locally using memif to some other network service endpoint. Now it's important to note, I don't suspect we will have network service wiring for this. So this will be a little bit manual in how it gets put together in terms of, you know, the first NSC will actually be trying to connect to the second NSC. But I think it still would be kind of a cool demo at a high level because it sort of shows how these things can be chained together by network service mesh.
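(For illustration only: a minimal sketch of the two cross connects in the slide-one picture described above, a remote VXLAN leg from the client pod to the first NSE and a local memif leg between the two NSEs on node two. The type and names here are made up for the example and are not the project's actual API.)

```go
package main

import "fmt"

// CrossConnect is a hypothetical, illustrative type -- not the real NSM API.
// It just names the pieces in the slide-one picture.
type CrossConnect struct {
	Source      string // pod or NSE on the near side
	Destination string // pod or NSE on the far side
	Mechanism   string // "vxlan" for the remote leg, "memif" for the local leg
}

func main() {
	chain := []CrossConnect{
		{Source: "nsc-client (node1)", Destination: "nse-1 (node2)", Mechanism: "vxlan"},
		{Source: "nse-1 (node2)", Destination: "nse-2 (node2)", Mechanism: "memif"},
	}
	for _, xc := range chain {
		fmt.Printf("%s --[%s]--> %s\n", xc.Source, xc.Mechanism, xc.Destination)
	}
}
```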
And this sort of begs the question a little bit: what do we use for the network service endpoints here? And we've got a couple of options for that. One of them is, I know that the VNF and CNF guys are putting together some chains of VNFs. We could definitely do some of that, and I think that would be cool. But I wanted to sort of also point out another option that we would have in slide two, which would be something that's roughly the story we tell about Sarah's secure corporate internet gateway, where basically you get a client pod that gets connected to a firewall that then connects to a VPN gateway. And this is kind of a cool story; people seem to like the VPN gateway story. And so that was one idea that I wanted to throw out there. And then, you know, if we go on to the next slide, obviously we could make this more interesting by introducing some replicas. And then, as you sort of stretch further out, you get questions in slide four about things like: how do you want to visualize? Could we maybe get something where we could visualize topology? If we succeed in getting auto healing going, could we use that in order to show something like, oh, I kill off one of the NSE replicas and it comes back? Now, obviously this is all coming in layers, because we don't really know how much we're going to succeed in accomplishing in time for the KubeCon demo. But my experience has been that if you set up your goals in layers, then you eventually get to some piece in that stack where you don't succeed, and then you fall back to the one before and you win. So do other folks have other interesting ideas or comments? I mean, we're really trying to figure out stuff we can come together as a community to pull towards.

So I'll just chime in and reiterate that I really like the approach. I like the layered approach. These seem like reasonable things to strive towards, and I particularly like slide two because it ties directly into the narrative that a few of us have been giving at various events. So that one seems particularly relevant, just because it ties into the central narrative that we've been telling.

I would also suggest, Ed, that somehow you bring in Kubernetes networking to show where it resides in this. So it's not just NSM; there's also Kubernetes networking. Basically to show that if you like your Kubernetes networking, you keep your Kubernetes networking; you have them both running together. So I think there's two conditions. One is ships in the night, NSM plus Kubernetes networking working independently, which I think is a useful case. Then there's also: where does Kubernetes networking intersect with NSM? Because that's arguably even more useful. So that's just my point of view.

No, I totally get it. You're basically asking, what if I wanted to stick my VPN gateway before my Kubernetes networking? Yeah. Yeah, no, that would be very interesting to look at. Have you thought about sort of what additional things we'd have to do to be able to show that? Oh, have I thought about how to do it? Of course not, no. That's actually fair. So I mean, we can sort of noodle on that and see what other things might need to be done. I mean, the other thing I'd like to kind of do at the same time is, and folks, feel free to tell me if I'm right or wrong, my sense is we've got a lot of people in the community who would like to pick up a shovel and help, and they're trying to figure out exactly which shovel to pick up and where to go dig a hole.
And so my hope is that by figuring out sort of the demo, we can break it up into pieces that various people can work on, so we can work a lot of it together as a community in parallel. Does that make sense to folks? Am I reading the room right? Yes, probably. Okay, cool. So we've got one suggestion that we may want to look at: could we also do something that's a little more involved in interacting with the different entities that are working in this picture? And that's a very cool suggestion. Do other folks have other ideas or comments or other things?

Hey, yeah, a couple of things. Number one, the direct memif between, I guess, the pods on the nodes on the right: our job is to sort of reinforce the notion of cross connects to connect pods in a network service. Are we compromising that picture that people come away with? Let's talk about that. Lucina, could we go back to slide one? I think this is kind of the slide you're talking to, which is, we've got this direct memif, which is just a cross connect that gets set up between two network service endpoints. I'm not quite sure, could you maybe repeat your point? I didn't quite follow. Well, yeah, I guess visually, we dropped down to the data plane to cross connect, if you will, the client pod on the left in node one and the NSE pod on the right in node two. Is the memif sort of consistent with this notion of dropping down into the data plane, or did we just claim that the memif between the two NSEs on the right in node two is in fact a cross connect, if you will? Yeah, I mean, I would tend to say the memif in this case is actually a cross connect. Okay. Obviously, that has to get set up somehow, and there's an interesting set of work that's going on with data planes that I think hopefully we'll get to some of next week. Yeah, in my mind, a direct memif is just another kind of cross connect between things.

Okay, and then secondly, I totally agree with the notion of having this at least be present in a Kubernetes networking space, if we can, right? Because people are somewhat familiar with that idea, and this sort of does something different, and ships in the night: if you like your CNI, you can keep your CNI. But at the same time, if we want to set up network services and achieve all of what we're hoping to do here, then we can indicate that, hey, this is in a Kubernetes environment and you can start to take advantage of it. Cool, got it, got it, go ahead.

I mean, I see it from a little bit different angle, because one of the key points, at least as I saw during the SIG Networking meeting, is that we are orthogonal to the CNI. My concern is that if we somehow try to show NSM and the CNI together, it might send people the wrong message that we're trying to do something with the existing CNI. I think, at least in my opinion, we should be 100% clear that we're not changing anything on the CNI side. It's completely independent; it can do whatever it wants. No, I think that's also a good point. I mean, part of it may just be sort of stages of things.
I mean, when we're first showing people the stuff working, it may be best to show, yes, your CNI is still there, it still does its thing, and show the orthogonality at first, because I think people are going to require a little bit of comfort before showing potentially any kind of interaction with the Kubernetes networking stuff. So I totally get that point too.

Cool, so I do have a question also for the CNF guys, which is: if you look at this picture in slide one, right, other than the fact that I think you guys are looking for longer chains than just two, is this the kind of picture that you guys are wanting to have as well for your chains of CNFs? I think it is, it is one of them. We're looking at doing it in two different ways now. I think we have the direct memif connection similar to what you're showing on node two here, but we're also looking into doing this kind of snake testing where we always context switch through VPP between chains, but that's mainly so we can compare memif to vhost. Those direct connections, they're good, but it's kind of cheating when comparing to vhost. Right, right. I mean, in addition to being faster, we've also done something smarter than what vhost does. Yes, yes. You know, and I can understand that. So, okay, so I get that point, that effectively you have a situation where you'd want to be able to do both direct memif cross connects and also memifs cross connected through the VPP data plane. Yes. Okay, that's good to know. But other than that, you know, are these the kinds of cross connects that you guys are looking for, those being memif, VXLAN, and I suspect you guys aren't gonna use a kernel interface, but are those the kinds of things you guys are looking for? I guess so. Yeah, so right now we're not using VXLAN, we're not using kernel interfaces, but I guess as we progress and get things a little more complicated, we might at least need to look into VXLAN at some point. Okay. Yeah, it should be easy to swap out the kernel interface with memif, because we have to build it out for node two anyway. So I think the primitives will be there. It's just a matter of specifying the correct mechanism. Yeah, I was thinking of memif and vhost. I mean, if somebody wants to do vhost, that's great. I don't have any experience with that, but yeah, I mean, that's the nice thing.

There are a couple of things that I think we want to also capture here in the agenda: maybe something about Sergey's data plane work and something about Andre's work on the packet.net CI, because I think they somehow got filed toward the end of the agenda and we kind of skipped past them. But I think as the work that Sergey has done with the data plane lands, other people should be able to add mechanisms pretty easily to it. That's what I'm hoping for. And I know that Sergey merged another commit the other day that added some more proto definitions as well, and I think that will help. Yes, cool.

All right, so anything else that folks wanna chat about or suggest around the demo? I think this has been really, really helpful. And are there any folks on the call who are interested in sort of moving the ball forward on some of these items for the demo? Well, I'm difficult. Sorry, you go on. I didn't understand the last couple of words. Hello, are there people who are interested in... so part of getting to a demo is you break it up into smaller pieces that people can work on.
And I was just curious, who on the call might be interested in picking one of those pieces up? Well, Ed and others, I've got that 185 issue. And it's really like one half of it, really it's in slide one. So, I mean, I just wanna do something simple initially. Partially, it's to educate myself, because my understanding of Kubernetes isn't anywhere near as profound as others in this group. And so I wanna keep it simple just to make it easier to set up, and something that other people can maybe leverage. I don't know, I hope I can get even that done. But that's my goal. I recall you're working on a bridge NSE, which is goodness. Yeah, yes, and to me, that's not much different than what's here, because here we have VXLAN and separate nodes, but it's conceptually the same. Cool, excellent. So yeah, if anybody else wants to sort of pick up a shovel and help on some of these items for the demo, please feel free to reach out. You know, I'm desperately trying to break these out into issues as I go, but as you probably noticed, I'm really terrible at that.

Yeah, something I can do is set up the initial deployment and pod structures and set up the initial entry points, to make it easy for people to start working on those independently. So basically bring it up on two nodes, set it up so that one pod lands on one node and the other two pods land on the other node, make the entry points clear, basically set up some structure for people to start implementing things. And that will lead into the next part, which is how do we land VPP on that particular system? So we can start working towards getting this set up in the CI system as well, which there's already been a lot of work towards. So it would just be a matter of leveraging some of that work that's been done by the other members of the community.

VPP will come within the data plane, so it's bundled. Basically all you need is just to start a data plane pod, and VPP will start as a sidecar container and serve pretty much all the requests from the local NSM on the same node. The only thing is that your host must support huge pages and the kubelet should be able to discover them, because that's the requirement. Is a sidecar the best place to land it? I mean, it's okay for the moment, but is it the best place to land it? Yeah. It goes along with the microservice architecture. So we have to be a little bit careful about the use of the word sidecar, because there are a bunch of different things that people mean when they say sidecar, at varying degrees of precision. So let me sort of be really, really straightforward. I think what Sergey is talking about is there is a pod that contains the VPP data plane. It also contains the necessary agent machinery so that it can expose the data plane interface that the network service manager can then talk to. Not perfect. And the reason I say this is I used to get myself wrapped around the flagpole, because when I said sidecar, I meant very specifically another container running in the same pod, which is kind of what really does happen here, but I've discovered that lots of people don't use the term sidecar that way. They use it more broadly, and that's perfectly valid. So I wanted to disambiguate, because it felt a little like we might be talking a tiny bit past each other. Yeah, no, that's perfect.
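(A minimal sketch of the two-container data plane pod layout just described: VPP in one container and the agent in a second container of the same pod, with a huge pages resource request so the kubelet only places it on nodes that advertise huge pages. The image names and huge pages sizes are placeholders, not the project's actual manifests.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildDataplanePod sketches the pod layout discussed above: one container
// running VPP and one container running the agent that exposes the data
// plane interface to the local network service manager.
func buildDataplanePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nsm-vpp-dataplane"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "vpp",
					Image: "example/vpp:latest", // placeholder image
					Resources: corev1.ResourceRequirements{
						// VPP needs huge pages; the kubelet will only schedule
						// this onto nodes that discover and advertise them.
						Limits: corev1.ResourceList{
							corev1.ResourceName("hugepages-2Mi"): resource.MustParse("512Mi"),
							corev1.ResourceMemory:                resource.MustParse("512Mi"),
						},
					},
				},
				{
					Name:  "dataplane-agent",
					Image: "example/nsm-vpp-agent:latest", // placeholder image
				},
			},
		},
	}
}

func main() {
	pod := buildDataplanePod()
	fmt.Println("pod:", pod.Name, "containers:", len(pod.Spec.Containers))
}
```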
So yeah, we'll make some of that happen then, but I'll start setting up the proper deployments in Kubernetes and so on in order to make it easier for people to join in, because I get the feeling that for a lot of people, even just landing your pods in Kubernetes can be fun when you don't have that experience. Okay, cool, cool.

Hey, Ed, maybe I can take it up a layer or two, but I'd be happy to help out with the sort of demo flow and how we might choreograph this thing, and maybe help design some sort of simple GUI if we want to have that. No, a simple GUI would be a huge help, because one of the things that would be lovely to do is just to be able to let people visualize the topology of network services, the connections between network services. That would be kind of epically awesome. And the other thing that was kind of running through my head is, if we get the auto healing working, which is definitely a stretch at this point, but if we got it working, if you would let people kill off network service endpoints in the chain, sort of video game style, so they could see the auto healing working, that would be unbelievably epic. You kill the network service endpoint and the connections get rerouted to other replicas; that would be kind of epic. If you want to go with an easy integration, there's a gopher bash game, and the Kubernetes team actually has a physical arcade game where you bash the gopher and it kills the containers that represent the gopher. So if we reached out to them and showed them we had this amazing demo that did that, maybe they'd be kind enough to lend us some hardware. But at least the software to run it is open source. There's a virtual version of it that you can point and click. Awesome. All right. Cool.

Anything else on the demo before we move on? I know we've got a ton of other stuff to cover here. I'll just make one quick mention, I won't take much time, but I'm actually working on that second one, the VPN side. I'm working on the integration for that. Okay, cool. So you're working on building the VPN gateway, I see. Exactly. Oh, very sexy. Okay, so are there any last comments on this, or should we move on to the data plane API documentation?

Okay, data plane API documentation. So it has me listed as the lead on this, but I think... That's actually probably me, and I apologize, I've not made any great progress on that this week. No worries. Should we punt it till next week then? Yep. Okay, punt it. Okay, so comments in the abstract section. That also looks like it's the same? I think you should probably punt it along in the same way. You could probably even just incorporate the two together. Okay. Those are the same NSM API document in different sections, right? Correct. Correct.

Okay, so X Factor CNF updates. So I have mostly moved over the code from the Ruby based server to Hugo, and I've added two links to the agenda. One of them is the link to the GitHub repo where it's currently living, and the second one is a staging environment where you can see the result. Please pardon the theme. I just picked a random theme that sort of looked half good, but there are some problems with the structure of the theme that I'm not particularly happy with, which I'll either need to change or replace later on, especially around the wording. The theme was written to perform by making your text more sparse and longer and increasing the margin size.
So it looks like you wrote a lot more than you really did, which gives us the wall of text effect when you read the introduction. So I need to make some changes on that. But the thing that I would love some help on is in two areas. One is, there are some new sections that were added in at the bottom, which are the parts that relate to the CNF itself. So the first question is, does this look good? Does it look sufficient? Read them, comment on them, please. If you think there should be any changes, pull requests are more than welcome, and we can start working on this documentation to get it up to speed. The second one is, I went through a really cursory review of the first 12 and I made some modifications on the first 12 to just talk about the X Factor CNF rather than 12 Factor apps, and to remove things that were not relevant, such as the HTTP port binding for all communication. But something that would be really, really helpful, if someone has the time to do so, would be to read through each of those and modify them so that they're using technologies and databases and so on that are much more relevant to the CNF community. So for example, I doubt that anyone in the CNF community is gonna be making a Ruby based CNF. More likely we'll see C, Go, maybe some Rust or so on. And so these things need to be refactored to be more relevant to the CNF community. And the third thing is that there may be sections or things that I did not talk about, additional rules and so on, that we should look at. And so if there's something that is missing that you strongly think should be a part of it, let's start the conversation on that, so we can talk about, like, is it appropriate? What message should we put forward? And so on. So in a nutshell, that gives people time to understand it. And feel free to start making suggestions and requests within the pull requests. Are there any questions on this? I think it's a good start. Thank you. Yeah, and if anyone's good at HTML and CSS and so on, I would also love some help with the theme, as that's not my strong point. Like, I can do it, but it'll take me a little bit more time than it would someone who is experienced with it. So there's another area you can help out with. Okay, if there's no questions on this, then let's move forward, because we still have a bit to talk about.

So last week we punted Romkey's Kubernetes network policy discussion. Is Romkey here? Doesn't look like it. So maybe we'll punt it again. Okay, so we have two updates. The first one was Sergey on the update to the data plane work. So Sergey, you have the floor.

All right, so I pushed a PR which introduces the VPP-based data plane. At this point, it has fairly limited functionality, in the sense that it interconnects just local ports. Along with that PR, there were some changes done to the data plane API, but they were done in agreement with Ed, so it doesn't contradict what Ed is working on, like the global data plane API. So it's safe to use if you decide to develop anything in that area. It's a little bit more specific. Sergey apparently has much better taste in naming things than I do, and things with deeply counterintuitive names like mechanism one and mechanism two became local mechanism, or local source and destination. So overall it's goodness. All right, yeah, thanks. And as I mentioned earlier, the model is the following. So the data plane pod, it's kind of a controller; there are two containers.
One container runs the latest VPP code with a very default configuration; there are not too many changes. And then there's another container that runs the agent. The agent is some sort of interface between the NSM and VPP. It runs the gRPC server, which serves that API I mentioned earlier. So there are two requests now, a connect request and a delete request. And on the connect request, we are passing a map of strings, which are basically parameters for that connection. Currently there are several parameters, like the namespace and process ID of the container that is requesting the connection. And we can also pass the IP addresses, which VPP will apply to the cross connect between the two containers. It's working perfectly in my test environment. But in CI, it won't work right away, because as I said, it requires the huge pages configuration; without huge pages, VPP will not work. So once that's done, hopefully soon, on packet.net, then we'll be able to do the CI and test it there. I do have a question for the VNF/CNF guys, which is, I know you were using your cross cloud CI scripts. Do they enable the huge pages stuff, or are we shortly going to have something that does? Currently they do not. Hopefully that's something we can roll back into the main cross cloud CI in the next two weeks, based on what we're currently doing; we're working on that right now. Okay, cool. All right, that's it for me. If you have questions, please ask. So that's actually awesome work, and it provides a good framework as people want to add additional mechanisms over the next week or so. The one that I think people are most interested in right now is the memif mechanism next, which is going to be awesome. Cool. Okay. Yeah, I'm definitely interested in the memif, so pretty happy to see that happening fast.
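(For reference, a rough Go-side sketch of the agent API Sergey describes above: a connect request carrying a string-keyed parameter map and a delete request, served over gRPC. The interface, field names, and parameter keys here are assumptions for illustration, not the project's actual proto definitions.)

```go
package main

import (
	"context"
	"fmt"
)

// ConnectRequest is a hypothetical shape for the connect call: the connection
// is described as a map of string parameters the data plane needs in order to
// build the cross connect between two containers.
type ConnectRequest struct {
	Parameters map[string]string
}

// DeleteRequest is a hypothetical shape for tearing a connection back down.
type DeleteRequest struct {
	ConnectionID string
}

// DataplaneAgent is an illustrative stand-in for the gRPC service the agent
// container exposes to the local network service manager.
type DataplaneAgent interface {
	Connect(ctx context.Context, req *ConnectRequest) (connectionID string, err error)
	Delete(ctx context.Context, req *DeleteRequest) error
}

func main() {
	req := &ConnectRequest{Parameters: map[string]string{
		"netns":  "/proc/12345/ns/net", // namespace of the requesting container (illustrative key)
		"pid":    "12345",              // process ID of the requesting container (illustrative key)
		"src_ip": "10.20.1.1/30",       // addresses VPP would apply to the cross connect
		"dst_ip": "10.20.1.2/30",
	}}
	fmt.Println("would send connect request with", len(req.Parameters), "parameters")
}
```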
Okay, so Andre, do you have an update on the packet.net CI? Yes. I made a PR with code which deploys a Kubernetes cluster on packet.net, thanks to the cross cloud project, and runs all the integration tests on the servers deployed in packet.net. And right now we are finishing the CircleCI configuration to run these in CircleCI continuous integration. I think this is a good thing for two reasons. One is it keeps the CircleCI stuff sort of orthogonal to the current stuff we're doing with Travis, which I think is good. But the second one is, I think CircleCI is much better at parallel jobs, basically fanning out jobs. And one of the things I think we're going to find is that because the cross cloud CI stuff scales across a bunch of public clouds, as time goes on we'll get CI not only on packet.net but also on AWS and GCP and Azure and all these other places. And so for being able to easily parallelize, I think CircleCI is probably going to be better. It's also easier, I think, to pay them for better features and support over time as well, so there's an easier integration path in that scenario. That is definitely huge. And they have some interesting debug capability as well. For example, you can kind of hold the nodes if you want to log into them via SSH and debug some things much more easily than with Travis. Oh, cool. Awesome. So thank you, Andre. I'm looking forward to that landing. I know you're so close; you and Kyle are just sorting out some credential issues. Yeah, thank you. Cool.

Yeah, and then it sounds like we do want to figure out the huge pages piece. Sergey, could you reach out to Andre about what has to happen to enable the huge pages stuff? Yeah, definitely. It's basically just a simple configuration in the boot loader on the host OS, and that's it. The rest should be automatic as long as you're using a recent Kubernetes version, so it doesn't require any feature gate flags or anything. My recollection, Sergey, is that the huge pages stuff is enabled on packet by default in terms of the boot loader pieces. So I think we're probably okay for the boot loader. Now, what was the second piece? Oh, nothing. I mean, if you do cat /proc/meminfo and you see the huge pages, then the kubelet will be able to discover them and serve them. Yeah. Okay, so I actually have done that before on packet. So unless that is something that varies from node type to node type, we should be good. So that's goodness. If you can type the command into the agenda, I'll go onto one of the running systems and type it out and see, because they have a system they put on hold while they were debugging an issue for us, and I can just log into that and check to see if huge pages is enabled. Yeah, sure. Yeah, I'll do that. Cool, thank you. And this will also be something good that we can document once we get better documentation for data planes, as we start to build that out. This is something that we should probably make sure we add in. I'm not sure what the right way to track that is though, considering we don't have a place to land it yet. Maybe we'll make a GitHub issue for it. Anyways, other than that, are there any other questions on the updates to the packet.net CI? Great, thank you very much, Andre.
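(A tiny sketch of the huge pages check mentioned above, the cat /proc/meminfo step, done from Go: if HugePages_Total is non-zero, the kubelet should be able to discover and advertise huge pages on that node. This is just an illustrative helper, not part of the project.)

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Prints the huge pages lines from /proc/meminfo, mirroring the manual
// `cat /proc/meminfo` check discussed in the meeting.
func main() {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /proc/meminfo:", err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "HugePages_Total") || strings.HasPrefix(line, "Hugepagesize") {
			fmt.Println(line)
		}
	}
}
```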
Let's see, for the action items, I haven't updated the agenda board this week. I'm gonna jump back into grooming and maintaining the project board again, since it's proved valuable in the past. So an action item for me is gonna be getting the agenda board back on track, and we'll start using it again starting Tuesday. Anyways, before we finish up, is there anything else that anyone would like to discuss before we yield back time? If anything, there is, I added a quick entry at the bottom of the list, I guess, below the announcement as well. And I just had a quick talk with Machik as well. So I guess he wanted to keep this as a weekly occurrence, with just the VNF and CNF demo comparison update. Okay, go for it.

Yeah, well, it's quite quick, I guess. We're working on getting the single chain with multiple network functions up and running in three different configurations. So we have the VNF that connects and context switches between VPP and the network function, for each network function in a chain. And for CNFs, we have both the context switch that we discussed a little while ago and the directly connected memif interfaces. And I think right now we're just finishing up some modifications to the scripts to fully support them with the configuration we need. Then we're going to run some benchmarks, and then we've just started doing some templating, or getting the templating started, to actually make this a little easier to run for anyone without having to go through all of our quickly put together bash scripts. Right, very good directions, Michael, and it's very appreciated that you're producing things that don't require reading through them. Yeah. Michael, I have a quick question. You mentioned that you have memif plugged into the VPP, right? Yes.

Is it done in the bash scripting, or is it done in, let's say, Go code? It's bash scripts right now, and what we do is we use the bash scripts to generate VPP configurations that we just load. So basically you're using the binary API to talk to the VPP, right? Yes, yes. Would it be possible to share those scripts? Because basically the sequence in the Go code would be very, very similar. So instead of scratching my head and trying to figure out how it should be done, I could probably use that as an option. Absolutely. Oh, thank you. Everything is in the CNCF CNFs repo. Most of it is under a comparisons folder. There are three main subfolders. I don't know if we've renamed the new folder, Mike, but it's been CNF vEdge throughput, and that's where the most recent work is; there are actually three comparison sets that we've done in there, plus all of the bash scripts and everything else. You can see some other top levels that we're adding things to. But if you go in there, that CNF vEdge throughput at the bottom, that's gonna be the most relevant stuff, including the VPP vSwitch. So that's the host level configuration, which will have a lot of information on the memif and other setup at the host level. And then inside of the vEdge, which is maybe the wrong name, but it's the network function... can you go up one folder? Yeah, the vEdge, we have the CNF and VNF, so that's gonna have the VM side and the container setup for how we connect. And those memif interfaces are mounted into the container right now. Great, thank you. That's awesome. Yeah, I think if you have any questions, just send us an email. I think we've gone through enough weird bugs and whatnot now that we should be able to answer. Okay, nice. So is there any more on this topic, or should we move on? I think we've gone through it all now. Yeah.

Okay, in that case, was there anything else that anyone would like to discuss? So again, before I close it out, just a reminder: show up next Tuesday, not next Friday, same time, just a different day of the week. So the next meeting will be on the 16th, and I will contact Prem to get it all changed on the calendar. And Ed, one thing that we should do as well is make sure there's no issue with Zoom, if Zoom has a specific time scheduled that the meeting comes up and down on. I will go and inquire. I suspect probably not, given that we have our own URL, but I'll go ask. Yeah, I suspect not as well. I just prefer not to have a surprise. No, I'm with you 100% on that. We probably will have an issue with scheduling the meeting again once we get to the end of US daylight saving time. That's a good point. That confuses everything. I don't know whether we want to consider pegging it at UTC now, at this time, so everybody gets used to it. I find, generally speaking, that my experience has been that meetings pegged to UTC... well, basically meetings pegged to UTC are not particularly better or worse than meetings pegged to any particular time zone. The real trick is to peg to a time zone, right? Because then you can enter it into your calendar in a particular time zone and it actually shows up at the right time in that time zone. Well, but daylight saving time confuses things. Time zones actually incorporate daylight saving time into themselves, right? So if... Good, yeah. When daylight saving time happens, it's still at the same time in Pacific time.
If you peg it at UTC and daylight saving time happens, it's still at the same time UTC, but now it's at a different time in Pacific time. So effectively, you can pick a time zone and pick your poison. Okay, so, all right, so that's fine. So it's very clear then that this meeting time is pegged to Pacific time, whatever that is at that time of the year. Okay, that's fine, as long as we're clear and that's what it'll be. Totally, clarity is excellent there. Thank you. Well, we'll make sure it's pegged to a specific time zone as well and publicize it for people who are managing their own calendars. Other than that, thank you everyone for attending, and we will see you next week on Tuesday. Okay, right, goodbye, thank you. Take care, goodbye. Thank you, goodbye.