Yeah, it's not yet. Probably not. So I've added a few things onto the agenda. In particular, I have some super happy news: we are scheduled for next Tuesday, April 9th, for our review with the CNCF TOC to become a CNCF project. Yep. Cool. So if I recall, that's at the same time as this meeting. So I recommend that we cancel this meeting for the occasion. I would tend to concur. I've actually got an agenda item to just that point.

So yeah, would someone be willing to share the agenda? I'm on a much smaller than usual monitor. OK, I guess I will share. One second. I'm on a cell phone right now. Yeah, that's worse. Worse than a cell phone? No, no, no — a cell phone is worse than a small monitor, than the laptop monitors. Yeah, I was going to ask whether you were on. Yeah, cool. By the way, if folks can add themselves to the attendees list on the agenda — I'll put the link into the chat, there we go. Cool. So if folks can go ahead and add themselves; I know we have a bunch more folks on the call than have added themselves so far. Awesome.

So do you want to get going then, Frederick? Sure, let's get started then. OK, so first, events. Service Mesh Day is now done, so the next event is today. I believe the event starts at around 1, and the talk starts at around 2:30, so we're going to have around 90 minutes to go over a variety of Network Service Mesh topics at the Intel Out of the Box Network Developers meetup. If you are in town and you have time, feel free to stop by.

Before we go ahead and clip Service Mesh Day out of the agenda, how did that presentation go? That went remarkably well. I have a number of people who I've been following up with, and I'm going to start funneling them towards the community, so I'm pretty happy with the overall result. One of the unfortunate things was they were running out of time, so the amount of time that I had available to talk about Network Service Mesh was cut almost in half.
So I wasn't too happy about that. But I basically gave the rundown as to what we're doing, and afterwards got swamped by a number of people who recognized that the L4 through L7 use cases don't cover the L2 or L3 ones, and that L2 and L3 are just as important for solving certain use cases that they have. So I'll start funneling them towards this so that we can find a way to — and the most important part is that we work out how to build alignment with groups like Envoy and so on. If we build that alignment, then I think we'll have an easy way to move the larger community in the direction that we want for these specific use cases. Cool. Cool.

So let's see. We have ONS starting tomorrow for three days, and we have three talks that are going to be given by Network Service Mesh people in the community. I think we should put the times on this as well; that will probably help. That is probably a good idea. The other thing I will point out is that if we go to the events page on the Network Service Mesh site, I think we do have the times. So if we go take a look at — oh, perfect. That's even better. Yep, they do have the times. Are we missing a talk? Possibly. Yeah, because we have three and there's only two there. Yes — if someone could please push a patch to fix that.

And finally, according to Prem — and I don't know if Prem is on or not; doesn't look like he is — there's supposed to be a demo of Network Service Mesh at, oh yeah, it's listed there next, the Elephant demo booth. So at the Elephant booth, you should be able to see a demo of Network Service Mesh as well. Cool. Awesome. I need to follow up with them today to make sure that all the issues they've been having have been resolved so we can get it out the door.

OK, and then the next event that we have is April 17 to 19: Container World 2019, with Prem giving a talk. And we have KubeCon EU coming up.
And we actually now have — let's see — do we have talks in that already, Ed? Or what's going on? We put in two talks; I don't know if they've hit the schedule yet. As soon as they hit the schedule, I'll add them to the list. But we have an intro talk, and then we have a Building NSM Solutions talk, which will sort of walk through how easy it is to build Network Service Endpoints. So there will be two talks that we should be a part of.

We also have a co-located FD.io mini-summit, and the call for papers closes this Friday. So if you want to put something into the FD.io mini-summit, this Friday is the deadline. And please do — it would be great to have some talks from Network Service Mesh people. Bonus points if they're not given by me. Yeah, at the last FD.io event I really enjoyed what came out of it as well. We had people from the CNCF who presented the CNF Testbed, and we had a lot of really great topics. So it's worth going to if you can come in early.

Let's see, we also have ONS Europe coming up in Antwerp. The call for papers is currently open and closes June 16th, so we'll see what's going to happen with that. We have MEF 2019 in Los Angeles, and we have KubeCon North America at the same time. I believe there are some people from the community who are going to go to MEF 2019 and are trying to give a talk, so we'll make sure that they're well prepared. Also, the call for papers for KubeCon North America opens up on May 6th.

And with that, that's the main agenda. So we have the CNCF proposal next week, April 9th. Do not come here if you would like to listen in or participate — go to the CNCF TOC call. And we should make sure we have a link there that we can point people to, and put it prominently at the top so people know where to go. As soon as we finish this meeting, I think that's next up for putting on the agenda.
I just didn't want to confuse people by putting it there yet. Perfect. And one thing I would point out is that we have draft slides to present; I have a link to those. If folks could try and get me some feedback on those in the next day or so, that would be super, super helpful. I'm incorporating a little bit of feedback already, but we're probably going to get those finalized and put into the actual deck sometime this week. It's basically a little bit of song and dance — a very, very abbreviated version of Sarah's story and that kind of stuff — and then talking a bit about the community, et cetera. So feedback on those would be super, super welcome. Well, I will definitely take a look at it as well. Cool.

So in other happy news, we talked briefly about NSM in various Kubernetes environments, right? Particularly public clouds. We've been looking at GKE, AKS and EKS. If folks know of other public clouds we should be covering, that would be super helpful. I know somebody has mentioned maybe the Alibaba public cloud. Is that something you might be able to help us with, Sean, in figuring out how to plug into? Shai, I think you're muted. Yep, say it again, please. I know that Alibaba is a big public cloud provider in Asia, in China. Is that something you could help us get plugged into? Because I'd like to be able to also run there. Sure, definitely. I can try to find some information about all that, and see whether they provide free access to those kinds of services.

Well, the good news is that once we become a CNCF project, we can get some resources from CNCF to pay for some cloud time. And it's entirely possible — I know that most of the public clouds also donate substantial cloud time to CNCF for use by CNCF projects — that Alibaba may do something similar.
I'm just — I literally don't even know how to start engaging with running stuff there. So if you could help us figure that out and maybe help get some of the stuff running, that would be great. Sure, sure, definitely. And that way we also get information about public clouds in China. Sure. Yep.

One quick thing I just want to mention: I'm told that as of this morning, the guys have actually gotten stuff working on GKE, AKS and EKS, so we should have PRs shortly to fix the last few niggling problems there. And then, having done that, hopefully we're going to get some CI running on those very shortly so that we can run our CI across those public clouds as well. Okay, that's fantastic. Yes. Turns out the underlying problem was deeply, deeply, deeply embarrassing: I had screwed up the handling of routes, and that was what was causing the problem. Ah, these things happen. They do, they do. This is why we're building the RetroSmash. Yeah.

Well, it turned out to be a little bit of a chimera of a problem, because normally if you have screwed up the routes, when you do a trace in VPP you get to the IP lookup node, you see the error drop, and you look at that and go, ha, I have a route problem. But as it turns out, the VXLAN encap node has an optimization where it does the route lookup itself. So we were hitting that encap node and then getting dropped, and it's like, what happened? This is not where you expect to hit a routing issue. But it was.

Sorry Ed, I'm just curious — are there any requirements for a public cloud to run this? What kinds of things do they need to support? So we're trying to get to the point where it will always work. Think of it in layers, right? We're trying to write it in a general way where it will always work. Now, as you probably well know, if you want to be high performance you may have to do more, right? But above all we wanted to make sure it always, always works.
So for example, one of the things we had to fix for the GKE case: the fastest way to get a kernel interface into VPP is with /dev/vhost-net. And it just so happens that the normal out-of-the-box Linux that GKE is running on doesn't have /dev/vhost-net. Now, there's no reason it should, right? So it's not like that's a problem on their end at all. What we had to do was check for its presence, and if it wasn't there, fall back to the second-fastest way of doing it. So my guess is that there's a really good chance it will just work on Alibaba's managed Kubernetes offering out of the box. But if it doesn't, I expect the kinds of things we'll need to solve will be those little issues that you run into when you go to a new environment. So the goal is to try NSM and get NSM working out of the box on Alibaba's Kubernetes offering. Okay, gotcha.

Another one that we may want to take a look at: if you look at the largest ones, you have AWS, Azure, Google, Alibaba, et cetera. IBM is pretty huge as well, and they do have a container offering — what they call IBM Cloud Kubernetes Service. So it may make sense to reach out to them and see if we could potentially do an integration on that as well. That's a super good idea. I will see if I can track down Jason Hunt while I'm here at ONS; he's from IBM and he's their TAC rep to LFN. He might be a good place to start pulling that thread. Yeah, I think that's a great idea. And I think with those, that will wrap up the largest ones.

Yep. Does anybody know anybody at Oracle? I know a guy at Oracle who's part of the Java spec work; I can reach out to him and see if he remembers me and can connect us to the cloud guys. We'll see. But yeah, the basic goal is we want to make sure that we just work out of the box on any Kubernetes you run us on, particularly on the public clouds. So, cool. Awesome — that's all been super, super good news.
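The check-and-fall-back approach described above can be sketched roughly like this (a hypothetical sketch only — the mechanism names and logic are illustrative, not NSM's actual implementation):

```shell
# Hypothetical sketch of the capability check discussed above.
# /dev/vhost-net gives the fastest path for wiring a kernel interface
# into VPP; hosts like GKE's default image don't ship it, so fall back
# to a slower mechanism that works everywhere.
if [ -e /dev/vhost-net ]; then
    MECHANISM="vhost-net"   # fastest path: vhost-net backed tap
else
    MECHANISM="tapv2"       # illustrative fallback name: universal, slower
fi
echo "interface mechanism: ${MECHANISM}"
```

The same pattern generalizes to other environment-specific gaps: probe for the fast path, degrade gracefully, and never fail outright just because an optimization is unavailable.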
All right, so upcoming release dates. Nikolai provided these — we had asked about them last week. We previously agreed to them, but it's good to get release dates front and center in people's minds frequently. April 23rd is when we plan on pulling the NSM v0.1 branch; by that point we should have the features we want for our 0.1 release in place. The designated release date is April 30th, and then we do our 0.1.1 release on May 14th, so we have it in place for KubeCon. Essentially, think of it as: by the 4/23 date, we should have all the features in. By the release date, we should have really beaten on it — fixed bugs, increased testing, et cetera. And then on the 5/14 date, we take any pending fixes from the actual release, incorporate those, and continue to add additional testing. So from 4/23 until 5/14 we'll probably have quite a focus on testing as we go. But please note that it all gets pulled onto a branch, so master will continue to be open continuously for new features. Cool. Anything on the dates? I think the dates look reasonable at this point. Yep, okay.

Actually, I see what they're doing: they're creating a v0.1 branch and then a v0.1.0. I'm not sure about the v0.1 itself — what's the difference between v0.1 and v0.1.0? The branch versus the tag. I see — yeah, the tag makes sense. That makes sense. Yeah, it's a branch-versus-tag thing. Tagging is important. And then we get to go through the joy — and I do mean joy — of figuring out what our release process looks like in the course of this. Having written the tooling for release processes across multiple communities, that is always interesting the first time. So, cool. We should make sure this all gets documented so that people coming on understand how it works. Well, in principle, we want the release to be automated. Automated releases tend to work; unautomated releases tend to be an unending bag of pain.
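The branch-versus-tag distinction can be shown with plain git (the repository and version names here are illustrative, not the project's actual release commands):

```shell
# Illustrative only: a release *branch* (v0.1) keeps moving as fixes
# land on it, while a release *tag* (v0.1.0) is a frozen pointer to a
# single commit. Master stays open for new features throughout.
git init -q release-demo
cd release-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "feature freeze for 0.1"
git branch v0.1         # release branch: bug fixes get collected here
git tag v0.1.0 v0.1     # release tag: immutable marker for the 0.1.0 release
```

After this, fixes merged to the v0.1 branch move it forward, while v0.1.0 continues to identify exactly what was shipped — which is also what makes tags the natural anchor for an automated release pipeline.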
Yeah, I agree entirely with that. Cool. All right. Nikolai has set up a Doodle for the next release name. Yes, so we have a Doodle poll for names, and it looks like Andromeda is currently in the lead, which is actually what I would choose as well. There was a big move towards doing this as constellations; I think constellations are a great choice for release names. Do folks have opinions? Are we okay with Andromeda? Do folks want to speak on behalf of other names? Ariadne. Ha ha ha ha ha. True, true. Is Ariadne a constellation though? I don't know. It could be. Hang on, let me do a quick lookup. Come on Google, you can do better than this. There is something called Corona Borealis, which is supposed to be the crown of Ariadne, I think. True. But that's the wrong letter. It's the wrong letter. So Aquila, I think, was the next one, but it looks like we're going to go with Andromeda. Is everyone fine with Andromeda? I voted for another one, but I also like Andromeda. Let me look up the mythology behind Andromeda really quickly. Okay, cool. So fine, all right — I think we're on to Andromeda then.

Awesome, so going back to the agenda, that's cool. Do folks have anything else for the agenda today before we conclude? I know it's been a little bit of a light meeting because everybody is traveling this week. By the way, I wanted to mention — has anyone else observed some instability in the project lately? I see some, yeah, not very consistent behavior in the integration tests, and I was wondering if it's only on my side. Let's talk about that, because I know that Matthew Rohan hit something yesterday morning when he was trying to run Vagrant that may have been the same thing. Yeah, exactly, yeah. With memory limits? Andrei, do you know a little bit more about that? I know you were looking at — it's probably not the memory; it could be related to the CPU limits.
In one of the previous commits, we added default CPU limits for the dataplane only. So I'm not sure, but someone experienced issues with starting nsmd because of this. Probably we need to remove these CPU limits and add them only in cases where a cluster really needs them. Okay, so maybe the CPU limits are tuned in a way that's a little bit rough for environments like Vagrant. Yeah.

I know there's an issue Matthew had opened. Radoslav, could you take a look at that? If you're seeing the same thing that he was seeing, could you chime in there, and if you're seeing a different thing, could you speak up? Because we definitely want to stamp out any instability and get some testing in place to prevent it. It would be good if we could capture that, because if you're seeing instability, even if it's just you, it's probably something in your environment that somebody else is going to have in their environment. So if you're seeing instability, let's definitely get that chased out. Okay, thanks. Because basically I have tried the integration tests, even the ICMP example, and they're not consistent in their behavior — one run may succeed, but another may fail.

So to be clear: when you say CPU limits, is the pod refusing to start, or is it starting and then failing afterwards? Is this for me? Yeah. I haven't played with the CPU limits on my side. Well, even without changing the CPU limits, is the pod just refusing to start? It crashes a couple of times, then it's in a running state. But, for example, the ICMP checks are not succeeding. Okay — so which pod is crashing? I have to check; I believe I have deleted the environment. If you can gather all the information — I've seen your issues, you write excellent bug reports.
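For context on the kind of setting being discussed, a pod-spec fragment along these lines (hypothetical — the container name and values are illustrative, not the actual NSM manifest) is where such defaults would live:

```yaml
# Hypothetical fragment, not the actual NSM deployment. A hard CPU
# limit sized for a real cluster can throttle the dataplane on a small
# Vagrant VM, so slow startups fail health checks and crash-loop.
containers:
  - name: nsm-vpp-dataplane
    resources:
      requests:
        cpu: 200m     # scheduling hint only
      limits:
        cpu: 500m     # hard cap: enforced by CPU throttling
```

Making the limits opt-in per environment, as suggested above, keeps constrained local setups working while still allowing caps where clusters need them.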
So if you can gather all that up and either add it to the issue Matthew opened, if it looks like the same thing, or open a new one if it doesn't, we can go ahead and chase it down together. Because again, we want to make sure that we catch any of that and get it beaten out — part of the whole value proposition is that this runs everywhere, so we want to make sure it actually does run pretty universally. Okay. Thanks. Anything else that folks want to bring up before we conclude? All right. Thank you guys. I will see you at the TOC call next week — I'm super excited about that. And hopefully by the next time we meet as a community, we will be a CNCF project. Talk to you guys later. Yeah. Bye-bye. That's great to hear. Bye. Cheers.