missing, but that should be okay. Okay, great. So welcome to today's Network Service Mesh meeting. As always, we start with agenda bashing, so if you see anything that's not on the list, please add it, and also please add yourself to the attendees list. Okay, so events: we have KubeCon on December 10th, just a month away, running from the 10th to the 13th. I believe the 10th is the day before the main conference, and there are some mini summits going on then. We have an FD.io mini summit where a couple of things are going to be presented, and there's also an open source commons event where we may end up giving a talk as well. We have two talks at the main conference, an intro and a deep dive, so feel free to join in and listen, or get involved and help if you'd like. We're currently working on an NSM demo for KubeCon — actually a few demos. One of them is getting things set up for the VNF/CNF comparison, which we have more to discuss later in the agenda. We're also working on a standalone Network Service Mesh demo so that we can show it off during the presentations. We're also looking for people who'd like to do some form of recorded podcast, join in on various other things, or write blog posts. Any help you can offer, either something you want to do yourself or helping others, would be absolutely fantastic. We also have the FD.io mini summit listed with what I think is now two sessions. Hang on a second, I've lost the mute button. There we go. All right, so yes, it looks like two sessions: the one you've got listed there from our friends at Oreo Networks, and then I think Tom Herbert also has one that he's doing. Yeah, Tom has one that was accepted as well. Okay. So, a couple of announcements.
Volk has put out a second Network Service Mesh video. This one's a five-minute one, and the link has been added to the list, so some time after the meeting go ahead and take a look at it. I haven't seen it yet, but I suspect it probably talks a little bit about the problem we're trying to solve with Network Service Mesh. We should probably take an action item to get links to some of these videos onto the Network Service Mesh site, because I think they're quite helpful, and it would be good to get them linked from the NSM site and possibly also from our README. Yeah, I think that's a great idea. Ed and I also gave an interview on the Cloud Unfiltered podcast that's hosted by Cisco, and we've posted the link there as well. That one is a bit longer — I think it was about a 30-minute talk that we ended up giving. All right. Okay. On to the agenda board — actually, we haven't added the items from the past week to the agenda board, so my proposal is that instead of doing the agenda board today, we have a section where we talk about the changes that we're making, and then next week we'll be back to the agenda board. Does that sound reasonable to you? How does it sound to everyone else? Okay, we'll say it sounds good. Let's start with the first item on the agenda: the KubeCon demo for Network Service Mesh. Yeah, so on the slides I haven't particularly changed much. There was some thinking about a fairly basic format: first, tell a bit of a story, probably a trimmed-down version of Sarah's story. Then literally just have one or two very simple kubectl applies that make it real. And then the third part is being able to visualize it, hopefully with the Skydive integration, so that people can see the result. That was the set of thinking I was having about it. For that, I think we've got basically three things.
One is that we'll need someone to help with producing a shrunk-down version of Sarah's story that can be used in the demo to present what we're talking about. The second one — and this is stuff we're all working on — is delivering the working Network Service Mesh code, and we're getting quite close to the first drop of that. And the third is the Skydive integration; I know we've got folks, including David, looking at that as well. So how do folks feel about that as the one-two-three of the demo? I think that sums it up quite well. For the narrative, though, the one thing we need to be a bit careful with is making sure that the narrative we're giving matches the code for the network service endpoint that we're providing, so we need a little bit of collaboration on that side. I suspect that the Skydive components, because of the way it visualizes things, don't need quite as tight an integration from that perspective, but we should make sure that what's being presented is impactful for the demo. Let's see — we also don't have a Skydive section on here, so I'll add something to the demo section. I know there's been a lot of activity there, trying to sort things out, figure out where the APIs are, et cetera. Let's see, we have a section on the code as well, so let's go ahead and talk a little bit about that, and then we'll continue on with the agenda for Andre. So, in terms of the code, we've done quite a lot of work to simplify Network Service Mesh. Do you want to start off and talk about some of the changes we've made? Yeah, so effectively, Network Service Mesh has become much more microservice-like, in the sense that you've got a bunch of small components talking to each other over well-defined gRPC APIs.
And the biggest change that's going on right now — it turns out, and we found this out last week, and I'm kind of embarrassed we didn't find it until last week, that VPP actually already has something that exposes a gRPC API for it. It's called the VPP agent. So we can simply run the VPP agent and point at it as a gRPC client. This makes the data plane part of the story so much easier for us, because literally all we're doing is translating from the Network Service Mesh dataplane API to the VPP agent API, and they're both gRPC, so it becomes super easy to do. We're in the process of making that transition currently. And the good news is the VPP agent has been pretty well hardened — it's actually been used in shipping product — so that should take care of a bunch of things for us in terms of using VPP. They've also done the work to make the container for it super small: I think the container for the VPP agent, which includes both the agent and VPP itself, is something like 64 megabytes, which is pretty respectable given that it's got both of them in there doing their thing. Are there other things folks would like to point out or talk about? I know you've been doing a lot of work on the CRDs and the NSE-to-NSM stuff, Frederick. Yeah, I think the biggest change I made in regard to the CRDs is that historically the network service endpoints were driven by a single API: the API for registering a network service was the same API that was being exposed as a CRD, with the exact same model. One of the things we came to realize was that this wasn't a particularly good idea, so we ended up separating them out.
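The dataplane translation described above — NSM dataplane request in, VPP agent configuration out — can be sketched roughly as follows. This is only an illustration: the real messages on both sides are protobuf types carried over gRPC, and all the type and field names here (`NSMDataplaneRequest`, `VPPInterfaceConfig`, and so on) are hypothetical stand-ins, not the actual NSM or VPP agent APIs.

```go
package main

import "fmt"

// Hypothetical, simplified stand-in for an NSM dataplane request.
type NSMDataplaneRequest struct {
	ConnectionID string
	SrcMemifPath string
	DstMemifPath string
}

// Hypothetical stand-in for the interface config the VPP agent programs.
type VPPInterfaceConfig struct {
	Name   string
	Type   string // e.g. "memif"
	Socket string
}

// translate maps one NSM dataplane request onto the pair of interface
// configs that the VPP agent would be asked to wire together.
func translate(req NSMDataplaneRequest) []VPPInterfaceConfig {
	return []VPPInterfaceConfig{
		{Name: req.ConnectionID + "-src", Type: "memif", Socket: req.SrcMemifPath},
		{Name: req.ConnectionID + "-dst", Type: "memif", Socket: req.DstMemifPath},
	}
}

func main() {
	cfgs := translate(NSMDataplaneRequest{
		ConnectionID: "conn1",
		SrcMemifPath: "/run/src.sock",
		DstMemifPath: "/run/dst.sock",
	})
	for _, c := range cfgs {
		fmt.Println(c.Name, c.Type, c.Socket)
	}
}
```

The point of the sketch is the shape of the work: because both sides speak gRPC, the component in the middle is a pure translation layer with no packet-processing logic of its own.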
So one of them is just pure protobuf, and the other is just a structure that generates a client set for Kubernetes — it generates the clients necessary for us to get kubectl integration and so on — but it's very closely aligned with how Kubernetes expects things. The way Kubernetes does it, CRDs have three sections. There's metadata, where you put things like names and labels. There's the spec, which holds properties of the system that you don't expect to change, like configuration — so the spec could include things like what the payload of a network service is. And there's the status, which covers things like: is it online, what's its IP address, and so on. So it's now much more closely aligned with what Kubernetes expects from its CRDs. Effectively, you can now do kubectl get networkservices and you'll get a list of network services, kubectl get networkserviceendpoints, and kubectl get networkservicemanagers, and you'll get a list and status of each of these things, and it all just works. You can also access them programmatically. I've also written something so that when the first network service manager comes online, it checks to see whether the CRD has been created and, if not, creates it. So spinning up and adding the CRDs is as simple as running the application. So we have quite a few things that have been added in from that respect, but the end result is something a bit simpler, because if we make a change to the protocol, we can iterate on it without having to worry about whether we're breaking every consumer of that CRD. Beyond that, we've also isolated the registration portion — right now I'm calling it a registry.
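The three-part CRD shape described above (metadata, spec, status) can be sketched as plain Go structs. The real types embed Kubernetes apimachinery metadata and are generated into a client set; the field names here are illustrative only, assuming a payload property in the spec and online/IP in the status as mentioned in the discussion.

```go
package main

import "fmt"

// NetworkServiceSpec holds properties not expected to change — configuration,
// such as the payload the network service carries.
type NetworkServiceSpec struct {
	Payload string // e.g. "IP"
}

// NetworkServiceStatus holds observed, changing state.
type NetworkServiceStatus struct {
	Online bool
	IP     string
}

// NetworkService mirrors the metadata/spec/status convention. Name stands in
// for the metadata section (names, labels) in this sketch.
type NetworkService struct {
	Name   string
	Spec   NetworkServiceSpec
	Status NetworkServiceStatus
}

func main() {
	ns := NetworkService{
		Name:   "secure-intranet-connectivity",
		Spec:   NetworkServiceSpec{Payload: "IP"},
		Status: NetworkServiceStatus{Online: true, IP: "10.0.0.5"},
	}
	fmt.Printf("%s payload=%s online=%v\n", ns.Name, ns.Spec.Payload, ns.Status.Online)
}
```

Keeping spec (desired/configured) strictly separate from status (observed) is the convention that makes `kubectl get` output and controller reconciliation behave the way Kubernetes users expect.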
And effectively, the registry is what publishes the network services, endpoints, and so on. This information doesn't have to live in Kubernetes, but we implemented a Kubernetes component — a microservice that knows how to publish this on Kubernetes, and then uses Kubernetes for the bookkeeping. So in the future, if someone decides to build something out for another scheduler or orchestration system that isn't Kubernetes, this is the component you'd end up replacing in order to work out where the network services are and where to find them. Breaking out the APIs in that respect also simplifies things — I'm not going to say it future-proofs it, but it gets us a long way there. And the last thing I'm working on right now is an ICMP responder, since that's the hello world of network services. It is the simplest network service endpoint. Yes, it's the simplest one that I know of — there may be a simpler one out there somewhere, but none that I'm aware of at this point. Is there anything else you can think of that we need to add? I think that's probably the piece that comes to mind on what's been happening. There's been a lot of stuff moving in the code base — if you've been watching, a ton of really cool stuff has happened in the last couple of weeks, and things are really starting to fire on all cylinders. One of the other nice things, by the way, about this refactor is that it makes it possible to run component-to-component integration tests without having to stand up an entire Kubernetes cluster. So for certain kinds of edit-compile-run loops, it shortens edit-compile-run-deploy to edit-compile-run, which can speed up development a great deal. And we got go build working again.
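The registry abstraction just described — a publishing interface with Kubernetes as only one possible backing store — might look something like this minimal sketch. The interface and type names are hypothetical; the real NSM registry is a gRPC service, and an in-memory implementation stands in here for whatever a non-Kubernetes orchestrator would plug in.

```go
package main

import (
	"fmt"
	"sync"
)

// Endpoint is a minimal stand-in for a network service endpoint record.
type Endpoint struct {
	Name           string
	NetworkService string
}

// Registry is the swappable abstraction: the Kubernetes-backed microservice
// is one implementation; another orchestrator would provide its own.
type Registry interface {
	Register(e Endpoint) error
	Find(networkService string) []Endpoint
}

// memRegistry is a simple in-memory implementation, useful for
// component-to-component tests that don't need a Kubernetes cluster.
type memRegistry struct {
	mu        sync.Mutex
	endpoints []Endpoint
}

func (r *memRegistry) Register(e Endpoint) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.endpoints = append(r.endpoints, e)
	return nil
}

func (r *memRegistry) Find(ns string) []Endpoint {
	r.mu.Lock()
	defer r.mu.Unlock()
	var out []Endpoint
	for _, e := range r.endpoints {
		if e.NetworkService == ns {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	var reg Registry = &memRegistry{}
	reg.Register(Endpoint{Name: "icmp-responder-1", NetworkService: "icmp-responder"})
	fmt.Println(len(reg.Find("icmp-responder")))
}
```

An in-memory implementation like this is also exactly what enables the shortened edit-compile-run loop mentioned above: tests exercise the interface without deploying anything.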
Before, there was a cgo project that got included that would break go build. There's a pull request pending that fixes all of that, and it also gets us off of cgo and onto the native Go runtime, which is absolutely huge, because that's where 90% of the runtime work in Go is focused — on the Go runtime, not the cgo runtime. Yeah, that ends up being a side effect of consuming the VPP agent: by consuming the VPP agent, the APIs we see are all gRPC, and the only thing we have to do is take a VPP agent container off the shelf and swap in the configuration files that we need, and it's good to go. So are there any other questions, or anything that people would like to drill down on here? Okay, with that — Andre, do you have an update on the packet.net CI work? No, nothing's changed on my side for packet.net. I think the packet.net CI stuff is basically just working about 90 percent of the time. Occasionally the Packet deploy job still fails, and I'm told they're in the process of rolling out something they think will fix that. So basically, if we get that solid enough, then the question emerges: at what point do we retire Travis CI? I personally don't see a reason to keep Travis around at this point, primarily because we're doing the full build on CircleCI, and Travis is duplication. And if the CI fails with a Packet deploy, it fails the build anyway — Travis doesn't pass the build and override a CircleCI failure. Do other folks have opinions? This is what happens when I use Safari. Let me go ahead and just share what the CircleCI build looks like, because it's actually kind of nice — if I can find the proper share button. So this is what CircleCI looks like right now.
Can folks see the CircleCI screen I'm sharing? I can see it. Awesome. Effectively, one of the nice things we've done is break most of the containers off into parallelizable build steps. It turns out that most of these parallelizable build steps — are you on mute? I can't hear you. I'm not on mute. Can other people hear me? Yes, I can hear you. Okay. All right. It looks like we've lost Frederick's audio. But effectively, we've broken these down into very small steps that end up being pretty fast to build, and it also gives you really granular visibility into what's going on, because if you ask, okay, what failed? — okay, building this container failed, that's not good — you can just scroll down to precisely what failed for that container. As CI gets more complicated, one of the things that frankly drives me completely crazy, personally, is when you get the giant monster log of doom and you have to be very skilled to figure out why the hell the CI broke. One of the nice things with CircleCI is that you don't have that. Frederick, are you back? I am back. Cool. So do folks have other feelings on disabling Travis? Anyone in favor of not disabling Travis? All right, then it looks like we're probably going to disable Travis. Hello. Yep. Okay, I'll go ahead and add that as an action item — let me add that right now. So let's jump into the VNF/CNF comparison then. Is Taylor on right now, or is he in the TSC meeting at the moment? I'm here. This is Taylor. Can you hear me? I hear you, Taylor. Great. Okay. So let's see — we created this aggregate project view, since there are so many things going on. I just posted it in the chat. We have sub-projects for each of the larger components, so there's a project for the OpenStack side; that's in progress.
Right now, testing for the VPP Neutron plugin is one of the main items being worked on. Most of the rest of the OpenStack cluster that we're going to be using for the test is done, and deployments are automated and being documented and everything. So it's the VPP Neutron stuff. Then, once we have access to an environment that we feel is stable enough, we're going to start doing some of the updates for the actual test case, where we'll be connecting all the VNFs through the VPP vSwitch. On the Kubernetes side, we've added support for Ubuntu as a host OS to cross-cloud, so that'll be something that NSM can use for any of y'all's testing. It had been CoreOS before, so you can use CoreOS or Ubuntu at this point for the host OS. The CNFs don't matter — it's just the host OS. Let's see. A lot of the host configuration that we'd like to use for performance has been done as part of the packet generator work, so the system that's actually sending the traffic — TRex, with NFVbench driving it — has been done, and that's going to be rolled into all of the worker nodes to make it available. Right now we can deploy and provision a Packet system with dual Mellanox NICs, and it gets updated with all of the host settings and kernel configuration, reboots, and the box is available. We can also provision a quad-port Intel configuration — this is something that right now only we have access to, quad-port Intel NICs — and we have provisioning working for both of those configurations. We have some reserved systems, so this is kind of early access before that configuration is made publicly available for everyone else. I think all of that provisioning software is going to be useful as NSM gets past a lot of the functional testing and wants to target real specific things. All of that is publicly available.
And we've been working on the VPP vSwitch setup for the test case, and on provisioning it to support both scenarios. Our test cases are direct CNF connections between each container over memif, as well as a secondary comparison to show a more direct apples-to-apples setup, which is all the CNFs going through the vSwitch in a snake — which is what you have to do on the VNF side. So we're doing both of those in Kubernetes, and we're in the process of building out the provisioning for that. We also have a lot of results from Maciek, Peter, Michael, and a bunch of people who have been testing the software on the CSIT lab side, validating that they're getting the expected results from the daily runs that happen in the CSIT lab, comparing those to what we have, and then we're rolling and merging anything that's ready back into the code so that we can optimize. Most of the results are pretty raw and they're going right into the repo, but feel free to look at those in the CNF repo — some of them are direct dumps from NFVbench, and then there are some summaries and a couple of markdown files. But I think that's probably it for us. Nice. Okay, any questions before we move on? Okay, let's see about jumping into Skydive. Do we have the right people to talk about Skydive on the call right now? Yeah, I'm here. Cool — can you state your name so we know who we're talking with? Is that David? Yeah, please. Okay, cool. So in terms of Skydive, one of the things that we need to get to you is the monitor endpoint. I wanted to mention two areas where you can pull information from: one of them has been built out already; for the second one, we need to write a monitor connection endpoint. So to start off, here's one question, and I want your opinion on this as well: do we want Skydive to pull some of the topology information directly from Kubernetes?
Or do we want to expose it through a monitor endpoint that pulls changes from network services, network service endpoints, and so on, and passes them along to Skydive? I think the reality is that we don't have any plans on the record for making Kubernetes aware of the connection-by-connection topology of Network Service Mesh — and probably for the best, because frankly I think that would be uninteresting to the Kubernetes layer and ultimately not helpful. So I think we're left with having some kind of an API — call it monitor connections, for want of a better term — available from the network service manager, the NSMD, that would provide information about those connections and changes to them. That could go northbound and be integrated with a variety of things, one of them being Skydive. Does that make sense? Yeah, for the connections I fully agree that should be a generic monitor-connections API. The second question is: what about the network services and network service endpoints? Those do land in Kubernetes. One option we have, in the interest of time, is to pull information out of those — they have very well-built clients. The second approach is to wrap those clients in gRPC and just expose the network service endpoints themselves as a gRPC endpoint. My guess is that it ends up looking like the following: you've got two sets of problems with Skydive. The first problem is, how do you find the network service managers that you can ask for topological information? And then the second is: okay, having found them, how do you actually go and ask them for topological information?
It strikes me that finding a network service manager is something that's probably best done via the CRDs, because those are clearly visible and out there — although you could do the discovery piece via gRPC if you were to bring in something like the NSM k8s component, which will give you gRPC if that's what you prefer to consume. That gives you a nice little adapter there for discovery. The other question is: paint me the picture of what it would look like to represent a bunch of network service endpoints, in the absence of their links, in Skydive. Yeah, I'm not sure what that would look like. It could look like a node, because there is something there — we could show a node that is isolated, that nothing has any connections to yet. And then the actual connections, when someone creates or closes a connection, become edges from clients to those endpoints. Yeah, I think we wind up with the following: the network service endpoints as the nodes, and the edges we can discover from the monitor-connections kind of stuff. You're suggesting possibly getting the nodes from k8s — that would give you the network service endpoint nodes. But of course, network service clients don't actually advertise themselves for discovery, so that would have to be sorted out. Hey, a question: when we talk about nodes, we're not talking about the k8s notion of nodes, we're talking — oh, graph nodes. Thank you for catching that. Oh, I see. Right. Okay, you're right to pick me up on my language. I try to remember, when talking about generic graph theory in a Kubernetes context, to always call them graph nodes, but I don't always succeed. Yeah, same problem with pods and namespaces. No, but thank you for calling me on that. Cool.
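The idea being discussed — connection create/close events from the monitor becoming graph edges between client and endpoint nodes — can be sketched as a small event loop. This is a sketch only: a Go channel stands in for the gRPC server-side stream that the real monitor-connections API would expose, and all the names are hypothetical.

```go
package main

import "fmt"

// EventType distinguishes connection lifecycle events on the monitor stream.
type EventType int

const (
	ConnectionCreated EventType = iota
	ConnectionClosed
)

// ConnectionEvent is a stand-in for what an NSMD monitor endpoint might
// stream northbound to a consumer such as Skydive.
type ConnectionEvent struct {
	Type         EventType
	ConnectionID string
	Client       string // graph node on one end
	Endpoint     string // graph node on the other end
}

// monitor consumes the event stream and maintains the edge set:
// created connections add edges, closed connections remove them.
func monitor(events <-chan ConnectionEvent) map[string][2]string {
	edges := map[string][2]string{}
	for ev := range events {
		switch ev.Type {
		case ConnectionCreated:
			edges[ev.ConnectionID] = [2]string{ev.Client, ev.Endpoint}
		case ConnectionClosed:
			delete(edges, ev.ConnectionID)
		}
	}
	return edges
}

func main() {
	ch := make(chan ConnectionEvent, 3)
	ch <- ConnectionEvent{ConnectionCreated, "c1", "nsc-1", "icmp-responder-1"}
	ch <- ConnectionEvent{ConnectionCreated, "c2", "nsc-2", "icmp-responder-1"}
	ch <- ConnectionEvent{ConnectionClosed, "c1", "", ""}
	close(ch)
	fmt.Println(len(monitor(ch)))
}
```

Note that in this model a client node only comes into existence when its first connection event arrives, which matches the "create graph nodes on demand" option raised in the discussion.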
So I think we could create a graph node on demand when we see a new connection come in, because technically, until a client makes a connection request, it's not part of the Network Service Mesh world. And so it would be reasonable to say that the creation of a connection adds it. So I'll throw this out there: we've actually got a lot of folks on this call who have pretty deep networking experience. I'd love to hear some other opinions about at what point you would find it useful to know about the various nodes and links in the topology that's being visualized for you for Network Service Mesh. Could some of the folks who speak up a little less frequently, but who have quite a bit of depth of experience, speak up? Sorry, what was the question again, Ed? The question is: we've been debating whether or not it's helpful to represent the network service endpoints as dangling nodes — in other words, nodes that have no edges yet — when representing a topology. Graph nodes, in this case, graph nodes with no edges. We've got a lot of people on this call with a lot of depth of experience in networks in a variety of ways, so I was asking: okay, you run a network — does it help you to see that there is a graph node in your topology that is completely without edges? So — this is Ian, by the way; I was able to join this week for once because I actually found the meeting link. What you really seem to be talking about, if we compare it to the physical world, is: here's a device in my rack, it's got no wires to anything; should I represent it in my topology? And the usual answer would be no, that would be silly, don't do that. It's inventory, but it's not topology. No, I think that's a fair distinction. We essentially have an inventory available in the Kubernetes API server, if what you want is an inventory. Yeah.
And I mean, there are obviously two questions here: how do you expose this through APIs so that Skydive can consume it, and how do you display it? As you say, it could be exposed through APIs, and Skydive could consume and display it, but the point is that it's not a topology view you would want to display it on — it would be something slightly different. Yeah. Are we talking about a view? Because me as an operator — moving over from the abstract world to an operator that may want to see something like this — I would, I think, like to be able to see the candidate NSM endpoints, if you will. Yeah, so I'll throw that out there. Comments? What we need to figure out is a couple of things. The first is priorities, and the second is what Skydive currently offers. For example, I'm currently sharing the Skydive UI, and it has a tab for topology and also a tab for discovery. One of the questions I think we may want to put to the Skydive people is: is this discovery tab really an inventory? Because if it is, then — getting to Ian's very succinctly made point — we might want to feed that inventory into Skydive. The second point is priorities: I would maintain that visualizing topology in the immediate term is a higher priority than capturing and visualizing inventory in Skydive, if that is even a feature in Skydive, because what we're hoping to show people at KubeCon very shortly is, in fact, visualization of topology, not visualization of an inventory. Yeah — when it comes to actually showing this off, you're going to want to show them what's wired. Now, I would emphasize that whenever I've been doing NFV, those topologies actually turn out to be pretty boring in practice.
So, you know, the one you're showing there is actually a lot more complex than what usually turns up in reality. But what you're really trying to communicate — what exists as a beautiful node graph in your head — is not necessarily very easy to communicate using words, and that's where Skydive is going to help you. The inventory is not wired into that node graph, so the inventory of things that you could do is not so relevant. Okay, I think that's quite reasonable. Awesome. So in that scenario, should we just focus on the connections for now? That seems to be what I think I'm hearing. Is that what everyone else is hearing? For now — I mean, what we're saying is for KubeCon that would be okay. Yeah, fair point. Whatever we can do to make it as simple as possible, given the tight time constraints, I'm all for it. Yeah, so let's focus on the connections only, then, and we can add things to the graph as the monitor publishes them — and I think we also have to monitor the deletion of connections in that scenario. If we do that, we should have something that can visualise that path very well. To ask a slightly meta question on this: is somebody documenting, for future consumption, how Skydive is learning and unlearning these things? Because it seems to me that one of the things that's missing here — that kind of takes a back seat to everything else — is how we intend this to be used. And this example of Skydive getting hold of the topology is one where we should say: this is how we did this, because this is what we intended. A worked example.
Yeah, I think part of this really comes down to — Ian, I think this is getting to your question — which is, I feel like you're saying that visualising in Skydive is all well and good, but how exactly are we going to expose the things that Skydive is consuming? Because there are going to be other, more sophisticated consumers who are going to want to consume them. I have a related question: what protocol is used to discover the topology? Is this something that is predefined, predetermined, or is there freedom to choose anything and everything? Our current approach is to build a monitor connection into each network service manager. So each agent on each node would have a monitor endpoint that you could connect to, and this monitor will stream you a list of connections as they're created and destroyed. But will it be just local links, or will it be links and neighbours, using some neighbour discovery protocol like IPv6 neighbour discovery, or LLDP, or something new? So I think what we'd be representing here, Maciek, is the network service manager's viewpoint of the links that it's dealing with. And the network service manager knows quite a few things: it knows who the network service client is, it knows who the network service endpoint it's connecting to on the other end is, and it knows various details about that particular cross-connection that it can share. Given that, it's sort of like — to go back to Ian's analogy of a data centre — yes, you can run LLDP, and sometimes that's helpful, but if literally the guy who connected the cables between the two servers has a perfect eidetic memory and 100% of the time knows precisely where the two ends of the cable are connected, you have a very powerful tool that doesn't require those kinds of things. For things like LLDP, we may or may not actually have both ends of the connection, from a data packet carriage point of view, in the hands of the network service manager.
It's just the one who set it up. Okay, thank you. Cool. Yeah, so I think we have a wealth of information off of even just a single network service manager on a node, and one of the things we'll be able to do is just monitor the connections on it. The one challenge that I can see, though, is that when you're listening — suppose you have a cross-connect that crosses node boundaries — then we may want something that can deduplicate and unify. Yeah, effectively what you're getting in that case is a report from both ends of the link. And I'm almost certain that the Skydive people have a way to deal with that, because Skydive is a series of probes plus something that collects from those probes, and I know Skydive has situations in which it uses LLDP. With LLDP, what you get is one side's view of a link and the other side's view of the link, and then you've got to bring them together, so I'm pretty sure they've got that sorted out. Okay, so in that scenario, the action item that comes out is that we have to build a monitor for publishing the local state the network service manager knows of when connections are created or destroyed. I think that's the one thing we have to add here. Cool — and David, if you need any help, let us know, and we'll help out as much as we can. All right, so let me summarize: I'm waiting for you or somebody to expose those APIs to me, and then I'm in charge of dealing with the Skydive APIs to take the information I get and properly show the topology, right? That's correct. Okay. I think if things go well, I'll be able to give you guys a demo this week. Yeah, that sounds good. Let's not set an exact date for that just yet — wait until you have the monitor endpoint, so that you have an understanding of the complexity involved.
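The deduplication concern raised above — a cross-node connection being reported by the network service manager on each end — comes down to keying reports by a shared connection identifier so the collector merges the two views into one link. A minimal sketch, with hypothetical names (the real collector would live on the Skydive side):

```go
package main

import "fmt"

// LinkReport is one NSM instance's view of a cross-connect it participates in.
type LinkReport struct {
	ConnectionID string // shared by both ends of the same cross-connect
	Reporter     string // which network service manager saw it
}

// dedupe collapses reports from both ends of a link into a single entry,
// keyed by connection ID, preserving first-seen order.
func dedupe(reports []LinkReport) []string {
	seen := map[string]bool{}
	var links []string
	for _, r := range reports {
		if !seen[r.ConnectionID] {
			seen[r.ConnectionID] = true
			links = append(links, r.ConnectionID)
		}
	}
	return links
}

func main() {
	reports := []LinkReport{
		{"xconn-42", "nsm-node-a"}, // one end of the cross-connect
		{"xconn-42", "nsm-node-b"}, // same link, reported by the other end
	}
	fmt.Println(len(dedupe(reports)))
}
```

This mirrors what an LLDP-style collector does when it receives one view of a link from each side: the merge is trivial as long as both ends agree on an identifier.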
But once you have enough information there, let us know at that point how you feel about giving a demo and when you'd like to. Okay, no problem. Cool, let's see, are there any other questions on Skydive, or shall we move on to our last item on the agenda? Okay, so Maciek now has the CNCF network service VNF and CNF data plane benchmarks and comparison. So you have the floor, Maciek. Okay, actually I have most of the stuff already typed in. Can you guys hear me? Yes, we can hear you. Yeah, it's already on your screen. So Frederick, if you could reshare; I'm not in a position to share. My computer is overloaded with stuff, and sharing would basically kill all four cores that are currently processing on my Mac. So if you could reshare, please. One second. I can quickly walk through the points. Again, I stopped sharing in anticipation that you would probably want to share, and I can reshare. No, no, no. I do love sharing, but not my screen today. So there is a team formed. We had all sorts of staff churn, or team churn, due to various things, mainly personal and health related on the family side. That's Michael and Ed. But Michael is coming back online, and Ed Kay will be back on Wednesday. Peter Mikus and me from the FD.io CSIT team are driving it, and coordination and supervision is with Taylor and Lucina. Alec Hothan joined the team, as he's the author of NFVbench from the OPNFV project. So he's now plugged in, and he's fixing one specific issue, a bug, which is documented in the link that you will be able to click in a moment. The service topologies are VSC, CSC, CSP; I think we discussed this last week. If you click on the link on the second bullet, Ed, you should get to the HTML version of the current status map. That's as of the 6th of November. They're going to be issued every day now, and the last number is basically the date in November, so the 06 is the date. So if you go, you're already there. 
You don't need to open all of them. Okay. So you can see that we do have a multi-chain T-Rex latency that is not measured and is crashing NFVbench. That is the issue that is hard to reproduce; it is only reproducible in a larger-scale environment. So Alec is diagnosing that now; he's in the Pacific time zone. In the European time zone, Peter is doing the tests. All the results are being pumped into the CNCF repo as per the following links. All code is now done; we can basically do any combination of the CNFs in the service chain or in the service pipeline. With VNFs, we have some acrobatics to do related to the 802.1q part, but that is being fixed and may have been done; Peter has been hard at work, so maybe he has pushed it to the repo. I'll have an update from him tomorrow, or I can check the git log. So we are pretty much done with the assisted version, and it is now really doing dry runs. And then we are looking at fine-tuning the setup. Michael is partially back, and I believe Michael is coordinating with you, Taylor. Apologies for missing the call earlier, but my calendar is not picking up Slack yet. So it needs to pick up the slack; I'm really using my calendar as a single source of truth for my time during the day, not multiple apps. But hopefully we can fix that. I understand from the Slack conversation, and from Peter, who is also using Slack with Michael, that Michael is now basically taking the work done on the FD.io assisted part and doing the glue with you, the Ansible glue, so that things can be automated for a full one-button or one-key-press, let's call it a green-button press, to run those environments in packet.net. And that will allow you to then, further on, abstract it in the orchestration stack so it can be used in the future for driving larger systems. So, Taylor, if you're okay, let me just finish my update and then I'll let you comment. So that's where we are at packet.net. 
And now I think I should have, in my inbox or on Slack, the location of the NICs and the way to access them on packet.net. I would like to run some basic tests on a hardware level, if I can. And if they are good, and the NICs are placed okay and everything is wired and kosher, then Kern, when he comes online tomorrow, will basically start testing it in coordination with Michael and Peter. So that's pretty much the software data plane benchmarking part, and hopefully, if things work, we should have good results by the end of the week. We'll be comparing them with the CSIT 18.10 release, which is shipping tomorrow, on time, on Skylakes, which are the same type of processors running in packet.net. And then they are going to become a public reference for anything we test. So far things are looking on time. I think the biggest risk I see is the packet.net part, because it is a third-party operated system and we don't know whether it's all working. But according to your communication on Slack and email, Taylor, the Intel NICs are there and they are working, so maybe I'm just paranoid. When we do a baseline calibration test, we should know within a day or two if we are good, and then it's just a question of dry runs. That's it. Thank you. Cool. Great. So I guess, did you have something to mention before we close it out, Taylor? I think that pretty much covered it: all of those updates are coming, and as they become available we prioritize and merge them in. It's optimization and tuning for the performance side, and then making sure that we can re-create stuff. As I said earlier, we have testing with the Mellanox dual port, and we just added the quad port. The quad-port NICs were not fully available until yesterday, and some of them were available last week, so we got started. But that's in there, and we'll keep doing testing and validation to make sure everyone else can re-create any of those tests. Nice. 
Okay. Well, with that, one last note: Travis has now been killed off the repo; that was just merged in. And with that, I want to thank everyone for your time, and we will see you again next week at the same time. Take care. Cheers. Thanks, everyone. Thank you. Thank you very much.