Well, thanks, Taylor. This is Lucina from Vulk Cooperative, and here are some upcoming events. Ed from Packet let us know about Linaro Connect in Bangkok, Thailand, the first week of April; it looks like that will be around CI systems and scheduling, and there's a link to that event. Also in the first week of April is the Open Networking Summit North America 2019. We've got a couple of CNF Testbed events on the calendar for Wednesday of next week: the tutorial on driving telco performance with the Cloud Native Network Function Testbed, and also next Wednesday we'll be presenting how to set up the CNF Testbed, which is reproducible by following the steps. At the end of May is KubeCon + CloudNativeCon Europe in Barcelona, where there will be two CNCF CI sessions, an intro and a deep dive, plus an 85-minute CNF Testbed birds-of-a-feather session. In the meeting notes there are links to those events so you can add them to your calendar. KubeCon + CloudNativeCon China will be in Shanghai at the end of June this year. Are there any other events that anyone would like to share with the group?

Sounds good. Next on the agenda, and feel free to add agenda items if you'd like, I'd like to give a status update on the CNCF CI dashboard. You can take a look at the v2.0.1 and v2.1.0 releases, as well as what's in progress and what's next. Then HH will talk about APISnoop and the Prow automation.

Since our last call at the end of February, the CNCF CI dashboard has had two releases, v2.0.1 and v2.1.0. v2.0.1 was released on Monday, March 4th, and it included the new UI, the new test environment section for the Kubernetes stable release, and the new Test column for the end-to-end test stage for projects. The visualization of the test environment is what you see at the top of the screen: we've moved Kubernetes into the test environment, where it is running the stable release on Packet bare metal. In future iterations we will have a dropdown so that you can toggle between different Kubernetes test environments. The new Test column has been added after Deploy, and it will show the status of each project's end-to-end tests. Right now you'll notice they're all N/A, as the end-to-end tests are inactive at this time. Part of our goal in increasing collaboration on the CNCF CI dashboard with the CNCF project maintainers is to meet with those contributors and start building those end-to-end tests. We also swapped the locations of the Build column and the Release column in the new UI.

Then on March 21st, last week, we released v2.1.0. At a high level, we updated how project details are updated. That's the first step in increasing collaboration with CNCF project maintainers so that we can add more CNCF projects to the dashboard faster. The project details are what you see in the Project column here: the logo, the display name, and the subtitle, which is also a button; clicking it goes to the CNCF project's GitHub repo. We also created a contributing guide with steps on how to update those project details, and we resolved the "last updated" counter that was unexpectedly showing "now", so it is working as expected again. A link to the contributing guide is included in the slide deck.
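As a rough illustration only, not the dashboard's actual file layout or field names (the contributing guide is authoritative), updating a project's details might look something like this:

```bash
# Hypothetical sketch only -- the real file path and field names are defined in the
# contributing guide, not here. Shown to illustrate the kind of change involved.
git clone <dashboard-repo-url> dashboard && cd dashboard
git checkout -b update-myproject-details

cat > config/projects/myproject.yml <<'EOF'   # hypothetical path
display_name: MyProject                        # name shown in the Project column
sub_title: Example service proxy               # subtitle under the name
logo_url: https://example.com/myproject.svg    # project logo (SVG preferred)
repo_url: https://github.com/example/myproject # target of the project button
EOF

git add config/projects/myproject.yml
git commit -m "Update MyProject display details"
# then open a pull request against the dashboard repo for review
```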
That guide will be updated incrementally as we add the steps for each column. We've gone through the first step, the Project column and how to add and modify those details; the next step will be the Release column. Then we'll work on the external integrations with the CI systems for the builds, the deploys, and the end-to-end tests. Here's what that "last updated" bug looked like, showing "now"; it now shows "12 hours ago", the correct time since the 3 a.m. Eastern refresh of the CNCF CI dashboard.

We've got several items in progress as well. The first one is that dropdown I mentioned earlier. We currently show only the Kubernetes stable version on CNCF CI, and we are working on that test environment dropdown so that you can toggle between stable and head for Kubernetes. This is a building block for adding ARM support. Our goal is to add ARM support to Kubernetes and the CNCF projects on the dashboard. After we get the Kubernetes stable and head test environments also working on ARM, in addition to the current Intel machine, we will add ARM support to CoreDNS. This is the design mock for adding ARM support to Kubernetes in the first iteration. We did receive an enhancement request, and we have it in our design phase now to iterate on that idea, so we're going to move forward with this original design and then iterate on and enhance it in a future sprint. As for updating the Kubernetes stable release from v1.13 to v1.14, that will most likely be post-ONS, but I will open that up at the end in the Q&A section.

What's next is to add ARM support to CoreDNS, and this is what we anticipate it will look like in a perfect world: the day's head environment on ARM will be provisioned onto Packet bare metal, and the provisioning phase will be a success. Then CoreDNS will build its latest release on the ARM machine, and that build will be a success for both stable and head. Then those build artifacts will be deployed onto the provisioned Packet machine, all successfully. That's what we hope to achieve. Right after we update the dropdown and add ARM support to Kubernetes stable and head, we'll practice changing project details: we have ticket #77, where we'll be updating the logos on CNCF CI, replacing all of the project logo icons and the CNCF logo with SVG versions. Whoever picks that up will follow our documentation and improve it in case any steps in that contributing guide need to be updated. Then we will work on changing where the release details are updated, that is, the stable release name and the head release commit. After that comes the Build column, changing the integrations with the external systems to retrieve the build details, and then the Deploy column. We also plan to do some maintenance: we've got some styling updates to resolve some visual bugs, and we'll plan out some enhancement requests. And we'd like to do quarterly software updates, so we'll take a look at our Vue.js app and Ruby.

The roadmap can be found in the cross-cloud CI repo's roadmap markdown file. This month we'll continue adding ARM support for Kubernetes and CoreDNS on CNCF CI. Next month we'll continue adding ARM support for additional graduated CNCF projects, and we'll update our stable and head Kubernetes releases to v1.14.0. We'll also change how the release details are added to CNCF CI, add support for those external integrations, and write up the documentation.
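Since the ARM work centers on building CoreDNS for an ARM machine, here is a minimal sketch of cross-compiling CoreDNS (a pure-Go project) for 64-bit ARM with the standard Go toolchain; this is illustrative only and not the dashboard's actual build integration.

```bash
# Illustrative only: not the CNCF CI dashboard's actual build pipeline.
# CoreDNS is written in Go, so a static arm64 binary can be cross-compiled on any machine.
git clone https://github.com/coredns/coredns.git
cd coredns
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o coredns .
file coredns   # should report a 64-bit ARM (aarch64) ELF executable
```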
And skipping ahead to May, at KubeCon Europe we plan to do an intro and a deep dive; the deep dive session will be how to add a project to CNCF CI, so we'll be working on those steps incrementally to have all of the steps ready, as well as the contributing guide up to date. Upcoming events: at ONS, the driving-telco-performance tutorial and validating performance in a reproducible testbed next week, and then next month the CNCF CI intro and deep dive and the CNF Testbed BoF, intro, and deep dive. We welcome your feedback, your enhancement requests, and any questions you may have. Feel free to add issues to the cross-cloud CI dashboard issue tracker. If you haven't already joined our Slack channel, please join the CNCF CI channel on the CNCF Slack, and you can also join the CNCF CI public mailing list. These calls are monthly on the fourth Tuesdays. Just today we changed our Twitter handle to CNCF CI from cross-cloud CI, and we have yet to change our GitHub handle; coming soon. So, any questions on the previous two releases since our last call in February?

Well, thank you so much. I will re-share the agenda and notes. HH, are you ready, and would you like to screen share with your update? I usually say I can't wait for the screen share. We have a lot of projects within the CNCF, and one of the things when we started the CNCF CI working group was to try to find some ways to help with CI. In working with our own project, I think we've been needing our own deployments; that's where we're coming from, and I thought it might be useful to share our approach just to get some feedback and thoughts from the greater CI working group. I'm going to try to share my screen, and hopefully this is super easy and works wonderfully. I'm going to click share screen now. Can you see it? I'm only seeing a blank screen, Chris. Same for me, a blank screen. Is there anything I can do to change it? Unfortunately, no. I can see the menus as they pop up. Yeah, that's only the menus, that's it. The screen I was going to share has also gone black for me, even though I have multiple windows. Let me try again. Next time. Sorry for that; I don't want to take up your time when I can't present, and because I can't see, I can't share. Wait, this is my work... let's try this. Very top of the screen. Should be able to... I'm going to click the very top here. Yeah, like the middle area. Leave and rejoin, and then hopefully it will remove my sharing. Can somebody claim host or remove me from the... Can I do anything? No. Yeah, I'm going to exit, so I'll leave the meeting and try to come back.

Taylor, would you like to move to your updates, and then we can try again with HH when he gets back? Yeah, that sounds fine. I don't know if there's something I can share for him, but if possible I can help with that as well. So I was going to give an update on the CNF Testbed. I'm actually on the same doc, so I don't have anything in the slides, but I can open the repo. For folks who don't know, the CNF Testbed is an effort to create a fully reproducible environment for testing network functions on OpenStack and Kubernetes, covering various use cases. Most of them have been performance-focused; there will be others testing different things, functionality, resilience, whatever, but most of those have been performance so far. We're testing on Packet as the primary environment, and there's also some collaboration with the FD.io CSIT testing lab, where
part of the test cases are being replicated there, and we're also testing on the Linux Foundation systems. On Packet, we create the machines, provision the resources from scratch, and bring them up, and those have primarily been systems with Mellanox network hardware. We've been able to do that for Docker, KVM, Kubernetes, and OpenStack.

The latest addition has been on the OpenStack side: we've been adding support for Ubuntu 18.04, which will be one of the bigger ones, getting it in parity with the Kubernetes side and using the same host setup. This brings everything up to date across the board for OpenStack, which is already using the latest OpenStack release for the host. On these systems the vSwitch for OpenStack is swappable: by default you would use OVS in OpenStack for the switching and networking, but for the high-performance path we're using an FD.io project called VPP. That allows us to do high-performance network connectivity with access to the cards, the NICs themselves, and we're able to do the same type of connectivity on both OpenStack and Kubernetes so that we can do comparisons. On the OpenStack side you use Neutron, if you're familiar with that, for all the networking; on the Kubernetes side we keep the same flat layer-3 network connection and then add additional interfaces. Right now, those network interfaces in Kubernetes are manually stitched together; we're looking at adding NSM support, which is tied into the next item.

On the OpenStack side, one of the items we wanted to get to was supporting an additional type of system that's coming up on Packet: machines with Intel network cards. Those don't require additional drivers like the proprietary drivers on the Mellanox NICs. The Mellanox-based systems are public and you can get them now, but the Intel ones don't require any additional drivers; the support is built into Linux, along with another Linux Foundation project, DPDK, which you can use out of the box. We've been working to add support for that to the OpenStack-with-VPP installation and deployment we have, and to update it to Ubuntu 18.04, and that's happened at this point. We have Intel support now, so you can actually select whether to use an Intel system or a Mellanox-based system. The Intel systems are not yet released; they were announced at Mobile World Congress last month and should be coming out. They actually came out, but there aren't enough systems provisioned for the public yet. That'll be the n2.xlarge, and those have Intel NICs.

What this means is anyone can go and take the code in the CNF Testbed, currently in the tools area, and deploy an OpenStack cluster. You can decide whether that uses OVS or VPP; either way, you'll be able to configure your network for whatever test case you want using standard Neutron configuration, and you can do that on the publicly available machines. On the Kubernetes side, you can deploy a cluster on the same machines, the Mellanox and the Intel systems, and then run any of the same tests. The other item is moving towards how we configure the network connections that you're going to run CNFs on in Kubernetes. At some point we may do some test cases and configurations to show Multus and other CNI plugins. Right now, those are stitched together using Ansible and some other pieces, and we configure the CNFs at deploy time. We use Helm charts, and the interfaces show up in the host system.
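As a sketch of what "take the code in the tools area and deploy a cluster" could look like, the commands below are illustrative only: the repository is the public CNF Testbed repo, but the script names, flags, and environment variables are hypothetical placeholders rather than the testbed's actual interface; the repo's own README and tools directory are authoritative.

```bash
# Illustrative walk-through; the deploy scripts and flags below are hypothetical,
# not the CNF Testbed's actual CLI -- see the repo's README and tools/ directory.
git clone https://github.com/cncf/cnf-testbed.git
cd cnf-testbed/tools

# Packet credentials would be needed to provision bare-metal machines (placeholders).
export PACKET_AUTH_TOKEN="..."
export PACKET_PROJECT_ID="..."

# Hypothetical: provision hosts and deploy OpenStack with the chosen vSwitch.
./deploy_openstack_cluster.sh --vswitch vpp     # or --vswitch ovs

# Hypothetical: deploy a Kubernetes cluster on the same machine types.
./deploy_k8s_cluster.sh --nic mellanox          # or --nic intel
```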
Going forward, we're looking at using something like Network Service Mesh in addition to exploring CNI plugins. To get to that, there are some items we needed to look at. One is a pod-based vSwitch. For the vSwitch, which is OVS in the OpenStack world, there is actually an OVS CNI plugin; we're looking at VPP for the high performance. We've been working on moving the vSwitch into a pod so that it's more in alignment with how you would use it with something like NSM, as well as being more cloud native in general in how it's deployed, and it would also be a closer match to VPP with Neutron on the OpenStack side. That's what this ticket is about. At this point we have it working for most of the test cases; we're doing some final tests on hardware with more cores so that we can have more CNFs running at the same time. At that point this will be done, and you'll deploy the vSwitch using a Helm chart onto an existing Kubernetes cluster, at which point you can use that vSwitch the way we're using it, or potentially otherwise. We'll be talking with FD.io and the VPP group about potentially having a public container; that's one of the items we want to do from here, having that as a public container that could be useful for other people in different deployments on top of Kubernetes. That's something we think will come out of this beyond just the test case here.

The other related item is unprivileged CNFs. For performance reasons and other considerations on Kubernetes, such as pinning specific cores to containers, a lot of the test cases we're running use privileged containers. We've been working towards having both privileged and unprivileged containers, because those are different use cases you could see in the real world. That's another item we have in progress right now, and it's related to NSM: once we have both of these fully in place from a functional standpoint, then we'll be looking at a use case with NSM managing the connectivity between the containers, both connecting between nodes and connecting containers on the same node. I think that's it as far as the current work.

A potential new item, which I would love to get some feedback on, is additional use cases, one in particular looking at SR-IOV use cases and working with some of the folks on Network Service Mesh and other groups that are interested. SR-IOV is a way to do acceleration in the VM world; it's been used a lot with KVM, there are places you can use it in OpenStack, and it's a very common thing with telcos. So what we're looking at is: what are some real-world use cases using SR-IOV and other kinds of performance use cases that we can take and re-implement in the testbed, so that other people can rerun them, share them, and understand them, and then take that and implement a cloud-native version following the methodology we would expect on Kubernetes. That's the goal, and this one is just getting going. We've had use-case calls going for a few weeks on different things, and this one popped out as something we should focus on. So if folks are interested in this, I'd love to get feedback. And yep, that's it. Any questions?
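To make the pod-based vSwitch idea concrete: the plan described above is that the VPP vSwitch ships as a Helm chart and gets installed onto an existing Kubernetes cluster. A hedged sketch of that step, with a hypothetical chart path, release name, and values (not the testbed's published chart), might look like:

```bash
# Hypothetical example; chart path, release name, and values are placeholders.
# The point is that the VPP vSwitch runs as a pod, installed with a (Helm 2-era) chart
# onto an existing Kubernetes cluster, and CNFs are then attached to it.
helm install ./charts/vpp-vswitch \
  --name vswitch \
  --namespace vswitch \
  --set hugepages.enabled=true \
  --set resources.limits.cpu=4

# Individual CNFs would be deployed afterwards with their own Helm charts.
```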
Are you doing anything with ARM for any of the testbed work? Have you heard of anything? Yeah, nothing right now as far as implementing; it's definitely been mentioned as being in use by some people. But if there are any specific use cases on ARM... most of the use cases we're looking at would be on Intel CPUs, not even AMD CPUs. We've intentionally avoided the Packet AMD machines, and a lot of that has to do with specific performance tuning, which a lot of the folks involved know about, that's available on Intel machines. So I think if we were looking at ARM, we would want a specific use case that makes sense on ARM. It's definitely something that would be available; there are a lot of ARM machines out there on Packet if we can find a use case that matches.

Taylor, this is Dims. Quick question on the ARM: are we talking about all ARM nodes with the master on x86, or is the master also on ARM? Well, for the CNF Testbed there is no current ARM support; there's nothing being deployed right now on ARM. OK, thank you. If there were a use case where we wanted it, then we could run Kubernetes on ARM, and it would be a matter of finding a CNF that made sense that could also run on ARM. OK, so the ARM work is just for CNCF CI at this point? Yeah, the CNCF CI project is currently implementing ARM, and the Kubernetes cluster, the entire cluster, is running on ARM. OK, thank you.

OK, if there are no other questions: Hippie Hacker, are you wanting to try again? That would be great, sorry about that. I'm going to give you a URL that is going to be updating while I'm typing. This is our presentation site; please open it locally in your browser if you're interested and follow along. Is that the tmate URL? It is, it is; I'm going to be typing. Would you like me to share that screen? I think not, because what I'm going to be doing involves a lot of links that I click on, and I'm going to share via Zoom. So bring up the URL I just shared and the Zoom sharing side by side: on one side you might have the web browser with the URL I gave you, and on the other side the Zoom share. I'm going to attempt to share the single browser session really quick; let's see if this works. Did that work? No, it blacked out my browser. I'm happy to share those as you drop the links. Okay, let's try that; it'll be a little different. If you go ahead and share the URLs as we go, then I'm going to start off with a quick overview and then go into the mirroring and the pipelines and how that flows into environments, then go into our cluster overview and how things are connected via the jobs, down to the namespaces and our pods, and also dig directly into a particular deployment. We probably won't get all the way to debugging the build, but this should give you a taste of what we're up to.

So I'm going to focus on the overview for a minute. The software we're building has a back end and a front end, and it all needs to be glued together. We looked at Netlify and a few other CI things that didn't provide the flexibility for deploying a complex app, and we want something as simple as this: when somebody creates a PR, we wait for the CNCF CI bot, which responds and says, hey, here are the results, the pipeline, and the URL. I'm going to do that really quick as a TL;DR so we can see how far I would like to take this. I think what I might do is have you share while I start a new PR on our branch. So I'm going in, and I'll drop a link to this real quick. I'm on ticket 121, I'm going to create a new PR, and I'm going to drop the PR in the channel once I've done it in the system.
"Demo PR for CNCF CI working group", and I'm going to create the pull request. Now I'm going to drop that PR into our channel for everyone; I just created that one. There are some fun things around the automation. This includes some of the things the Kubernetes community is using, including Prow. So automatically, moments after I created the PR, the CNCF CI bot is going: you don't have a release-note block, and we've got a process around that; welcome to the community. If I were committing to this repo for the first time, it would say, welcome, thank you for your first commit, that was delightful, here are some of the other things you might need to know. It also added a note about the release note, like I said. And we have some automation around who within this repo we should contact, assigning an approver and a reviewer. I won't go into the specifics of that; I think that's a whole other CI working group meeting. But all of this tooling, developed by the Kubernetes community in SIG Testing, which we've spent a lot of time with, I think has a great benefit for the broader CNCF and CI working group community. The last thing the CNCF CI bot did here was add a size label; it says this is a fairly large change.

All of these failed checks are part of Netlify, which we were using. Our application has a back end and a front end, so deploying static content to Netlify wasn't quite going to work. We're exploring some options to remove that, but in the interim we went ahead and added this other approach using GitLab. So you can see at the bottom, once all of these go, we have a passed CI pipeline, and I'll click on that to get the details of the pipeline itself. You can see we had a build and then we have a review, but it hasn't gotten to the next step yet because we're still in the review phase. If I click on the review... oh, sorry, you're not following me. So click on the details next to the pipeline there, on the right, at the bottom. There's the green checkmark, the only green one, yep. And then the review in the middle, so click on that. And at the bottom there's a URL; it says URL there, an apisnoop cncf.ci address, and it's up. You'll have to copy that for a moment, and we'll show another flow with that later. This new branch adds an ability to filter stuff; we won't go into actually doing that, because that's an APISnoop thing and this is a CI session. So going back to our shared presentation window: we created a PR, we waited for the CNCF CI bot, and we got a deployment URL. That's the TL;DR.

I'm going to go ahead and close that and back out to our larger overview and dig into this just a little bit. If I click on these on my side, they open up into a URL; I need to find a way to publish this quickly. This goes through to the settings for the mirroring. It may be a bit quicker if we don't go to the URL, so I'll just talk about it: we do some mirroring, and for each commit there's a set of branches, pipelines, and jobs that flow through, and that ends up as a specific environment and review branch. Those links will be there to follow when I publish it later. For a specific commit... we're actually on that specific commit right now. So, Taylor, if you'll go back to the pipeline that you had up, just to follow through, there's a commit over there. If you'll click on the commit... click something, yep, on the commit. This is where you can see the parts of the pipeline.
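For context on the build-then-review pipeline being shown, this flow (build a container per commit, push it to the GitLab registry, then deploy a per-branch review environment with its own URL) is typically declared in a .gitlab-ci.yml along these lines; this is a generic, hedged sketch using GitLab's standard predefined variables, not the actual APISnoop pipeline definition.

```bash
# Generic sketch of a GitLab review-app pipeline, written out for illustration;
# not the actual APISnoop .gitlab-ci.yml. CI_REGISTRY*, CI_COMMIT_SHORT_SHA, and
# CI_COMMIT_REF_SLUG are standard GitLab CI predefined variables.
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - review

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

review:
  stage: review
  script:
    # chart path and release naming below are placeholders
    - helm upgrade --install "review-$CI_COMMIT_REF_SLUG" ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.apisnoop.cncf.ci
EOF
```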
That's the commit, the pipeline, and the build and deploy jobs, where there are two jobs. If you mouse over the first and second, those are the stages there; it says it built and it passed. If you click on the build real quick, we'll just take a look: these are the jobs, this is the specific example, and underneath that build job you'll see the Docker container getting built and pushed to a registry. If you go over to the registry on the left-hand side, mouse over it, underneath CI/CD, yep, there will be a specific registry entry pushed for this ticket. And if you'll click on the chevron to the left of the ticket 121 entry, a little lower, yep, that one (the chevron is the little greater-than symbol), there you go, you can see the different containers we've pushed while doing these different releases. Now, the next thing is... you're not logged in, so you can't see these things, but we actually have some environments, and you get to a console on them. I can quickly add you, or maybe we won't go through that this time, but that's the pipelines and setup, so I'm going to close that out. Are there any questions before I go any further?

So I'll go through what's happening within Kubernetes. Most of this is going to happen inside our little shared window area in the browser where I'm running my tmate session, so I'm going to focus on the Kubernetes area. We can see that there are some namespaces here: if I run kubectl get namespaces, you can see we have the APISnoop CI namespace, set up a day ago, and all the rest of the cluster has been up for a while. Those are the different namespaces we have. The gitlab-managed-apps namespace is where all of the GitLab-managed stuff goes, and the APISnoop CI namespace is where our deployments and reviews go. Inside the gitlab-managed-apps namespace we have a bunch of containers, and we'll go look at them real quick: there's cert-manager, an ingress controller, Prometheus, the runner (which runs the jobs), and Tiller, which is the part of Helm that deploys the applications. We can inspect the namespace further by getting all of the descriptions, so you can see it's got some pods, some services, and, specifically, an external IP. We have a wildcard DNS entry, so anything under apisnoop.cncf.ci gets redirected to this particular ingress controller, which uses cert-manager and other things to create our SSL certs. I'll go ahead and close out this part, the gitlab-managed-apps namespace.

Then there's our own namespace for APISnoop CI. We have our different production, review, and staging deployments, and the ReplicaSets that go with them, and we can quickly get down inside our pods. This is the set of pods we have; there are now a few more running, including the one for the ticket we just created. I won't go through the details of that pod, for time. If we widen out our view to go to the next step, digging into the deployment, these are the URLs. You might be able to click on these in your browser, I'm not sure, but I think you'll need permission: it's based on whether you have write access to the repo, to that particular branch, so I need to do a little work on how to open that up. Within this, we can get a terminal and also get in and see the artifacts for it. Underneath here, this maps our deployments to a pod, and since we have a particular pod, we can start executing commands on it.
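The cluster inspection just described maps onto ordinary kubectl commands. A minimal sketch follows; gitlab-managed-apps is GitLab's standard name for its managed-apps namespace, while the application namespace and pod names are placeholders for the ones shown on the call.

```bash
# Illustrative kubectl session; the application namespace and pod names are placeholders.
kubectl get namespaces

# GitLab's managed apps (cert-manager, ingress, Prometheus, the runner, Tiller) live here;
# this shows its pods and services, including the ingress controller's external IP that
# the wildcard *.apisnoop.cncf.ci DNS entry points at.
kubectl get all -n gitlab-managed-apps
kubectl get svc -n gitlab-managed-apps -o wide

# The application namespace holds the production, staging, and per-ticket review deployments.
kubectl get deployments,replicasets,pods -n apisnoop-ci

# With a specific review pod identified, commands can be run inside it, or a shell opened.
kubectl -n apisnoop-ci exec -it <review-pod-name> -- ps aux
kubectl -n apisnoop-ci exec -it <review-pod-name> -- sh
```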
So this is just a kubectl command that looks inside the namespace for our CI stuff and goes inside the particular review deployment pod, and we're looking at the process data. I'll just update this to not sort it so you can see what this is; this is a lot. Those are the APISnoop JSON files that get downloaded and processed, and we needed to find a way to combine the generating and processing of the data in our app, which is one of the main reasons we needed some type of flow that actually deployed something we could take a look at per commit, without everybody having to do something fancy. And here are the details for that particular pod, if you wanted to go into them. The last thing is this exec shell, which will actually give us a shell on that pod. Let's see if we can execute that. I think our other command was to cd to where the data is. And now you can see we're inside that production, or rather this review ticket, pod. I think that's it; for now I just wanted to give a quick overview of the details of what we're doing and get some initial feedback. I think it might be useful for other CNCF projects that are looking for something like this, if they have a product that needs to be looked at via the web, so that commits that come in trigger a deployment. Separately, this is just one little aspect of what GitLab does; I'm actually really interested in seeing Prow and some type of CNCF CI bot interacting with our community in the way that has been used with extreme success in the Kubernetes community. If you want to try this out, go ahead and submit a PR against APISnoop and we'll see how we can automatically get that connected so you can get to those URLs yourself. That's all I have.

Thanks, Chris. Today I was notified of another upcoming event: the FD.io Mini Summit will be a co-located event at KubeCon + CloudNativeCon Europe. The CFP is open and closes next Friday, April 5th, and the link is here in the notes. The next CI working group meeting will be on Tuesday, April 23rd. Please subscribe to the CNCF CI public mailing list; I'll drop the link here in the chat. And if you're on Slack, please join the CNCF CI channel. Is there any additional feedback, suggestions, questions, or comments? Sounds great. Thanks, everyone, for joining the CNCF CI working group call. We will meet again on the fourth Tuesday of April. Thanks. Cheers.