Hello. Hello, hello. Hello, Chris. Hey, Taylor. Oh, hey, Taylor. We'll get started about five past. I put the slides in the chat. We'll get started in two minutes. I see Camille is here. Is Brian Grant here? Don't know if he's dialed in today. Hey, Chris. Good morning. Hey, Ken. How's it going? It's going well. We'll get started in a minute. Busy, busy morning already. Oh, no worries. I apologize. Yeah, lots of people are either stuck in board meetings or have travel conflicts, so we're a little bit light on TOC members today, but we obviously appreciate your presence. Alrighty. Is Jonathan or Solomon on the call? I don't see them. All right, cool. I'll get started.

So yeah, we have a lot of TOC members who are currently traveling or have conflicts that have come up, so we're a little light today, but that's no reason for us not to hold the meeting. Since Alexis is not here, I will drive the meeting this time around. In terms of the agenda, I'm going to go through an update on the election schedule and a short discussion of the working group process, we have a community presentation from the cross-cloud folks, and then we'll open it up to questions from the community.

Slide six. We are now in the phase of the election where the TOC is voting on the TOC-selected seats. The vote will close at the end of day April 5th, Pacific time. We have ten fantastic, qualified candidates, so I look forward to the seven TOC members getting their votes in for those two TOC-selected seats. Does anyone in the community have questions about that process or timing? Cool. I look forward to making that announcement.

Slide seven, the project review backlog. We try to link everything off the spreadsheets that are linked here, but the main thing I'd like to do is publicly shame some TOC members into getting their votes in for the Linkerd sandbox-to-incubation vote. So far Brian Grant has voted, and I'm waiting for the other TOC members to send their votes in so we can come to a conclusion on that one. Fluentd and Prometheus are in the queue to be voted on for graduation; there are just a couple of small things they need to do to fulfill their graduation requirements before we formally call the graduation. Does anyone have any questions here? Awesome. Moving on.

Slide eight, just a quick update. We have a couple of projects, Telepresence and OpenMessaging, requesting to present to the TOC for acceptance into the sandbox. I'd like to call on our TOC contributors and community members to take a look at these issues before they present, and feel free to make comments there or ask the teams questions before they formally present to the TOC and the wider community.

Moving on to slide nine. A few folks have approached the TOC, and me in particular, about setting up some new working groups. We've essentially had an informal process for doing this, and I'm trying to codify a more formal process for proposing them, similar to how we handle projects: people send a pull request with their proposal, we get community discussion on GitHub, and they give a presentation.
So I'd ask the TOC and the community to take a look at that pull request, and hopefully by the next TOC meeting we can get it solidified and move forward, so folks from the community can propose new working groups they're interested in exploring. Does anyone have questions on that, from the TOC or the community? We can hear you, Chris. We just don't have questions. No worries. I like a very non-controversial TOC meeting. This is good news to me. Thanks. All right. Yeah, so look forward to that, and please take a look at it. This is exactly what we're doing anyway for working groups; I just need a mechanism that gives people who are interested in proposing new ones an avenue for community discussion. There's some really cool stuff coming in the pipeline, and people are really excited.

All right. Next up, I'd like to give a bit of an update from the CNCF CI working group and the cross-cloud community. Let's see, I don't know if Denver or Lucina is going to be driving, but I think you're on the call, so we'd love to hear from you. Hello. Hello. Hey, Lucina. I don't hear you. Okay, I hear someone now. Hi. Hey, Taylor. How's it going? Good. All right, it's now off to you. Slide 10, so go for it. Sounds good. I'm going to go ahead and share my screen. Awesome, go for it. Does that look good, y'all? Yep.

Okay. So it's been a good while since we presented to the TOC. Some of y'all may have seen some updates earlier, but we'll do an overview of everything and see where we're at now. This is the cross-cloud CI project, and here's the cross-cloud CI team. Some of the folks, Denver and Lucina, are on the call, and I think Watson as well.

A quick overview of why we're here. As everyone knows, the CNCF is growing like wild: lots of new projects, new clouds, partners, everything else. We'd like to see the projects working well together, and to validate that they look great on all the cloud providers and support their cloud native features. So this project's goal is to test the projects on the cloud providers, and the interoperability between the projects themselves. The backend testing system is composed of a build stage, a cross-cloud provisioning stage, and a cross-project stage. There's also a status repository server with an API, and a dashboard to show those results. We want to target all of the CNCF projects. Right now we have CoreDNS, Prometheus, Fluentd, and Linkerd, as well as, of course, Kubernetes. We're also trying to target non-CNCF projects: we've added the Linux Foundation project ONAP, specifically the service orchestrator. On the clouds, we're targeting public, private, and bare metal. For bare metal we're using Packet, and for private clouds we've recently added OpenStack.

I'm going to go ahead and start a live demo so we can see how some of this works. This is the production dashboard, updated daily at 3 a.m. Eastern. It shows the different stages of the build system, the deployment, the e2e testing, and the Kubernetes provisioning, and the status of each. It tests the stable as well as the head release of the projects; in later phases, on some other screens, we may show some of the other stable releases. Right now this is an overview screen, and we have Kubernetes, Prometheus, CoreDNS, Fluentd, Linkerd, and ONAP. On the cloud providers you can see AWS, Azure, Google Cloud, IBM Cloud, bare metal, and OpenStack. If you click through, you can go out to the commits on GitHub for the various projects on the build side.
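[To make the pipeline description above a little more concrete, here is a minimal illustrative sketch in Python of the stages and the per-project, per-cloud result a dashboard cell would render. The stage names, status values, and field names are assumptions based on the description in this call, not the project's actual data model.]

```python
from dataclasses import dataclass
from enum import Enum

# The three backend stages described in the presentation:
# build, cross-cloud provisioning, and the cross-project app-deploy/e2e stage.
class Stage(Enum):
    BUILD = "build"
    PROVISION = "provision"            # bring up Kubernetes on a given cloud
    APP_DEPLOY_E2E = "app-deploy-e2e"  # deploy the project via Helm, run e2e tests

class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    RUNNING = "running"

@dataclass
class Result:
    project: str   # e.g. "prometheus"
    release: str   # "stable" or "head", as tested by the dashboard
    cloud: str     # e.g. "aws", "azure", "gcp", "ibm", "packet", "openstack"
    stage: Stage
    status: Status

# One hypothetical cell of the overview dashboard:
cell = Result("prometheus", "head", "aws", Stage.APP_DEPLOY_E2E, Status.RUNNING)
print(f"{cell.project}@{cell.release} on {cell.cloud}: "
      f"{cell.stage.value} = {cell.status.value}")
```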
We can go through to see the backend build system for ONAP. This actually ties in with their CI system; they're using Jenkins, and Nexus for their container repository. I'm going to go ahead and kick off a deploy so we can see this running. We did the builds and the Kubernetes provisioning earlier, because that can take anywhere from half an hour to an hour to run through everything, including end-to-end tests, and we don't want to wait that long. I'm just going to kick off the app deployment and e2e phase. This calls out to the API and triggers it. What this is going to do is start the pipelines that will deploy each of the apps to Kubernetes across every cloud; we can see it here. There's quite a bit that's going to run through. Right now it's fetching some of the software, Helm, and the container that actually does the deploy is ready. Then it moves on to the app deploy phase here. We can see it starting to run across each of these for Prometheus. This will keep going as each app gets deployed with Helm onto Kubernetes. After that, it will deploy and run the end-to-end tests, which we try to source from upstream, working with the projects. Here's a quick overview of what we were just looking at, all the different parts of the dashboard.

I want to go through a quick timeline of where we've been. In February 2017 the project started, and in June we had our first demo to the CI working group; that showed primarily the CI system and how the different pipelines work to build, provision, and deploy. In August we demoed to the TOC and showed some of the designs for the dashboard, as well as the status API and status repository. We got the green light on the design in September. In December we had some sneak peeks at KubeCon, and we released the dashboard in January 2018 along with the status repository, as well as a reworking of the back end so the different components can be used independently: the provisioning can be used on its own, as can the app deployment and e2e phase. In March we had quite a few new releases, including adding new projects and new clouds like IBM Cloud. We added OpenStack, as well as the ONAP project, which was our first external integration with the CI system outside of GitLab. That was pretty great.

What's next? We're going to be adding new projects and new clouds. On the project side, Envoy and Jaeger are coming next, and then Notary and some others. On clouds, Oracle is the most likely target for April; we're hoping to have that out and then get to Huawei and Alibaba. Eventually we're going to add ARM support, starting with Kubernetes on ARM and going from there. Inside the project itself, one of the updates we'll be doing is automating the updates of project releases from upstream, testing those, and pushing them out to the dashboard; right now that process takes a couple of manual steps. We're also planning to have the history of all the tests, artifacts, and everything else available in the status repository API so it can be queried externally. You could look up, say, which version of Prometheus worked with a given version of Kubernetes, and use that externally to the dashboard. This will also help us roll back to previous releases: if we had a failure on 1.9.6 on a deploy, or maybe ONAP wasn't running, we could point that out in the dashboard with tooltips or whatever else. That'll tie in with some of the next screens we're doing.
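[As a sketch of the kind of external query this status history would enable once released: the endpoint URL, parameters, and response shape below are hypothetical, since the history API was described here as still in planning.]

```python
import requests

# Hypothetical status-repository endpoint, for illustration only;
# the real API, parameters, and fields may differ.
STATUS_API = "https://cncf.ci/api/v1/results"

def compatible_releases(project: str, kubernetes_release: str) -> list[str]:
    """Return the project releases that passed e2e against a given Kubernetes release."""
    resp = requests.get(
        STATUS_API,
        params={
            "project": project,
            "kubernetes": kubernetes_release,
            "status": "success",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [result["release"] for result in resp.json()]

# e.g. which Prometheus releases passed on Kubernetes 1.9.6?
# print(compatible_releases("prometheus", "1.9.6"))
```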
Probably on a Kubernetes project screen, with that filter, we may show 1.8 and other stable releases.

On the collaboration side, working with other groups: we're working with the OpenCI community that came about from a face-to-face before ONS this past week, and we're working on a collaborative white paper for how that should look. We've also been working on an RFC for a pipeline messaging protocol; we have the link here if anyone wants to check that out. We're going to be working with VMware and IBM Cloud, talking about their provisioning. We're looking at Spinnaker as an option for some of the work we currently do with GitLab, now that the cross-cloud system has a layer between the backend platform, which has the workers and runners, and the projects themselves with their end-to-end tests. Prometheus is working on a lot of their end-to-end tests, including performance; we're trying to tie in, see where we can complement them directly, and reuse those, for CoreDNS as well. We're also looking forward to talking about GitOps, how we use it, where we could improve, and how to tie that into the project.

The next CI working group meeting is April 10th. We're planning to be at KubeCon + CloudNativeCon in Copenhagen. We're going to have an intro as well as a deep dive for the project and how folks can get started, whether you're a project or a cloud provider that wants to talk to us. OpenStack was a really great example: Chris Hoge, whom I want to call out for helping, as well as Melvin, worked with us to add OpenStack support. We'll be there to show how folks can do that and contribute directly. Here's where you can join the CI working group mailing list; if you have any more questions, feel free to jump on there. And if you check out the cross-cloud CI GitHub, we've been updating the READMEs. There's an overview in the main one and in the different components, like cross-cloud and cross-project. We're trying to update those READMEs so you can use the components directly, independent of the cross-cloud CI project. That's our update, a pretty big one since last time. Any questions?

Hi, I have a question for Malibor. Can you hear me? Yes, I can. Would it be suitable for testing certified Kubernetes distributions? I'm thinking of kube-spawn, for example. I couldn't understand that last part. Would it be okay to use that for testing certified Kubernetes distributions? I'm thinking of kube-spawn, which is a tool to test Kubernetes on a developer's laptop. We're trying to use the Kubernetes conformance tests for our conformance on all the cloud providers. There is a conformance SIG, and we're trying to follow some of the things happening there, like feature flags that will let us test various features on cloud providers. It could be used. Dan, are you on the call? Hi, I am. CNCF is both funding the cross-cloud work and running the Certified Kubernetes program. The quick context I'll give is that on Certified Kubernetes, the clever part of the program is that although each provider needs to run their own conformance tests and upload the logs in order to certify, the program also has a crowdsourcing element: any future user or customer of that distribution or platform can run those same tests, evaluate it, and report back if some API is no longer conformant. I'm not sure there's a CI role for that, and we wouldn't necessarily want to run a ton of proprietary software and have to deal with the inevitable bugs and such.
We're up to 52 Certified Kubernetes offerings out there, so that would be a ton of effort. The part I am potentially interested in expanding into: there's this nascent effort to separate out the different cloud aspects of Kubernetes into their own out-of-tree repos, and then there's some question of, well, how do you ensure that that code is maintained over time and keeps running? I would love to see the cross-cloud CI work used to demonstrate the official out-of-tree version for each of the new clouds being developed, and to show that the head of Kubernetes is not breaking things on it. But that really is a super new effort, and it's just a proposal. The code's open source, it's available to the cloud providers, and it's something I mentioned on the SIG Architecture call, but we'll see if that actually goes forward or not. Albin, do you want to maybe specify your question a little more, or does that address it? Yes, that's what I was trying to ask. Thanks. Certainly.

I would love to just hear other feedback on the cross-cloud stuff. Are there people on the call who this is new to, who think this is either kind of cool or might be interested in looking at the Terraform recipes we're using? Again, everything's under Apache 2.0. Or do folks see this as duplicative of other projects out there? It's been a pretty significant investment from CNCF, and it continues to be, so we're very interested in feedback on whether we're heading in the right direction or not.

I have another question. Which Linux distribution does it use to test Kubernetes? Does it use something like Fedora or Ubuntu, or something completely different? Hey, this is Denver from the cross-cloud team. At the moment it uses CoreOS for the underlying OS, but all the setup of getting everything onto the OS is done via cloud-init. Theoretically, we can replace the OS with Debian or CentOS or any cloud-init-capable OS.

So let's say someone wanted to... this is Bob Wise, by the way. It's really looking great; this has been a great effort. Bob actually should claim credit as the godfather of this, because he's the one who asked for a dashboard 18 months ago. Bob, I think it took a little longer than we were expecting, but... Yeah, I know. This is really cool. Can you comment maybe a little on... Some of these questions, I think, are of the form: hey, we'd like to see something happen, like add some additional thing here, and I think you're understandably concerned about combinatorial explosion. But let's say someone did want to, I don't know, add tests for a different distro, as the other caller was mentioning. Is there a way for someone to volunteer effort or resources to make that happen? Would you be open to that? Yeah, definitely. Changes should be pretty easy to integrate; it just expands the matrix. It would just be a pull request into the code that deals with the provisioning. The only thing that might take a little effort, which we haven't quite gotten to yet, is showing matrices on the dashboard: it deployed on AWS with this or with that, and that matrix gets really big, especially once you add networking options like Weave or Calico. The matrix just gets bigger and bigger and bigger, and you need to figure out a way to deal with that at some point, because it'd be cool to show it. Right. I think there are likely some combinatorial issues even just within the projects.
For example, Kubernetes with containerd and Kubernetes with Docker, just to pick an obvious example off the top of my head. Yeah, definitely. And Bob, yeah. I mean, if there's demand for it, we can add it to the main dashboard, but of course anybody can also fork this and run their own versions with their own combinations. I do see the value, though, in having the official version support more things. So Bob, we are planning on having some of those Docker and other runtime choices, and we'll be showing them; I think some of those may go onto a sub-page or a filtered view, to show the others versus what's in the main view. And as Dan said, anyone can run the dashboard and the status repository, and any of the components can be split off. The dashboard is actually all configuration-driven, so if you spin up the code, you can point it at whatever your configuration is and say, I only want these projects shown, whatever that may be. So someone could use it just for their own project, which would be fine. Right. I mean, I understand the possible utility of using this for some other kind of testing, but I think, for the sake of the community, I'd rather see more people contribute to this set of tests than split it off and run it separately. It feels like that would have more value in the long run, as long as we can support it. Absolutely. I think when we release the status API with the history, it'll help with having the different scenarios, because then we could have different tests running that may not be shown on the main overview screen, and people could access them either directly via the API or possibly through another view, which may not be the dashboard; we may have views or some type of big list for the matrix of different scenarios and all the different testing. As far as contributing tests: having any project work on end-to-end tests that run outside of their build system, so end-user tests versus CI-internal-only, really helps. CoreDNS is a great example: they're trying to make sure all of their tests will run independently for an end user, without depending on a specialized CI environment. That's how we've tried to do the test examples, and we're trying to help. So any project that can do that, or if someone wanted to contribute to, say, Fluentd and help provide those end-user integration tests, that would be great; a good way to help. And there are a lot of new projects we'll be adding, so if people contribute tests to any of those other CNCF projects, then by the time we get there, if the tests are done, that would be wonderful.

I know the point of this isn't scale testing, but just as a matter of curiosity, what's the sort of server scale these tests are being run against? At the moment, it's three multi-master nodes, and we're only using one worker, because of the way we had to shrink things for how ONAP is deployed at the moment. ONAP isn't able to use a single volume; it's got to be all on one host. So it was either set up NFS across all the nodes or just use one worker, and we're on one worker at the moment, until the Beijing release comes out for that. Do all the tests even work with only one worker node? I thought there were some that required multiple nodes. There should be some that require multiple nodes.
I think the only one that would give us grief at the moment would be Kubernetes, but we're just running a subset of the tests, so we don't hit that. If we wanted to do some density tests, though, we'd probably get hit by it. I'm not 100% sure on this, but I'm 80% sure that the Kubernetes conformance tests will pass running on a single node. Okay, I'll try to get that fixed. I agree that DaemonSets would be an obvious example where that seems... Well, networking seems like an obvious example. Yeah, and we should define what that should look like, Brian. I don't know whether we, as a TOC, should define what that minimum set of compliance is... SIG Architecture owns that in Kubernetes. Okay, I'll defer to that as far as I know. At the moment, there are a lot of areas where the tests are... We started with the tests we had, so we're looking at places where the tests need to be expanded, and this looks like it's definitely one of those areas.

So I had a question, which is: have the folks working on this... I'm poking around the GitHub repo, looking at the Terraform. Have you talked to the Kubernetes Cluster Lifecycle special interest group about what you're doing? Because I think reinventing how to deploy Kubernetes probably duplicates work the Cluster Lifecycle SIG is doing. I don't believe we've had a call with Cluster Lifecycle yet. We did have a few calls with Aaron from the infrastructure SIG. That would be the Testing SIG, I would guess. Yeah. Brian, Aaron and also Tim Hockin have both looked at this, and Jago and a number of others, but we're not wedded to the Terraform approach we're taking. Yeah, I just think it's going to be hard to... We would love to have that. I guess... what's the specific project or code out of SIG Cluster Lifecycle that you would compare this to? In terms of actually deploying for the first time, spinning up, spinning down, that kind of thing? Well, the Terraform bits we call provisioning, and then actually bringing up Kubernetes on it we call bootstrapping. There are a number of projects; I think the two main ones would be the Cluster API effort and kops. Right. We looked at kops and thought Terraform was going to be more general purpose, but essentially we're willing to transition off all this Terraform onto whatever the official way of doing it is, if there's a recommendation. So I think starting with a presentation to Cluster Lifecycle would be a good next step. Yeah, I think that would be a good next step. We're discussing what our reference implementation should be for running our own end-to-end tests as we replace the existing deprecated kube-up mechanism. So yeah, I just think if you're maintaining your own deployment on several different cloud providers, that's going to be hard to keep up as the system continues to evolve. Great, we'll take that as an action item. Okay.

Hi, this is Aaron Boyd. I had a quick question. With the Kubernetes testing on the different clouds, is that also testing persistent storage? I know I've talked with a couple of you in the past about that component. I don't know how critical it is, but I was curious whether it's integrated. At the moment, we're just running the conformance subset and whatever that covers. I believe conformance tests some storage, but it's not extensive. We'd like to get to a point where we can profile it and say: on this cluster, on AWS, do a subset of the AWS integrations and include storage. Right, okay, thanks.
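[To make the test-matrix concern from a few exchanges back concrete: each added dimension multiplies the number of pipeline runs. A small illustrative Python sketch; the dimension values are examples drawn from this discussion, not the project's actual configuration.]

```python
from itertools import product

# Example dimensions; each added dimension multiplies the matrix size.
clouds = ["aws", "azure", "gcp", "ibm", "packet", "openstack"]
projects = ["kubernetes", "prometheus", "coredns", "fluentd", "linkerd", "onap"]
releases = ["stable", "head"]
runtimes = ["docker", "containerd"]  # hypothetical added dimension
cnis = ["weave", "calico"]           # hypothetical added dimension

base = list(product(clouds, projects, releases))
expanded = list(product(clouds, projects, releases, runtimes, cnis))

print(len(base))      # 72 combinations with today's dimensions
print(len(expanded))  # 288 once runtime and CNI choices are added
```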
Whoever's speaking is chopping up for me. Yeah, it's incomprehensible. Maybe you can just type it into the chat window instead and we'll read it. Yep, you're still chopping up, so please type it into the chat. I don't know who it is, so that's the other problem. Well, I guess the question is from Albin: is this only building daily tests, or could it be plugged into a pull request on GitHub? That's the question. At the moment, this is a nightly run, so it goes slowly. Yeah, so not on every pull request. Okay, thanks. Any other questions for the cross-cloud folks? All righty, cool. Thank you, Denver, Taylor, and everyone involved in the project. It's great to see it progress. You're welcome. Thanks for all the great feedback. Appreciate it. Cool.

All right, so moving on. Slide 27. Ken, do you want to give a little update on where you are with the reference architecture? I know you're planning your first official meeting soon, so it'd be good to get people up to speed and tell them how to get involved with the effort. No problem, Chris. We are organizing a meeting for next week for the reference architecture work. There's a site in GitHub with a location where you can sign up for that group, so please do, and I'm looking forward to meeting with everyone next Tuesday. Can you also send out a note to the mailing list, Ken, on that one? I can, just to notify folks. Thanks. Yeah, absolutely. Any questions for Ken on reference architecture 2.0? Alrighty.

So moving on, next up, slide 28. As you're aware, CNCF helps put on events for our community and projects. We have a big event coming up in Copenhagen in early May, May 2nd to 4th. I'll just remind people to please book your hotels early; we are going to sell out that event, and hotel space is going to be a bit of a challenge, so the earlier the better. We also have two other events we're hosting this year: our first event in China, and an event in North America, in Seattle, in December. Sponsorships are available for Shanghai and Seattle, and we'd love to have a huge presence in China given that it's our first event there, so please submit talks and consider sponsoring. Dan, do you have any other thoughts on the call to action here? Well, just that the call for papers is going to open at KubeCon, so you can begin thinking about it. We're going to have an option to submit your talks to Shanghai, Seattle, or both. Awesome.

Alrighty, moving on. Slide 29. Just a reminder, our next meeting will be April 17th. We'll have two community presentations: one from the Telepresence project and one for a proposed working group around security called SAFE. We will also, at that time, have two new TOC members, so look forward to that announcement. Other than that, any other questions? Slide 30 is basically open Q&A. We have about 15 or so minutes left, so if anyone has questions for the TOC, the wider community, or CNCF staff, we'd love to hear them. Chris, I have a question. Sure. Dan Shaw, I'm actually from the SAFE group, and I was just wondering, seeing the working group process kicking off, do you have any expectations about how long that process is going to take to land? I see the discussion is getting going there. Yeah, it's hard to say. My assumption is to just follow the proposed template. I actually have a meeting with some other folks from the SAFE group today, and I'll guide them through the process. So in general, you have the slot to present to the TOC next week, or sorry, in two weeks.
And we'll follow the template as laid out in that PR; I don't think we're going to veer very far from what's there. Makes sense. Cool. Thanks, Dan. Any other questions out there? I see Brian and Bob arguing about SIG Architecture and what it owns. Clarifying. Clarifying. Any other questions from the community for staff or the TOC? Well, if there are no other questions, I'll give everyone 15 minutes back, with just a friendly reminder for the TOC members on the call to vote on the Linkerd sandbox-to-incubation proposal. I think Brian Grant is the only one who has put forth a vote so far, so, Camille, please consider throwing down a vote by the end of this week. I hope to see many of you in a couple of weeks, when the Telepresence project and the SAFE WG will be presenting, and then hopefully we'll see many of you in Copenhagen in May. So enjoy the rest of your day, and here's 15 minutes back. Cool. Take care, everyone. You too. Thanks. Thanks. Bye-bye. Bye.