Okay. Great. Welcome, everyone, to the CNCF CI Working Group monthly meeting. Today is Tuesday, July 23rd, and the meeting is held monthly on the fourth Tuesday, starting at 12 p.m. Pacific time. This meeting is recorded by CNCF and will be published to the CNCF YouTube channel. The link to past meeting recordings is in the monthly meeting agenda and notes. Please add your name and contact information to the monthly meeting notes; I'll share that in the Zoom chat. And if anyone would like to propose items for discussion, please add those to the meeting notes as well, as I believe we'll have time. So far on the agenda, we have a link to the shared slides with the CNCF.ci status update. We'll go over some upcoming events, and we've got an introduction and roadmap from Click2Cloud. If there are any other items you'd like to add, please add them at the bullet point under Click2Cloud. Upcoming events on the radar: on September 17th, there will be a GitLab Commit North America summit in Brooklyn, New York; the CFP for that event closed last Friday, I believe. Feel free to click through in the meeting notes to learn more. The Open Networking Summit Europe schedule was just announced today by the Linux Foundation. That event is held in Antwerp, Belgium, September 23rd through 25th, and Taylor Carpenter from Vulk Cooperative will be part of a tutorial about driving telco performance with the Cloud Native Network Function (CNF) Testbed. That's a preregistration-required event, so please click through to learn more and be added to the list. In October, there will be another GitLab Commit in Europe, in London. Also at the end of October is the Open Source Summit in France; the schedule for that will be announced August 8th. I believe some folks from GitLab will be giving keynotes and possibly have some CFPs accepted. In November is KubeCon + CloudNativeCon North America in San Diego. We've submitted a panel along with GitLab and Arm on lessons learned from building CI/CD pipelines for cloud native projects. We'll collaborate with Priyanka and Matt on some of the things we've learned working on CNCF CI with the CNCF-hosted projects. That schedule will be announced Thursday, September 5th, so we will know in a couple of months whether our panel idea is accepted. There are some co-located events around KubeCon. One of them is EnvoyCon; the CFP is currently closed, but I believe there are still spots available, so if you'd like to attend that co-located event, click through. And there may be other co-located events to be announced. All right, I'll jump into the shared slides, and we can go through a quick intro of the CNCF CI project: what has been completed since our last meeting, in release v2.5.0, what's in progress, and what is next. The CNCF CI status dashboard is at cncf.ci, and it refreshes every day at 3 a.m. Eastern time. At the top of this — actually, I'll go to cncf.ci. The dashboard tests the CNCF-hosted projects on several test environments, running stable and head releases of Kubernetes on x86 and on ARM, on bare metal at Packet. The test environment shows the status of the provisioning stage, and then the projects are listed by status within CNCF: we've got graduated projects, we've got incubating projects, and we've got a Linux Foundation project, ONAP, currently on the dashboard. Let's see. So we can change the test environment. So this is showing all the projects running their builds and deploys on stable Kubernetes on x86.
I can also switch it to ARM and see what projects are building and deploying on ARM — which ones support ARM at this moment. And the same is true for Kubernetes head on x86 and on ARM. The CNCF CI project consists of the CI testing system, a status repository server, and that user-facing dashboard at cncf.ci. The testing system validates the build and deployment of each project for stable and head on x86 and ARM. The testing system will be able to reuse existing artifacts from a project's preferred CI system, and it can also generate new build artifacts. The status repo collects those test results, and the dashboard displays them. The goals of the dashboard are to complement the CNCF landscape and trail map and to promote CNCF-hosted projects, to help attract more projects to join the CNCF. We're also demonstrating the use of cloud native technologies, and we'd like to get feedback from cloud native end users and projects. And this provides a third-party, unbiased validation of the builds, deploys, and end-to-end tests for the CNCF graduated and incubating projects currently. The key features of the current iteration of the CNCF CI dashboard are that it's project-centric — we're highlighting and validating the hosted graduated and incubating projects, so we're putting the projects front and center — and that we wish to increase collaboration with the CNCF project maintainers. So, as we'll see in what is in progress, we're refactoring how builds, deploys, and end-to-end tests are currently retrieved for the dashboard, and we will be integrating with each project's own CI system. And it's also agnostic testing: we're validating the projects in configurable test environments. Those configurations are currently the Kubernetes release and the architecture, and we're running on bare metal at Packet — we could run on another bare metal provider, for example. So that's a quick intro, and here's what we've been working on in the past month. We recently released v2.5.0 on July 9th. We've implemented support for different CNCF CI environments for each project configuration. We also changed where the release details are retrieved for CNCF CI, to allow for more collaboration and external contributions. And we've added subheaders to display the projects by graduated, incubating, and Linux Foundation. To dig in a little about changing where release details are retrieved, we can take a look at ticket 103, which has been closed. The goal is to change where release details are added and updated, to increase collaboration with the CNCF project maintainers and allow them to add and modify the release details that are shown on the cncf.ci dashboard. We've updated the contributing markdown file, which is incrementally updated as each part of the dashboard is refactored to allow external contributions. We had previously completed updating how the project details are added, and then we built on that, using the same per-project configuration repo in the crosscloudci organization. The release details concern the project's latest stable release and the project's latest commit on the master branch. A CNCF project maintainer can update the release details by going to crosscloudci on GitHub, opening the appropriate configuration repo for their project, opening the cncfci.yaml file, and editing it to update the stable ref and the head ref as needed. So we can take a look at CoreDNS as an example.
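As a rough sketch of what such a per-project file might contain — the field names here are illustrative assumptions based on this description, not copied from the actual repo:

```yaml
# Hypothetical sketch of a per-project configuration file (cncfci.yaml).
# Field names are assumptions; see the project's actual configuration repo.
display_name: CoreDNS
sub_title: dns                                   # the caption shown under the project name
project_url: https://github.com/coredns/coredns # where the dashboard's project link points
stable_ref: v1.5.2   # latest stable release tag; maintainers bump this on each release
head_ref: master     # branch tracked for head builds
```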
I'm here at github.com/crosscloudci/coredns-configuration, looking at the cncfci.yaml file on the master branch. And I see that, as of yesterday, their current stable release was 1.5.2, and their head ref is called master. Whenever they release 1.5.3, for example, they can type in 1.5.3 and create a pull request, and then a maintainer of the CNCF CI project would review it and move it forward. And so we have refactored column one, the project column, where a project maintainer can update the logo, name, and caption, as well as the hyperlink — this currently goes to the CoreDNS GitHub repo. The next step was to update the release column, where we have now completed how to update the stable and head releases. Next, or in progress, is the build column; that's what's in progress, and we'll get to that in just a second. We also supported different CNCF CI environments for each project configuration. This means we have different environments on CNCF CI — we've got dev, we've got staging, we've got production — and the per-project configurations also have a dev, staging, and production. The next item that we completed in v2.5.0 was the subheaders to display projects by CNCF maturity level, which you can see at a glance here: graduated, incubating, and Linux Foundation. And once Linkerd is graduated, we can update that and it'll be shown appropriately under the graduated status. We'll be adding more incubating projects as well. We also resolved an issue with our GitLab license disabling the Git mirroring. The license had expired, so instead of refreshing every day at 3 a.m. Eastern time, the information on cncf.ci was growing staler by the day. We had help from the GitLab folks, and we got an extension of our trial, which resolved the issue with the disabled Git mirroring. Everything's up to date now, refreshing every day at 3 a.m. The longer-term option would be to review the GitLab open source license; that's in progress between the CNCF folks and GitLab, to see if that would be a good solution, and we're currently working on switching to the open source license from GitLab. All right. So we have quite a few epics in progress. We are refactoring the CI on CNCF CI. We currently use a Kubernetes provisioning tool that was customized in-house, called the cross-cloud provisioner, and we're switching from maintaining that cross-cloud Kubernetes provisioner to using Kubespray and Kind, with kubeadm. We've broken the refactor epic into several steps. The goal is to add support for using kubeadm to bootstrap Kubernetes clusters onto Packet. We needed to update how we provision hardware — how we provision those Packet resources. We also need to add containerd support to Kubespray, as containerd is a graduated CNCF-hosted project and will be used in place of the Docker runtime on the CNCF CI dashboard. And we're adding ARM support to Kubespray, so that Kubespray can support the multiple test environments, both x86 and ARM. Next, we will be creating a CNCF CI Kubernetes provisioning wrapper. Once we have the wrapper, we'll be able to create a Kubespray plugin and a Kind plugin, and then we'll be able to update the Kubernetes release version to the current version. Once we have all that in place, we'll deprecate and no longer maintain the cross-cloud Kubernetes provisioner. So, to talk a little about the work-in-progress epics — actually, to talk a little more about the CI refactor first: would anyone like to give more details, maybe Taylor or Denver?
Sure, this is Taylor. And Denver, if your audio is working, you can maybe give an update as well. Thanks for the overview, Lucina. The Kubespray piece — being able to create clusters on Packet with Kubespray — that portion is done. We'd call this the minimal setup. The goal for us is to support things at the Kubernetes infrastructure level — being able to plug in different projects, potentially CRI-O as a container runtime as well as containerd, and use other configurations — and that's a good fit for something like Kubespray. And Kind is very helpful if we want fast iterations and full separation of the clusters for, I'll say, application-level projects that run on top of Kubernetes, which would be the majority of the incubating, graduated, and what will be sandbox projects. So we're designing this so that we can support both, and that's the idea with the plugins: it's essentially a very small wrapper that can decide on using Kubespray or Kind. With Kubespray, we would normally provision multiple physical machines in the hardware stage, which happens before this — we could do a single machine, but the idea is multiple. And with Kind, we could potentially run everything on one machine, which would allow things like debugging a project: if you wanted to test something, we could easily spin up a cluster and not worry about a large number of Packet machines every time. So that's the idea here, and we're breaking this into pieces. And the one thing I was just saying: the hardware provisioning stage is complete. We did separate that out from the way cross-cloud worked. With cross-cloud, you would run it and it would actually allocate the machines with whatever cloud provider and then immediately go into provisioning Kubernetes — it was custom, dropping all of the manifests and everything for Kubernetes. With the new setup, we have it completely segmented off, so those physical machines can be allocated and reused separately. If we want to reset them — which is a built-in feature with Kubespray, where we can reset the machines to before Kubernetes was set up — we can do that, so we can get some speedup in the CI process. We can also fully wipe those machines and hand them back to Packet's pool. So it's a little bit more configurable there. And as much as possible, we're sticking with trying not to have anything custom on the Kubespray and Kind side, but seeing where we can provide upstream PRs. And that's where what we're doing with ARM support is actually pretty generic. Denver, if you're there, maybe you could speak to that — if you're available; I can give a quick overview, but if you'd like to go into what we're doing with Kubespray a little... I'm not sure if you're there. Okay, it looks like Denver sent me a message that his audio may not be coming through, because I'm not hearing it. So the main thing with Kubespray — and again, we're trying to create PRs upstream for everything and provide tests for whatever changes — is that we've added support for external image repositories for the containers that Kubespray uses; it already supported choosing different tags and images. So the main thing was repositories. We did this primarily to get ARM support, but what it allows us to do is select different types.
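As a minimal sketch of what those Kubespray overrides can look like — the variable names below are real Kubespray group_vars settings, but the registry and version values are hypothetical stand-ins, not the actual CNCF CI configuration:

```yaml
# Sketch: Kubespray group_vars overrides (e.g. inventory/.../k8s-cluster.yml).
# Values are illustrative only.
kube_version: v1.15.1                            # a release newer than Kubespray's default,
                                                 # assuming kubeadm supports it
kube_image_repo: "registry.example.com/cncfci"   # custom/external image repository
container_manager: containerd                    # containerd in place of the Docker runtime,
                                                 # the support being discussed here
```

And on the Kind side, the fast single-machine clusters Taylor mentions are driven by a small config along these lines (API version as of the Kind releases around this time; again a sketch, not the project's actual setup):

```yaml
# Sketch: a Kind cluster config for a fast, fully isolated cluster on one machine.
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane
  - role: worker
```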
So if maybe Kubespray doesn't support the Kubernetes 1.15 release or something — if it's a little bit ahead — then you can actually select it by specifying specific tags that may not be built in. And kubeadm actually supports those, so that's fine. Kubespray basically tracks whatever is the current most-tested stable, and it's a little bit behind. So that's the primary support. It also gets us ARM support, by using the ARM binaries that are available upstream. And if we continue to need to support any specific images for specific builds that may not be available upstream, it allows us to do that as well. So if we, say, wanted to build a version of Kubernetes with a specific plugin or something that may not be available upstream, we could still use the same framework and be able to specify, say, the CNCF CI registry — or if there's some other one, like maybe the Kubernetes Testing SIG has something released, we can do that. So it should be pretty useful, I think, for anybody that uses Kubespray to be able to use custom registries and custom images along with the defaults that are built in. Kind will be a similar type of thing, where we're extending the support that it has. And we're also talking with them about which specific container runtimes and other things they're planning to support, so that we can focus on either helping them directly on those and getting them in place sooner, or adding support for the parts that may be further out in their queue. Does anyone have any questions about this specific part — this is the back end, essentially, of CNCF CI? To go back to it: there's your CI platform — this would be similar to the pipelines and other things that you could see in Travis or anywhere else — and then there are the different stages for creating the clusters, which is what we were just talking about. And on top of this is a status repository, and then the actual dashboard that displays and shows all of that. So this is for that back end piece: both restructuring how pipelines are used — how we're sitting on GitLab, to potentially move somewhere else — and then how we actually create the clusters and the test environment itself. So for this portion, does anyone have any questions or comments? Okay, we'll probably move on to the next portion. Okay, thanks, Taylor, for that information. It's a big epic, but we are making our way through and making good progress. The next epic that we're also making progress on is adding support for external contributions: changing how information is fed into the CI system, the status repo, and the dashboard, to allow for external contributions and for adding and maintaining projects faster. As we saw on the dashboard, we've got our project details column, which has been refactored, and our release details column, which has also been refactored. In progress is changing how the build statuses are retrieved. So here's our ticket 104: we're adding integration with the external CI systems that are used by each CNCF-hosted project. Not every CNCF-hosted project uses the same CI system, so we've decided to start with Travis CI, and we looked at which projects are hosted by CNCF and on Travis CI. On the dashboard, we currently have Linkerd, which points to the original Linkerd project — and Linkerd has released Linkerd2. So, to move forward with two goals at once, we're adding the Linkerd2 project to CNCF CI and building the integration with the Linkerd2 Travis CI system.
So we can take a look at what we've done to start adding Linkerd2, and then I'll hand it over to Taylor to take a look at pull request number five on the linkerd2-configuration repo. We've started adding the project details for Linkerd2 and confirmed with the Linkerd2 team that they would like it to read "Linkerd 2.x". We've created a new configuration file for Linkerd2 — there also exists a linkerd-configuration repo, so we've built a new one. It has the project details: the logo URL, which is fetched from the CNCF artwork repo; the display name, Linkerd 2.x; the subtitle, which remains the same, service mesh; and the project URL, which points to the linkerd2 repo in the Linkerd org. The stable ref as of yesterday was called stable-2.4.0, and the head ref was called master. Their CI system type is Travis, and their Travis CI URL shows as travis-ci.org/linkerd/linkerd2. So that information is all helping us with the project and release columns on the dashboard. Next we can take a look at pull request number five, opened yesterday by Taylor. Taylor, do you want to walk us through some of the files changed here and how we'll reuse this information with other CNCF-hosted projects going forward? Sure. I'll start a little bit with the plan here for this build column. All of this relates to the build column on the CNCF CI dashboard. For all projects other than ONAP, the build column is pulling the status information for the compiles, running the unit tests, and whatever the artifact creation is — all within the CNCF CI build system, which is a GitLab pipeline. That's what all of this is, and you can click through, like Lucina was doing, and then you're on the project's pipeline itself. So this would be similar to any other CI system, whether you're running your own GitLab pipelines, Travis, CircleCI, whatever you might be using. For CNCF CI, we were replicating what was done by the project. So CoreDNS has their own CI that they're running, doing their own tests, and we would look at that and then have to essentially copy everything over and get that going. There were some specific reasons for that — to get build artifacts that we needed that weren't available at the time when this was started, as well as specific builds for different architectures. But the end goal, where we're going now, is to integrate with those existing systems. So that's what's in progress here, and this is an example of the configuration a project could add for integrating. This cncfci.yaml is similar to how a Travis CI or CircleCI config could potentially live in a project's repo; right now it lives in what we're doing as a per-project configuration repository, but it could be copied over. What we have here is a section where we can define the different CI systems that a project uses. The first two items are what we already went over: what is the CI system type, and what is the actual URL to the project. The additional pieces that we've started to add are: what is the architecture that's actually used on this? Linkerd2 right now, from what we've seen — the pipeline that we're referencing builds amd64. So if they add ARM to this, we could add another architecture to that list. If they don't add ARM, possibly we would be adding it somewhere else in the meantime — or, I know that Arm themselves are working with projects, and they've been using a different CI system.
So this would allow us to actually add multiple CI system integrations, if someone had that for whatever reason. And then the next piece is: what are the images that we're looking for? These are the containers created after a successful build, and where they're published. This is important because the next stage that's seen on the dashboard is the app deploy — so where do we get those artifacts? From our standpoint on the CI side, we don't consider a build to be successful unless the artifact was created successfully and published, and we've verified that you can access it. If someone's done a full build but there are no binaries or nothing that you can actually use, then we're only trusting the CI system's status, not that there's anything usable. So that's why we're working on this as part of the build phase for Linkerd2. Right now we have two examples — this isn't a full list — the web container and the controller container for Linkerd2. And for their head tags, we're supporting the commit SHA hash, specifically the eight-character version, which seems to be the common one spit out by the Git command-line tools, which a lot of projects seem to be using when they're creating the tags. So this is a match for that. And we're allowing this to be essentially a variable: if someone wanted to tag with just the commit, that's fine; they could have different strings on here. That's what we're supporting right now — if folks want other data available, we'd love to hear back. Right now the idea is that, ideally, we would want every project to create, at a minimum, an additional tag — since you can put multiple tags on images that are pushed — that has the Git commit, so that we can tie it to a specific pipeline and build. And this goes all the way back to the dashboard, so we can say: here's a commit, we verified the build status, and we verified the actual container that was pushed, and we can see it all the way through. That gets us to the end goal of the end-to-end tests running on specific commits — builds of the code with a specific commit. We're not showing the stable tag here because, at the moment, we're tying the stable tag for the container image directly to the stable tag that's on the Git repo, which you see at the top as the stable ref. That is what Linkerd2 is using both for tagging in Git and for tagging Docker images. That seems to be the case for the CNCF projects we've been looking at that publish containers: if they have a release and they've tagged it in Git, they've created a tag on a Docker image that they've also published. So this is in progress. This is the configuration from an end-user side — what Linkerd2 would do if they're going to maintain it, modify it, those sorts of things. And then behind the scenes, which we're working towards, is the integration directly with Travis: checking the status based on those URLs using the Travis API, and then going to whatever the container registry is — for the image registry, in this case, that's Google's Container Registry — being able to pull those down, validate the containers are available, and then update the status badge. And the good thing with this is that if any project is using Travis, then that integration for the Travis portion of the API would be reusable, and then we look at which pieces are different.
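Pulling that walkthrough together, the configuration being described might look roughly like the sketch below. The field names, tag-template syntax, and image paths are assumptions reconstructed from the discussion, not copied from the actual pull request:

```yaml
# Hypothetical sketch of the linkerd2-configuration cncfci.yaml under discussion.
display_name: "Linkerd 2.x"
sub_title: service mesh
project_url: https://github.com/linkerd/linkerd2
stable_ref: stable-2.4.0
head_ref: master
ci_systems:
  - type: travis
    url: https://travis-ci.org/linkerd/linkerd2
    architectures: [amd64]    # add arm64 here if the project starts building ARM
    images:                   # containers published on a successful build
      - gcr.io/linkerd-io/web:git-{commit_sha_short}        # eight-character short SHA
      - gcr.io/linkerd-io/controller:git-{commit_sha_short}
  # A second entry could integrate a separate ARM-focused CI system,
  # as comes up in the Arm discussion below:
  # - type: <other-ci>
  #   architectures: [arm64]
```

The stable image tag isn't listed separately in this sketch because, as described above, it's derived from the same stable ref that tags the Git release.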
And then, of course, if there's a project that's on another one — say CircleCI, and as you're saying there are now some on GitLab — we'll be adding those. So we're treating every CI system the same: we'll do an integration as we come to each project. Are there any questions about this direction that we're going, or comments — anyone want to discuss this, either Linkerd2-specific or just external CI integrations? I have a question. All right. This is Ed from Packet. Ed, do you have a list of target CI systems beyond the Travis and Circle that you mentioned, or is that primarily driven by the CI systems that the projects actually already use? It's primarily driven by the projects. We're working through supporting the CNCF graduated and incubating projects — we have most of the graduated, so we're primarily working on the incubating at this point. For the graduated projects, we'll kind of be going back to the ones that are currently running on the dashboard and moving them from internal CI to external integrations. And the integrations themselves will be driven by those projects. That said, the code and how we're doing this is, of course, all open source, and we're trying to use as much upstream as possible, like Kubespray and the other pieces. The integrations themselves, I would hope, will be useful outside of just CNCF CI — we're trying to make it pluggable — and if there's any desire to add something else, we're happy to have PRs and help on that. Terrific. Thanks. I think Matt Spencer from Arm is talking about Rover for use on a lot of the ARM builds, and directly working with projects on adding essentially an additional pipeline that does ARM builds, since often the projects are only doing full testing and container publishing for x86. So that might be something where Rover could be a good external contribution, maybe from Arm, to add support for that. And that ties directly in with the multiple-CI-systems-per-project support that we're trying to account for: we would add an additional section, and it would have the architecture ARM, and then we might say here's a public — you know, Rover — I'm sorry, CI project URL, and wherever the image repository is. That sounds good. Any other questions or comments? All right, I'll wrap up this section so that the next presenter can have time for their intro and roadmap. So, what's next on CNCF CI? We'll be continuing on the epics we've been working on. For the CI refactoring, we'll be creating that Kubernetes provisioning wrapper next. For adding support for external contributions, we'll be wrapping up the build integration with Linkerd2 on Travis CI and then starting on the app deploy phase with Linkerd2 — continuing to add Linkerd2 through the app deploy phase, using their published container artifacts, and updating the documentation accordingly. We're also doing some maintenance: software updates on the various repos of the system. For more information about the CNCF CI project, you're welcome to take a look at the intro to CNCF CI slides from KubeCon Europe 2019 in Barcelona, as well as the deep dive slides that are available. We welcome your feedback — to give it, you're welcome to add an issue to the crosscloudci dashboard GitHub repo. Please join the CNCF Slack channel, #cncf-ci. You can send us an email if you'd like. Please join the public mailing list, and we meet on the fourth Tuesday at 12 noon Pacific time. We're also on Twitter at CNCF CI. Next on the agenda is the Click2Cloud introduction and upcoming roadmap.
I will stop my screen share, and if you'd like to share your screen, feel free to do so. Hello, everyone. It's Shubham from Click2Cloud. Let me introduce my company. We are a cloud migration company. Hello. Yes. Hello. Thank you. Hello, Lucina. So let me introduce my company. We are a cloud migration company, and we operate in China, India, and the US, basically. We have our own products for cloud migration — Clouds Brain, cloud billing software, and Cloud Compare. And we are core contributors in CNCF: we've done testing for ARM with its use cases, by configuring the ARM Alertmanager, adding exporters, and building GitLab for the CI process. We are looking to contribute more to CNCF. So, Lucina, can you please help us with how we can contribute more to this? Yes, I can certainly share some references. Are you on the CNCF Slack channel, by chance? Yes, we are on the CNCF Slack channel. Wonderful. And have you joined any of the special interest groups or other working groups, by chance? Yes, we have joined a couple of groups. Wonderful. What is it that you would like to help contribute? Sorry, I didn't get your point. Hello. Yes. So, the CNCF CI project is currently running on bare metal at Packet; we used to support multiple clouds. I'm trying to think of some cloud provider SIGs for your cloud migration. In what other ways would you like to contribute to CNCF in general? Hi, Lucina. This is Rupal from Click2Cloud. Can you guys hear me? Yes. Hi, Rupal. Nice to meet you. Nice to meet you. So let me give the basic background of where we came from and how we started contributing to CNCF. We are core contributors, apart from our product, which is Clouds Brain, as Shubham mentioned. We provide migration to Alibaba, Huawei, Azure, and AWS. Apart from that, we are core contributors in OpenLabs, HashiCorp, and some of the CNCF projects. So Anilai, who is a director at CNCF, asked us to provide support, basically on ARM — testing all the projects, whatever the incubating or graduated projects, on ARM. So that is the first agenda: we wanted to test all the projects. Second, we have customers who want to open source their projects under the CNCF umbrella, so we also want the full picture of how we can contribute by becoming an advisor under the CNCF umbrella. So there are two different asks: one is definitely, from an IP point of view, how we can help in the migration space; and second, a few of our customers want to contribute and open source their products under CNCF, so how can we become an advisor for them? Those are the two asks from Click2Cloud. Thank you for the background. Let's see — I'm trying to think of some references that the CNCF has given projects. If one of your customers wants their project to become a CNCF project and be open source, there's a process to create a pull request and present to the TOC, to possibly be included as a sandbox project or to become an incubating project. I can share that link; it's on the CNCF TOC repo. So, are there any specific community meetings or board meetings where we can present that idea and our project details? That project has already been open sourced, but it hasn't yet been contributed at the sandbox or incubating level. We wanted to get a couple of projects under this umbrella — maybe we'll have more projects in the pipeline in the future — but guidelines would be really appreciated. Absolutely, I'll share the guidelines with you. I know the TOC has a backlog of potential projects.
And so you'll be able to take a look at the instructions and go through the process that way, through the TOC. The TOC — the Technical Oversight Committee of the Cloud Native Computing Foundation — meets three times a month. I'll share their GitHub repo in the Zoom chat for you, and that will have more information on dates. The TOC community calls are also listed on the CNCF.io community calendar, and the next TOC meeting on the calendar will be next month, Tuesday, August 6th. Got it. Thank you. I think by then we should be able to come up with an exact proposal and roadmap of what the expectations are from Click2Cloud and what we want to get from CNCF. That sounds great. Yeah, I'm clicking through the TOC — oh, I'm not sharing my screen. I can share my screen and show you where I'm going. I am at github.com/cncf/toc, and I clicked on proposals. However, that may not be the best place to start; the best place to start is usually CNCF.io, the main CNCF landing page. There's a dropdown for projects — oh, and it wasn't there, so maybe it is in the TOC process. Either way, thank you so much — Taylor dropped in Services for Projects. Does that say how to become a project? Very good. So I'll drop in the TOC process link as well, which should help with the guidelines and the process on how to create your proposal and how to add yourself to an upcoming TOC meeting, or request to present to the TOC. Thank you so much. I think it's really helpful for us. Oh, wonderful. I would also like to know whether you have plans for migration or multicloud-management-related features in your pipeline — just let us know, so we can discuss that further as well, because in OpenSDS and some of the other communities, we are also contributing our efforts towards multicloud management and migration-related IP. So, one thing to look at: the working groups are supporting many different projects, and CNCF CI is one initiative that's part of this CI working group. CNCF CI is providing a dashboard for CNCF that highlights the different projects and helps with promoting them; it's not, say, a CI system that's taking over all of the projects. So if you were looking at something like helping a CNCF project to support multicloud, or something beyond their current features, that would probably be something to take directly to each project, or to the TOC or one of the other groups. This could also be something where, if you reached out to CNCF staff directly — maybe through the help desk link, which is also on that Services for Projects page on CNCF.io — that could help with getting some direction, if you're talking about multicloud support. The CNCF CI project itself isn't actually looking for that; we actually removed that feature — it was supporting multiple clouds at some point — but that's not the focus of the dashboard itself. As for the specific CNCF projects themselves, I would suggest actually looking at each project and then seeing where it would be a good fit, because CNCF is hosting officially supported projects, from graduated to incubating, and then sandbox for potential ones.
It covers a lot of areas, so multicloud may or may not fit, but if you go and look at those, you can probably see where it might be a good fit. The other place I'd recommend looking is the landscape itself — landscape.cncf.io, or l.cncf.io. That'll give you an idea of where various projects, groups, and companies are providing services, and where what multicloud is doing might fit. I know that CNCF wants to make any type of vendor or project findable and viewable in the landscape, so if you could actually look and think about what you are providing and what you are trying to do, that would guide you in being able to communicate how you could help the various CNCF projects. It's very large, but the main thing here, on the left, is what Lucina is showing: you can select categories and other filters — the type of work you're going to be providing. So if you can do some of that and look ahead, then once you're talking to the appropriate people at CNCF, you'll be able to communicate: here are the types of projects that we can help with. Got it. I think we've got enough information from you. By the next meeting, we'll be able to share the plan after reviewing all those details. Thank you so much for providing this. You're welcome — and I think we're at the hour. So, Lucina, if you want to close it out, and then the next meeting? Yeah. That's great. And thank you in advance for contributing to the CNCF. Our next meeting will be on August 27th at 12 noon Pacific time. Some action items: if you haven't already joined Slack, please visit slack.cncf.io and enter your email address for your invite, then join the #cncf-ci channel, among others. And feel free to subscribe to the CNCF CI public mailing list, where we'll send out a reminder of when the next meeting will be and when the next release of CNCF CI has been published. Thanks, everyone. See you all next time. Thank you. Enjoy your week. Bye bye. Thank you.