Okay, so I think we have a few people on the line, so let's look at the agenda. The first item on the agenda is the Harbor graduation review progress. So Michael, do you want to lead this? Yeah, absolutely. So I had a question first. Last time we met, we talked about... I think you're muted. I can hear him. It says I'm not. Can you guys not hear me? I can hear you. Okay, cool. Sorry about that. So last time we met, we talked about the possibility of a few folks looking into Harbor and reading some of our due diligence document; a couple of possible reviewers were either Quinton or Diane. Did anybody get a chance to look at our extensive documentation on that? I briefly took a look at it, so I need to go and review it in more detail. You know, the SIG, I mean, we talked about the SIG being approved, and the SIG was approved and we're actually getting more critical mass. We had a TOC meeting Tuesday and Diane Feddema was approved; there are two Dianes, Diane Mueller and Diane Feddema. Diane Feddema is the new co-chair. She's not here, but I'll sync up with her and see if we can get more eyes on your document. And we also have a tech lead, Klaus is one of our tech leads, and we're looking at adding a couple more. Once we have that critical mass, we'll be able to review the process. There's also a checklist for SIG Runtime, so I think that's also very helpful, and then we can make our comments there. It's bullet points and we can say whether we think it's okay or not. But overall, I think it looks good from our side. Keep in mind that we usually make a recommendation based on the due diligence, and then later on it goes to the TOC for a vote. Yes, absolutely. I think that's the process the TOC wanted to follow as well: see what your recommendation is and then, from there, see how it goes. Yeah, so that's the status. I saw that you pinged Quinton on Slack and didn't get a reply. I don't know where Quinton is exactly; he's done a lot of work on the TOC and standing up SIG Runtime. I think he might have taken a bit of a step back last month; he got a new job, he started working at Facebook, so he might have been off the grid for a little while. Okay. By the way, we forgot, are you recording this meeting? I don't see a recording. Yes. Okay. All right. So let me get started then and see where we go from there. I'm going to walk you all through the presentation that I did also for the CNCF. Okay, cool, great. The content doesn't really change between the TOC and here, some of the numbers will change, but overall I wanted to show you where Harbor is today. Obviously, we want this to be interactive, so if you guys have any questions, feel free to stop and ask. So, the TOC sponsor was Joe Beda when we started this; obviously, Joe is no longer a member of the TOC, so I don't know if that will change. He was generally responsible for the technical due diligence, and he said in his latest comment that it's solid, so I'm assuming that's another way of saying he's good with it. Okay. Sorry to interrupt, but yeah, since he's no longer on the TOC, I would actually ask Amy, who will maybe be the next person to become the TOC sponsor. Yeah.
So, yeah, I talked to Amy on Monday; we don't need to worry about that yet. It's in the review process, so we don't need another sponsor right now. What Amy told me is, once you guys review it, we'll figure out if it's approved or not. The sponsorship is needed so you can bring it to the table so the TOC can do the due diligence. Later on, it's just about the vote; it doesn't matter if you have one sponsor or three, if nobody votes for it, you won't get in. Got it. Yeah. So let's take a look at Harbor in a nutshell. Harbor is an open source container image registry. We aspire to be the repository for all of your cloud native artifacts; that includes images, Helm charts, and all OCI compliant artifacts. We secure all these artifacts with role-based access control, we enable scanning images for vulnerabilities, and we can sign images as trusted. Our mission is to be the most secure, performant, scalable and available cloud native repository for Kubernetes. That is why we're doing all that work around OCI compliance, and Harbor 2.0, which will ship around the time of KubeCon EU this year, will have full support for OCI. Our pillars are security and compliance, performance, interoperability, and providing consistent image management for Kubernetes. A lot of folks ask, why would you even run your own registry, and what are some of the advantages that Harbor brings to this ecosystem and this space? First one, security and compliance. Harbor provides a comprehensive policy model that IT operators and administrators can define and enforce. We allow you to have registry and data ownership because it's a private, packaged solution. And then we provide identity federation with built-in multi-tenancy. From a feature standpoint, that gives you vulnerability scanning, CVE exceptions, image signing, quotas, retention policies, full support for federated identity with OIDC and LDAP integration, along with role-based access control and CLI secrets that make use of that identity federation. And then we enable multi-tenancy through project isolation. We allow you to deploy on any infrastructure using the assets that you own today, or you can buy new assets or deploy in the public cloud. So private, public, hosted, and the edge are all scenarios that Harbor supports. We enable you to enforce data locality because you can deploy Harbor on any hardware you own. And we're both Kubernetes and Docker compliant, with full support for scalability and control. You get access control over the artifacts that you own, and you can replicate resources based on your business needs. In fact, we enable you to replicate Harbor artifacts to Harbor, Docker registry, Docker Hub, Huawei Cloud, AWS, Azure, GCP, Alibaba Cloud, and many others. So you can centrally store all your assets in Harbor and then create a hub-and-spoke model where those resources are replicated to other control points near where your applications will run. Why is that important? Harbor can enforce your comprehensive security policy, so you can scan your images, sign them, make sure everything is good, and then when you replicate them to the outside clouds or to the edge, they can just be consumed without having to be rescanned.
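To make the replication and automation story above concrete, here is a minimal Go sketch that triggers a replication execution for an existing policy over Harbor's REST API. The endpoint path, payload, and the robot-account credentials are illustrative assumptions based on a reading of the Harbor v2 API, not something stated in the talk, so verify them against your Harbor version:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Minimal sketch: kick off a Harbor replication execution for a pre-created
// replication policy via the REST API. Endpoint and payload are assumptions
// based on the Harbor v2 API; check them against your Harbor version.
func main() {
	harborURL := "https://harbor.example.com"          // hypothetical Harbor instance
	body := bytes.NewBufferString(`{"policy_id": 1}`)  // ID of an existing replication policy

	req, err := http.NewRequest(http.MethodPost, harborURL+"/api/v2.0/replication/executions", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// A robot account is a natural fit for this kind of automation (credentials are placeholders).
	req.SetBasicAuth("robot$replicator", "secret-token")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("replication trigger status:", resp.Status)
}
```

A CI pipeline could run something like this right after a successful push, so the centrally scanned artifact lands at the spoke registries automatically.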
That scenario, by the way, is really big for the edge, where you can scan all your images in your data center and then the edge just gets the image and uses it without having to enforce policy again. And the last pillar is automation and extensibility, being a good cloud native citizen and being in the CNCF. It was very important for us to be plug-and-play with existing investments in infrastructure and services; that includes Kubernetes, and it includes a lot of the other things we're doing in Harbor. I'll give you some ideas here. We have syslog integration, so you can integrate with many logging and analytics tools. We have webhooks, so you can automate and extend Harbor with your CI/CD pipeline. We have a full REST API, which our UI is actually built on, and we have robot accounts that let you create accounts that interact with Harbor in an automated way. This is a high-level picture of our architecture. And by the way, I encourage comments; I can't see the video, but if anybody talks, I'll stop. This is the Harbor architecture. The most important thing I want to point out here is that it's very modular. We can deploy Harbor on Docker. We can deploy Harbor on Kubernetes using pods. And you can have a scaled-out Harbor architecture with geo-redundancy and high availability of many of these Harbor components, both at the Kubernetes layer and at the data access layer, which includes Redis and our PostgreSQL database. So let's start from the left. We have identity providers: AD, LDAP, and OIDC auth. In the middle are the core services of Harbor. They're highly modular and isolated, and each of them has its own tech lead responsible for that area, allowing us to grow both the ecosystem and the community in those areas. We have full integration with ChartMuseum, Docker Registry, and Notary to enable some of our capabilities. With the next release of Harbor, we're adding support for OCI compliance as well, and obviously we'll update this diagram. Our data access layer includes our key-value storage and local or remote storage. This is where we store the actual artifacts; if you push a one-gigabyte image into Harbor, it gets stored in that storage. And then we have our SQL database for the configuration management. At the top, yeah, go ahead. So for the data layer, is that key-value storage just for caching? Yeah, that's caching. Okay. So all the artifacts are either on the file system or in object storage, like S3 or Google Cloud Storage or something? That is correct. All the images you push go to local or remote storage; you can go to S3, any S3-compatible storage, or to persistent volumes on Kubernetes. And our Redis database, which is our key-value store, can be reset, for example, and repopulated. The SQL database contains all of the configuration you have in Harbor, from the policies to the users to the replication endpoints to authorization; everything that you configure in Harbor goes in there. At the top, we have our web portal. You can also interact using Helm and the Docker and Notary clients, because we have chart support built into Harbor. And then everything is backed by a full API fronted with NGINX. Now, this is where, and we'll talk about it a little more in a bit, we have our scan providers.
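As a small illustration of the webhook integration mentioned above, here is a sketch of a receiver that a CI/CD system might expose. The payload fields (`type`, `occur_at`, `operator`, `event_data`) are an assumption about Harbor's webhook format; treat them as illustrative rather than authoritative:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// harborEvent models a subset of the webhook payload Harbor is assumed to send
// to registered endpoints. Field names should be checked against your Harbor version.
type harborEvent struct {
	Type      string `json:"type"`     // e.g. PUSH_ARTIFACT, SCANNING_COMPLETED
	OccurAt   int64  `json:"occur_at"` // unix timestamp
	Operator  string `json:"operator"`
	EventData struct {
		Resources []struct {
			ResourceURL string `json:"resource_url"`
			Tag         string `json:"tag"`
		} `json:"resources"`
	} `json:"event_data"`
}

func main() {
	http.HandleFunc("/harbor-hook", func(w http.ResponseWriter, r *http.Request) {
		var ev harborEvent
		if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		// Hand off to the CI/CD pipeline here, e.g. trigger a redeploy on a push event.
		log.Printf("harbor event %s by %s (%d resources)", ev.Type, ev.Operator, len(ev.EventData.Resources))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```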
Initially, and for the longest time, Harbor only supported Clair as a static analysis tool. We found that was a great model and we wanted to expand it. So we went down the path of creating a pluggable architecture that lets you bring your own scanner into Harbor, and lets us leverage the community, other CNCF projects, and partner companies that are doing a lot of work in the security space. We started down this path with Aqua and Anchore as the two companies that helped us define the interfaces, define the work, and start building the first scanners. And now we have multiple scanners: Aqua has Trivy and they're also working on CSP, Anchore has Engine and Enterprise, and then DoSec has also created a scanner, and we're talking to a lot of the other security companies that are CNCF members about bringing their own scanners. You can think of Sysdig, Twistlock, StackRox; all of those folks are looking into it. And then the last part is the replication registry providers that I mentioned on the last slide. We allow you to replicate artifacts and assets from Harbor to Docker distribution, Docker Hub, Huawei, all the public clouds in the US, and many other targets. That's important for being able to plug Harbor in and interoperate with wherever your data is and wherever your apps are running. Let's take a look at the high-level project overview. We started Harbor in June 2014 at VMware, so this is a project that's been going on for almost six years now. We donated it to the CNCF in July 2018 and then started incubating at the CNCF a few months later, in November 2018. We have 20-plus product implementations; these are products that include Harbor either directly or indirectly, and some of them also offer enterprise support for Harbor. 115 contributing organizations and over 300 community members from a variety of companies around the world. These are kind of our money slides for Harbor; let's walk through them really quickly. We actually passed 10,000 stars; that was a month ago, and we are well over 10,000 stars today. Over 170 contributors with almost 3,000 forks. 14 maintainers, five of them non-VMware, the other nine from VMware, so you see a good distribution between VMware and non-VMware maintainers. We have had four releases since the CNCF donation, and when I say four releases, I mean four major/minor releases, just like Kubernetes has four releases a year; each of these releases has multiple patch releases. We have 30,000 downloads that can be accounted for, and the reason I mention it this way is that you don't want to lie to folks. If I go to Docker Hub and look at our container images, there are a million-plus downloads on each one of them, but a lot of those are CI/CD pipelines, a lot of those are different things. So there is absolutely no way for us to know for sure how many end users have downloaded Harbor, but we have 100% certainty that there have been 30,000 downloads, based on some metrics we had in our Google Analytics. Did you have some metrics on actual usage? Like some... Yeah, I'll tell you in a second, absolutely. So we have 700-plus Slack members, 12,000 tracked Slack messages that have not expired, over 1,200 Twitter followers, 8,000-plus commits, six blogs, three webinars, almost 5,000 PRs, 95 companies contributing to just core Harbor and not the satellite projects, 71,000 GitHub views, and 13,000 unique GitHub visitors.
And as you can see on the graphs here, the overall activity on Harbor has stayed steady over the years and increases whenever we have releases. This project is well-funded, it is well-maintained, and it has a very active community that is still contributing heavily to Harbor. And if you look at the chart of contributing companies and developers, you can see a steady and gradual increase over the years. Sorry, one question about the maintainers. How many maintainers from different companies? Five. Five. Any more details about those five companies? Like how many maintainers from company A, from company B? Yeah, so Aqua has, I mean, I'll show you as well. So if we go to, oh, what happened here, sorry about that, GitHub decided to expire my password. Oh. All right. Okay. It's going to be quick. You can send this information later, that's fine, it's in the due diligence document. Okay, so the due diligence document has it, but I'll show you the maintainers; I just want to show it to you so you can see the list. Okay. We have Daniel from Aqua Security. We have Nathan Lowe from Hyland. We have Dichen, who's independent; he works for a company, but we couldn't list his company name for confidentiality reasons. We have Fanjiang Kong from Qihoo. We have Mingming from NetEase. So this is our maintainers list, and the due diligence document talks about it as well. It's nine folks from VMware and five non-VMware. Okay, that's great. Yes, I need to update my GitHub password very soon. So, moving on down the path around extensibility. I mentioned this earlier: we underwent that effort with Aqua and Anchore to design this extensible plugin interface in Harbor. The first goal for this plugin interface is to enable just scanning of artifacts, static analysis, just like what Clair did. So you have some images in Harbor, you create an asynchronous job, you talk to a pluggable scanner like Clair or Anchore or Aqua, they do the scanning and return some results, which are in turn displayed in Harbor, and then policy can be enforced on top of it. Policy like: don't allow any developer to pull an image that has a high-severity vulnerability in it. Those are very simple policies. Now, in the future, we want to use this extensible interface to do more things, for example license enforcement, license checks, inventory of assets. So you can see where we're going with this: we're going to add a lot more things into Harbor, but we're starting with just scanning for static analysis. The interface is fairly simple. We actually had one company called DoSec that did not talk to us at all; they went through the documentation on the interface, looked at what Anchore and Aqua did, implemented their own scanner, and one day they just showed up at our community meeting and said, hey, we have a scanner too. And it was awesome. But these are the scanners that we have today. When we look at the roadmap for Harbor, we're very aggressive; we want to do a lot of things in Harbor. We have a well-defined roadmap that takes us over a year ahead. It's also documented in our project board, which covers the current release and our backlog, so the roadmap is defined there. From a management aspect, we're starting to build a Kubernetes operator, and actually one of our contributing companies, OVH, has created the first release of the operator.
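To show roughly what the pluggable scanner interface described above asks of a scanner vendor, here is a bare-bones adapter skeleton. The endpoint paths (`/api/v1/metadata`, `/api/v1/scan`, `/api/v1/scan/{id}/report`) and response shapes follow the adapter spec as I understand it and are heavily simplified; the upstream spec should be consulted before building a real adapter:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// Bare-bones sketch of a Harbor pluggable scanner adapter: Harbor asks the adapter
// for its metadata, submits asynchronous scan requests, and later polls for the
// report. Paths and payloads are illustrative assumptions, not the full spec.
func main() {
	mux := http.NewServeMux()

	// Describes the scanner and what it can consume/produce.
	mux.HandleFunc("/api/v1/metadata", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]interface{}{
			"scanner": map[string]string{"name": "example-scanner", "version": "0.1.0"},
		})
	})

	// Accepts a scan request (artifact plus registry credentials) and returns a scan ID.
	mux.HandleFunc("/api/v1/scan", func(w http.ResponseWriter, r *http.Request) {
		// A real adapter would enqueue an async job that pulls and analyzes the artifact.
		w.WriteHeader(http.StatusAccepted)
		json.NewEncoder(w).Encode(map[string]string{"id": "scan-123"})
	})

	// Harbor polls this endpoint for the vulnerability report of a given scan ID.
	mux.HandleFunc("/api/v1/scan/", func(w http.ResponseWriter, r *http.Request) {
		if !strings.HasSuffix(r.URL.Path, "/report") {
			http.NotFound(w, r)
			return
		}
		json.NewEncoder(w).Encode(map[string]interface{}{
			"severity":        "High",
			"vulnerabilities": []string{}, // a real report lists CVEs per package
		})
	})

	log.Fatal(http.ListenAndServe(":8090", mux))
}
```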
We're actually undergoing review of that operator right now, expanding it and adding features so we can make it publicly available for everybody. We also want to do signing policy replication; that's a limitation of Notary, and we're working with the community to enable it. Metadata management, performance at scale, we're always going to continue improving those. And then we want to add more and more around observability. On image distribution, we want to enable users to put images on the edge in a much more efficient way, so we're looking into proxy caching capabilities as well as P2P distribution. For P2P, we're working with Kraken and Dragonfly, more with Dragonfly right now, but Kraken in the future as well, to enable P2P distribution in Harbor. Proxy cache, by the way, we actually have working, but it will not be in our next release; it will probably be the release after that. But we were able to get proxy caching capabilities into Harbor already. On the extensibility... Yeah, a question on the image distribution. Once you distribute that image, do you have plans to scan it at the edge, or is it just going to be scanned in a central location and then distributed afterwards? Yeah, it will probably be scanned in the central location. And then for the P2P distribution, one way we could do it is to replicate that image, after it has been scanned and policies enforced, to the hub or core of the P2P system, and then P2P distributes it to wherever you have nodes and agents, depending on how you set up your P2P distribution. But for the resulting image, all we have to enforce is that it's the same image as the one that was in Harbor, because it was already scanned. P2P software usually cannot enforce policy. So that means you maybe need a signing mechanism at the edge? Yeah, that's why we need this signing policy replication, so we can actually persist the signing through the P2P layer. On the extensibility front, we already have webhooks, and we improve and add more webhooks with every release of Harbor. We're going to continue on that path; we're not going to stop. As we add new features, we're also adding webhooks for those features. The interrogation service is kind of the next generation of the pluggable scanner I mentioned earlier: being able to interrogate images and understand everything from licenses to usage of libraries to the ability to do inventory management, and more things down the line as well. And then on cloud native artifact management and the OCI initiative: it's one of the big features of our Harbor 2.0 release that will ship next month, bringing OCI capabilities into Harbor. So now you can store in Harbor not just images and Helm charts, but CNAB bundles, OPA bundles, anything that's OCI compliant. And we're working on creating a list for our end users indicating which artifact types are OCI compliant today and which of them may be in the future. I want to talk a little bit about a couple of customer testimonials. When I met with the CNCF TOC, I brought one of our maintainers, Nathan Lowe, to talk about it. The conversation was recorded, and basically what Nathan said is: my company is using Harbor in production at a very high level, it's an important tool for us, and it was so important that I started contributing code to Harbor, and he was actually promoted to maintainer as well.
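A quick, toy illustration of the "same image, no rescan" argument in the P2P discussion above: because OCI artifacts are content-addressed, an edge node can verify that what it received hashes to the digest that was scanned (and signed) centrally. This is a conceptual sketch only, not Harbor's or any distributor's actual code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// verify checks that the manifest bytes received at the edge hash to the digest
// recorded when the artifact was scanned and signed in the central Harbor.
func verify(manifest []byte, expectedDigest string) bool {
	got := fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
	return got == expectedDigest
}

func main() {
	manifest := []byte(`{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}`)
	// In practice the expected digest would come from the central, already-scanned Harbor.
	expected := fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
	fmt.Println("digest matches scanned artifact:", verify(manifest, expected))
}
```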
So Nathan became an active contributor to Harbor. At Hyland Software they have almost 2,500 tags, 370-plus container images, 2.7 terabytes of storage, 175 Harbor projects, and over a thousand active developers consuming Harbor; so it's being used in production. Another one of our customers, a leading payment systems solutions provider, has geo-distributed Harbor instances in both Austin and Plano, where they build images, push them into Harbor, and use Harbor replication to keep the two instances in sync so that applications can seamlessly move from one Kubernetes cluster to the other. This works beautifully, and they have a fairly sizeable Harbor installation as well. I want to stop here for just a second because I want to show you one more thing. If we go here under our community, we have a pinned issue that asks the community to come and tell us, hey, how do you use Harbor? And as we meet more and more folks, we ask them to tell us how they're using Harbor. I want to show you very quickly three testimonials. The first one is from Newellsoft, where they've been using Harbor for a while now; they have 5.5 terabytes in Harbor with 17 million pull operations, over a thousand unique images, and 300 Helm charts, a critical part of their infrastructure. Another company, Agoda, they're a booking holdings company; they have 8,000 tags in 1,141 repositories and 28 terabytes of storage consumed in Harbor. And one other company, where is it here? All right, I can't find it here; there's one more company that has a significantly sizeable environment in Harbor as well. So every time we go to a conference, every time we have a meetup, we're finding more and more customers that are using Harbor at really high scale in production. So all of those folks in the community are happy, they're using it, and sometimes we just don't get to know about them. Now, for the graduation criteria: we have the TOC PR, we have the project statistics that you can see on the CNCF side, and we have over 25 adopters listed in the adopters file. These are independent end users of sufficient scale and quality that are using Harbor in production. We have the testimonials linked that I just showed you. And we have multiple products that offer commercial and enterprise support for Harbor; the three that VMware has are Enterprise PKS, Essential PKS, and VIC. We have 14 maintainers with a healthy distribution of technical ownership across six entities, and in the due diligence document we have a breakdown of who's the technical lead for each area. That's actually also part of our maintainers document. So if you go back to our maintainers document, one of the things we have there is a list of who's the technical owner for each area. For example, for webhooks, you have Mingming Pei. For the policy engine, you have Nathan Lowe. For security, you have Daniel Pacak. Every single person on our maintainers list is responsible for one critical area of Harbor. Again, lots of contributions, lots of authors, lots of committers, and we have demonstrated a substantial ongoing flow of commit and merge contributions over multiple years. And we have a new release approximately every three months. And that's it. Questions? No, I don't have any questions; it looks pretty solid to me. So I think the next steps will be to just evaluate the due diligence, look at the due diligence and go through the checklist. That'd be great.
Do you have a sense of the timeline? I mean, we've been in the queue for many, many months now, and with KubeCon coming very close, it's really a good opportunity for us to make a splash in the community and to graduate. Yeah. I'm hoping maybe within the next month or so; KubeCon is in March, right? In March, yeah. So hopefully before then, within the next month or a couple of weeks. That's my hope. And then later it will have to go to the TOC for a vote. Yeah, absolutely, and that's our goal. The TOC might say yes or no, but at least the due diligence, where we know everything is in good standing, will be out of the way, either with a thumbs up or a thumbs down. Yeah. And we'll try to get this done as soon as possible, because it still needs to be picked up by, I think, SIG Storage, and there's another thing. Well, they were saying, I mean, the due diligence doesn't need to be done three times is what Amy told me. Quinton asked, let's do SIG Storage and SIG Runtime, but Amy said we don't need to go through every SIG; this would never finish. Okay, so once one SIG takes point, for example, you only need one SIG to confirm that we have good architecture, or that we have good maintainer diversity, or that we have good contributions over the last year. One SIG doing it is enough. Yeah. Okay. Sounds great. Anybody else have any other questions? Okay, thank you all, really appreciate your time. Thank you, Michael. So I think we have another presentation. We have about 26 minutes left. So Klaus, do you want to go through your presentation? Yeah, sure. Okay, can you see my screen? Yes. Okay, that's great. I would like to give an introduction to Volcano. We would like to donate Volcano as a Sandbox project to the CNCF. This is Klaus, and I think Kevin is also on the line. Yes, I'm here. To give some background about Volcano: more and more users would like to run batch workloads on Kubernetes, batch workloads such as AI, machine learning, and big data. And in the CNCF research user group, there are also several people who would like to run HPC workloads on Kubernetes. But when we try to run these kinds of workloads, we found some gaps, some requirements on the Kubernetes side. The first one is scheduling: gang scheduling, fair sharing, things like that. Another one is queue and job management, such as indexed jobs and multiple pod templates. Because big data and AI are also handling data, data management is important as well, and there are some others such as accelerator support. So based on this, we would like to do some enhancements on top of Kubernetes. And last year, on the Kubernetes side, the keynote said Kubernetes is a platform for building other platforms. So we would like to have a batch platform for all of these workloads. That's our motivation for Volcano. This is the overview of Volcano: Volcano is a Kubernetes-native batch system. It's based on CRDs and operators, and it has a batch scheduler to support different workloads.
For the Kubeflow, Spark and HPC workloads, we call these the domain frameworks. They focus on the big data, AI and HPC areas, but all of these workloads have similar requirements such as batch scheduling enhancements, job ordering, and queue management. Another one is data management, and accelerator support such as GPUs. So we would like to have a platform for these workloads: AI, big data and HPC. This is the Volcano we would like to build. There is some history behind Volcano. We used to have kube-arbitrator, an incubator project in Kubernetes, to bring batch capability into Kubernetes. Later we renamed it to kube-batch and focused only on the scheduling part. But then we found that batch scheduling is not enough for HPC and for AI, even for big data; the scheduler alone is not enough, so we also added things such as queue management, data management, and some enhancements for accelerators such as device plugins. So last year we built Volcano and open sourced it early in the year. In the next stage, we would like to make it a CNCF Sandbox project. We have a website, we have Slack, we also have a Google group, and we have a logo. Yes. This is the current community status of Volcano. We opened a sandbox proposal in the CNCF TOC repo, and we have more than 700 stars and more than 70 contributors from Huawei, AWS, JD.com, OpenAI, and Tencent. We have had three or four releases so far, and we plan to have one release every single month. For now, we have five maintainers across three independent companies: three maintainers from Huawei, one maintainer from IBM, and another maintainer from Tencent. We have 12 public adopters so far. That doesn't include everyone; we have had some offline discussions with people who already use Volcano in their production environments but haven't published this yet. The license is Apache 2. These are the major features of Volcano. We have queue and job management for batch workloads, and we also have a command line to help users manage their workloads; it also offers a similar interface for HPC users, such as submitting and stopping jobs, things like that. Another important feature is the scheduling part. For HPC and AI training, we have fair-share scheduling between jobs within a queue and between namespaces across queues. And for the multi-tenant use case, we also support predicates and priorities to schedule the pods, and preemption for the workloads. This is the architecture of Volcano: the green parts belong to Volcano, and the blue parts are Kubernetes components. We introduce two CRDs here. The first one is Job, and the other one is Queue, to run batch workloads. And we introduce controllers to manage the lifecycle of jobs and the lifecycle of queues.
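To make the gang-scheduling requirement mentioned above a bit more concrete, here is a toy sketch of the idea: a job is only admitted when all of its minimum required tasks can be placed at once, so a distributed training job never starts partially. This is a conceptual illustration, not Volcano's scheduler code:

```go
package main

import "fmt"

// Toy illustration of gang scheduling: admit a job only if at least minAvailable
// of its tasks can be placed simultaneously; otherwise hold the whole job.
type node struct{ freeCPU int }

type job struct {
	name         string
	minAvailable int // minimum number of tasks that must run together
	cpuPerTask   int
}

func canGangSchedule(j job, nodes []node) bool {
	placeable := 0
	for _, n := range nodes {
		placeable += n.freeCPU / j.cpuPerTask // how many tasks fit on this node
	}
	return placeable >= j.minAvailable
}

func main() {
	nodes := []node{{freeCPU: 4}, {freeCPU: 4}}
	training := job{name: "tf-training", minAvailable: 4, cpuPerTask: 2}
	if canGangSchedule(training, nodes) {
		fmt.Println("admit all tasks of", training.name)
	} else {
		fmt.Println("hold", training.name, "until the whole gang fits")
	}
}
```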
We also have a batch scheduler to provide the necessary scheduling policies, such as gang scheduling, fair share, things like that. We have device plugins to report device information such as topology and other details. And in our plan, we are going to do some integration with Singularity, another container runtime in the HPC area. We also provide command-line tools which help users manage the lifecycle of jobs, and they also provide a familiar interface for HPC users. For now, we have done several integrations with other communities. For example, we integrated with Kubeflow: we integrated with the TF-operator of the Kubeflow community, which leverages gang scheduling in Volcano for their TensorFlow training jobs. We also integrated with Kubeflow Arena, which helps to manage jobs. We already integrated with the Spark operator to do the scheduling for Spark jobs. We also integrated with Cromwell, another piece of software in HPC, and we integrated with PaddlePaddle, another machine learning framework. All of these integrations have the documentation and the code merged in those communities' repos. This is a short list of our adoption. In Huawei Cloud, we put Volcano into production. JD.com has validated Volcano for their Spark on Kubernetes environment. And the other adopters in our list are all running AI and big data in their environments. For our roadmap, we would like to support more features around resource management and resource scheduling for batch workloads, such as hierarchical queues and plugins for queues, things like that. For the ecosystem, we are going to integrate with Flink, another big data framework which has also become popular recently. We are also going to do more enhancements to integrate with Kubernetes device plugins for accelerators such as GPUs. And we are going to integrate with Alluxio for data caching for big data, so we can provide data-aware scheduling and data locality features to work with them. For the CNCF, I think this will help the domain frameworks, such as big data and AI, to adopt cloud native projects, especially Kubernetes, because more and more people would like to run these kinds of workloads on Kubernetes. And for Volcano, we would like to have a neutral home for the project so we can have more contributors. Yeah, this is the overall introduction for, sorry, for Volcano. So, any questions? I don't have any specific questions. Great. I mean, I'm interested in the integration with Flink, because I used to work with Flink, but yeah, it looks great.
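As a rough illustration of the Job and Queue CRDs and the Volcano scheduler described above, the sketch below builds a minimal Volcano Job out of simplified Go structs and prints it as JSON (roughly what you would otherwise write as YAML). The field names (`minAvailable`, `schedulerName`, `queue`, `tasks`) follow the batch.volcano.sh/v1alpha1 API as I understand it and should be checked against the upstream CRDs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified, illustrative shape of a Volcano Job; not the real API types.
type container struct {
	Name  string `json:"name"`
	Image string `json:"image"`
}

type task struct {
	Name     string `json:"name"`
	Replicas int    `json:"replicas"`
	Template struct {
		Spec struct {
			Containers []container `json:"containers"`
		} `json:"spec"`
	} `json:"template"`
}

type volcanoJob struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Spec struct {
		MinAvailable  int    `json:"minAvailable"`  // gang-scheduling threshold
		SchedulerName string `json:"schedulerName"` // hand the pods to the Volcano scheduler
		Queue         string `json:"queue"`         // which Queue this job consumes from
		Tasks         []task `json:"tasks"`
	} `json:"spec"`
}

func main() {
	j := volcanoJob{APIVersion: "batch.volcano.sh/v1alpha1", Kind: "Job"}
	j.Metadata.Name = "tf-training"
	j.Spec.MinAvailable = 3
	j.Spec.SchedulerName = "volcano"
	j.Spec.Queue = "default"

	worker := task{Name: "worker", Replicas: 3}
	worker.Template.Spec.Containers = []container{{Name: "tf", Image: "tensorflow/tensorflow:latest"}}
	j.Spec.Tasks = []task{worker}

	out, _ := json.MarshalIndent(j, "", "  ")
	fmt.Println(string(out)) // the equivalent YAML would be applied with kubectl
}
```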
So I think the next step will be a recommendation on whether to send this to the TOC for Sandbox. For Sandbox, the requirements are not as high as for graduation, right? I think we have three chairs now, and for me it's an okay, so we'll have to get an okay from another chair. The presentation is recorded, right? So I'll send it over to Quinton and Diane. I don't know if Quinton will be able to answer, but I'll send it over to Diane, and from there we can send it over to the TOC for a vote. Okay. Okay, thank you. Thank you very much. Yeah. Anybody else have any other questions? Okay, sorry, can you speak up a little more? Oh, okay. Okay. Sorry. Thanks. So yeah, I was saying, next week we'll have a presentation on that, and, Tom, I think you're on the line, right? Yes, but it will be Jeff who will be presenting. I'm just here to listen, to see if we're good to go or if you're expecting anything from us before the meeting. No, no, nothing. So hopefully we'll have another chair in the meeting so we can actually decide to vote and send it to the TOC for a vote after the presentation, assuming everything is okay. And as far as the other item on the agenda, we already have a new co-chair, so we have three. And then for tech lead, we still have two spots, but there are two people interested. So the way the voting works, we're going to have a vote with the co-chairs and then send it over to the TOC to approve the new tech leads. This will help us have more people and more critical mass to help out the community. And that's it, that's all we have for today. So, any other items that you guys want to talk about? Okay, so I think that's it. Am I still on mute? Oh, no, you're not on mute. Yeah, no topics for me. All right, thank you very much, guys. Okay, thank you. Thank you. Bye.