And we're good to go. Thanks, Sarah. And hi, everyone. This is the OpenStack Ussuri community meeting. We're very excited to have a lot of very experienced developers here to share what's going on in the new release of the project. We still have people joining, so don't worry if there's a little confusion at the start — everything will be recorded, so if you miss it, there will be links and video for you. The video for the previous meeting has already been posted to the mailing list; if you check it, you'll see the previous session that Mohammed moderated. Hi, I'm Rico Lin, from the OpenStack Technical Committee. We're very excited to say we've built up some momentum through Ussuri, so let's jump into the agenda. And thank you very much to JP from SUSE, who did a very good job as chair; Mohammed is the new TC chair, so we'll be very excited to see what changes come next. You'll see the numbers are pretty crazy: we have about 24,000 code changes in Ussuri, which is a very big number, and more than 1,000 developers from 180 organizations during the Ussuri cycle alone — and that's only half a year. So we have a huge amount of support from community members, and we can't say enough how much we appreciate all the work. Thanks to everyone who contributed, in whatever form. So let's start — we have a lot of things going on in Ussuri. One of them is that some new SIGs were created. If you don't know what a SIG is for: it's a Special Interest Group, meant to be a bridge channel where developers, operators, and users sit down together to figure out how to improve things for a specific use case. You can find out more about what a SIG is in the links below.
The new SIGs include: Testing and Collaboration Tools, Automation, Large Scale, Multi-Arch, Technical Writing, Ansible, i18n, and Containers. So check out those SIGs. We also have a lot of SIGs already running, and running well, like the Security SIG and others. If you're interested in helping a SIG, that's very much appreciated. The Ussuri cycle generated a lot of SIGs, like the ones listed here, and we're working hard to keep those SIGs running and give them meaningful tasks to do — so stay tuned. I just saw a question in the Zoom chat: what is the difference between a SIG and a working group? That's a very good question. The difference is that what we used to call a working group sat under the User Committee's governance; a SIG sits under both the Technical Committee's and the User Committee's governance. We're trying to make sure this kind of cross user-developer bridge is more formal and has both governing bodies' blessing to keep running. You can see the details in the links I put on the slide. OK, next we have community goals. A community goal is something we try to do community-wide every cycle, and we have two goals for Ussuri. The first — thanks to Kendall Nelson, who put a lot of effort into it — is project-specific contributor and PTL documentation. Progress on it is still under way, but it won't take projects much more time to finish, and she has already provided a lot of helpful information to help projects achieve the goal, so I believe it will happen very shortly. The point of this goal is to make sure that contributors, and volunteers for the PTL role, have a clear, project-specific picture of how to get involved.
So you should be able to check those documents for most projects right now, to see how you can contribute to a specific project without getting confused, I hope. The second goal is dropping Python 2.7 support, which — thanks to Ghanshyam — was announced as completed about two weeks ago. We only have Swift and, I think, one other piece still keeping Python 2.7 support, but most projects have already dropped Python 2.7 and declared themselves Python 3 only. This is very helpful for the entire community in adopting new things — so check it out per project, and be aware that since it's Python 3 only now, you usually have to keep that in mind when you upgrade. And we have the Project Teams Gathering happening early next month. Because of the current situation with the coronavirus, we had to move it online, which might also be good news: we already host a lot of things online, so people might be able to join more easily, and I think the ticket is free. So if you plan to help out and join, you don't have to travel this time. Please join us — we have a lot of good projects that could use a hand, and we could use feedback from users, developers, and operators. A lot of the SIGs I just mentioned will be joining the PTG as well, and they definitely need users and operators. We also have to say a big thank you to the sponsors who are actually making the PTG happen; even though we moved it online, they still support us in keeping this community running. So check out the schedule and go register. And if your team missed the registration deadline, you can reach out to ptg@openstack.org, and I think they will help you figure out what to do.
So please join us, and we will have a very exciting week. We also have OpenDev events this time, which are not a summit, and they will also be virtual. The difference is that instead of one big event, they will be split into two or three events, each on a specific set of questions and problems, which we hope will gather people together to share feedback, actually discuss things, and produce outcomes — and those outcomes should help the community keep going. We've done a lot of amazing work in Ussuri, which you'll hear about in the project updates, and what happens in Victoria definitely depends on those events' outcomes and on working toward them. You can register for those events — I think they're free as well, if I'm not wrong — and they're virtual, so it should be easy for you to join too. So please join us and share your opinions and your efforts. Next slide. OK, first we have Cinder. Brian, the Cinder PTL, will share the Cinder project update. I'll hand it over to you, Brian. Hi, thanks. I'm broadcasting from the Cinder World Headquarters in Oxford, Virginia, in the USA. I hope everybody's doing well in this stressful time of a worldwide pandemic. What I'd like to talk about is the project health of Cinder and what we've been doing. Just to give you a quick refresher: the Cinder project provides the OpenStack block storage service — the REST API, the scheduler, the volume service. We also provide some client libraries: os-brick, which you use to attach stuff, and cinderlib, which is a library used by Ember-CSI, so it can be used with the Container Storage Interface. What cinderlib does is let you use the drivers that have been written for the various Cinder backends when you don't need all the other Cinder services, so it makes a lot of sense for lightweight things like containers.
So it's keeping Cinder relevant in this container-oriented age. As far as how the project's doing in terms of commits, you can see the numbers there. In Stein and Train we had roughly 150 people from roughly 40 to 50 companies. The data I've got is from Stackalytics, which was last updated April 22, so I don't think much has changed since. We were able to get everything we wanted done during Ussuri, but we did it with 30 contributors from 13 companies — so that's not so diverse. I didn't mention earlier: I'm a senior software developer at Red Hat, and those percentages on the right marked RH are Red Hat's share — Red Hat did 66% of the commits in Ussuri. I'm only mentioning that because, while we're competent software developers and we're certainly very interested in keeping Cinder running and stable, it's also nice to have a diversity of opinion and people articulating what they need. So if you're a developer, or have developers you can influence who are looking for something to do, they might want to join the Cinder project. All right, as far as content goes: a very important part of Cinder is the backend drivers. Those are software stored in the main Cinder source code repository, but mostly written and supported by vendors who have various storage backends. We've got 68 right now, with seven more in unsupported status. What that means is that for a driver to be considered supported, the vendor has to be operating a third-party CI that runs on all code changes to Cinder, so we can make sure nothing breaks any of the various backends — because we don't have access to that much hardware. That's pretty much the same number we had in Train; the difference is that in Train we had something like 14 drivers that were unsupported, so some of the vendors did make an effort to get their drivers back to supported. There's one security
notice I wanted to bring to people's attention, and I'll paste the link to it in the chat. It was announced on the mailing list December 5th, so several months ago. It only occurs if you're using the Ceph backend with a non-standard configuration, but there's a configuration option that can cause a security problem, and what we're proposing to do is just remove it in the Victoria release. I announced that on the mailing list and never heard anything back from operators, so I'm assuming our plan to just remove it isn't going to cause problems for anyone. But you might want to take a look at it, and if it does cause a problem, let us know right away so we can try to come up with some other kind of plan. As far as new features, I don't think there was anything major, but a lot of drivers added capabilities. For the backend drivers we've got, there's a basic set of functionality that everyone must implement, and then there are optional things people can do to make their driver function better and implement more of the API — so a lot of drivers added more capabilities. All the changes are documented in the Cinder release notes, and I encourage you to go take a look at those. They're kind of extensive, but they give you an idea of everything that happened. I said nothing major, but I guess one thing we did do is add support for Glance multiple stores, and also for Glance image colocation, which are kind of important for edge use cases. And then we're working on stability: we want to keep Cinder as stable as possible, so we added some more voting gate jobs and more testing, and that's going to be an emphasis in Victoria as well. Next slide, please. OK, for the future, we're already underway toward Victoria Milestone 1, which happens maybe two weeks after the PTG — it comes up pretty fast. Some things people are working on: there's volume local cache.
That affects os-brick, Cinder, and also Nova. Work on that started during Ussuri, and it's hopefully going to be completed early in Victoria. We're working on encrypted volumes for NFS, which will hopefully land by Milestone 1. There's a new driver from Hitachi that patches are up for, and they've got their third-party CI running. Some of the existing drivers have already put up patches for new capabilities. And then there's another effort to do on-the-fly encryption of data that's traveling around OpenStack — the GPG encryption support is going to go into the os-brick library — so that's being worked on also. And then the virtual PTG, as Rico mentioned earlier, is June 1st through 5th. It's not too late to participate in the discussion, so if you're interested, please take a look at our etherpad: feel free to add a topic, or look at what topics are there and attend if you have an interest or something you'd like to discuss. Other things we're working on: there's interest in an iSCSI driver for Ceph — a group in Britain whose HPC team has expressed an interest in helping us implement that, so hopefully that will happen. And then we've done some work on keeping unsupported drivers in-tree longer. We decided to do that during this cycle, and it worked out pretty well. The usual strategy has been to kick drivers out as soon as their third-party CI starts failing, because we want to keep the code as solid as possible, but people have had problems getting their third-party CI running again. So in order to prevent code churn, we're keeping the drivers in-tree longer, although they have to be removed as soon as they cause the main Cinder gates to fail. I just wanted to mention that. And then we've got a continued emphasis on stability and improved automated testing.
We're working on adding a lot of Tempest scenario tests to the Cinder Tempest plugin, which can then be run by the vendors in their third-party CIs, so the drivers get a complete workout and we can be more confident of the reliability of the code. And that's pretty much it. Thanks for listening — if you have any questions, you can put them in the chat, or if you stick around you can ask later. Thanks. OK, thanks, Brian. I'll say there are going to be a lot of exciting things for Cinder happening at the PTG in the Victoria cycle. Next we have the Octavia PTL, Michael, to share that update. Thank you, Rico. Yeah, my name is Michael Johnson. I'm a principal software engineer at Red Hat, and I will be the PTL for Octavia in Victoria. So, a few stats on our progress in Ussuri: even though we're a very small team, we got quite a few important things done. I want to say thank you to the team for all their contributions and for working through the challenges we had in this release, and a special thank you to Adam Harwell, the PTL for Ussuri. One of the more interesting things this release is that we mentored four college students — I'll tell you a little more about that — and I also want to thank the Foundation and Kendall Nelson for helping support that effort and working with us and the students to provide some new features. Ussuri highlights: one of the long-requested features is load balancer availability zones. This allows an operator to define an availability zone inside Octavia, and when users deploy a load balancer, they can specify at creation time the availability zone they want it deployed into. For example, for the amphora driver, a load balancer availability zone defines the compute availability zone, the management network that will be used, and the valid VIP networks that a user can attach to their load balancer. So this enables use cases particularly in the edge space.
For example, deploying load balancing services to cellular locations or retail stores — retail was one of the use cases that was brought to us in detail and led to this new feature. Also, on the client, we've added the wait parameter. This helps automation, in that the client will now wait for activities against the API to complete. Just like Neutron's, the Octavia API is an asynchronous API: you make your request, the backend goes and works on it, and while it's in the process of provisioning that change, the load balancer goes into an immutable state. With the wait flag, the client won't return until that immutable status is removed and the load balancer is back in an available status — so it's really great for scripting and automation. Moving on: as I mentioned earlier, we mentored four students. This was a partnership with North Dakota State University here in the United States. We brought the students in, got them up on DevStack, taught them some basics about OpenStack, and got them working on the code. The feature they delivered for Ussuri is TLS ciphers: when you create a listener, or an encrypted backend pool connection, you can now specify the list of acceptable TLS ciphers for that listener or pool connection. This allows you to meet your security compliance requirements: if somebody tries to connect to that load balancer using weaker security than what's defined in the TLS cipher list, the request will be denied, and they'll have to use a higher-security connection to connect to the load balancer. So it's an excellent feature for security compliance. The students also made great progress on adding TLS protocol selection as well, and that will be landing fairly early in the Victoria release cycle — you'll be able to specify, say, TLS 1.3 or 1.2 only for your listeners or your backend pool encryption connections.
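To make the cipher-restriction idea concrete, here's a minimal sketch using Python's standard `ssl` module — this is just an illustration of the concept, not Octavia's actual implementation, and the cipher names are example choices:

```python
import ssl

# Build a server-side TLS context that only accepts a restricted cipher
# list -- conceptually what an Octavia listener does when you configure
# an acceptable-ciphers list on it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384")

# Inspect what the context will actually offer. Note: TLS 1.3 suites are
# configured separately by OpenSSL, so they may also appear here.
enabled = {c["name"] for c in ctx.get_ciphers()}
```

A client offering only something outside this set (e.g. a legacy CBC suite) would fail the handshake, which is the "request will be denied" behavior described above.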
And then finally, but not least by any means, one of the major efforts from the team was working on control plane resiliency. We're leveraging a technology that's part of the OpenStack Oslo Taskflow project, called jobboard. What this allows us to do is checkpoint at various points along the provisioning process of a load balancer: at each step we take to create or deploy that load balancer, we save off the state, and if anything goes wrong with the controller that happens to be executing that provisioning sequence, we can redirect the provisioning request to an alternate controller, and it picks up exactly where the process left off. So this brings not just the resiliency we get from multiple control plane instances, multiple controllers — it goes into the sub-provisioning flows and allows checkpointing and resumption of a failed flow down at that level of detail, so resumption is much faster if something goes wrong. We're releasing that as kind of a technology preview in Ussuri — you can enable it through a configuration setting — and we're hoping to make it the default in the Victoria release. OK, thanks, Michael — a lot of exciting news as well. Next we have Zun; I'll hand over to Hongbin Lu. Hi, yeah, my name is Hongbin Lu, and I'm going to give an update on the Zun project. Possibly most people don't know about this project, because it's a new project, so here's a brief introduction. Zun is an OpenStack container service. It provides a REST API for users to provision and manage containers in an OpenStack cloud, and they can create a container without creating any VMs or clusters, because the container runs directly on the compute host. So a container is treated much like a VM in the cloud, and each container has a Neutron port, so it can connect to Neutron L2 networks.
Those can be the same networks the VMs use, and each container can mount Cinder volumes — users have the option, when creating a container, to configure Cinder volumes as its storage and add data. One more thing: Zun is integrated with Placement. What that means is it's possible to have VMs and containers co-located on the same host, because Placement coordinates the scheduling of VMs and containers. On each compute node there's a zun-compute agent running, and that agent interfaces with Docker to manage the local containers. That means any container runtime that's compatible with Docker can be used with Zun — for example, you can use Docker with Kata Containers under Zun. On top, there are orchestration layers that Zun integrates with. The first one is Heat. Heat is the OpenStack orchestration service, normally used to orchestrate VMs with any other OpenStack resources; now, with the Zun integration, you can have a Heat template that orchestrates Zun containers together with VMs and any other OpenStack resources — so a user can use a Heat template to specify the topology of an application that consists of a bunch of containers. The second orchestration layer Zun integrates with is Kubernetes, and I'm going to talk about that on the next slide. OK — the picture didn't render quite right, but in this release Zun implemented several features. The most important one is the improvement of the Kubernetes integration. This includes support for CRI engines. Why did we introduce CRI engines? Because before, the only container engine we integrated with was Docker.
But in order to support the Kubernetes integration, we need to support the concept of a pod, and Docker doesn't support pods very well. So instead of using Docker, we introduced a second container engine through CRI. A CRI runtime natively supports the pod concept, and it also integrates very well with Kata Containers, so this provides a better Kubernetes integration for us. But in order to support CRI, we needed to introduce networking via CNI, so we introduced a Zun CNI plugin, which basically does the Neutron port binding to configure the pod's network. First, before creating a container, zun-compute calls Neutron to create a Neutron port, and then the Zun CNI plugin does the Neutron port binding for the pod. So that's the Kubernetes integration feature. The second feature is specifying a host when creating containers: this allows users with admin privileges to create containers on a given host, bypassing the scheduler and running the container directly on the specified host. The third feature is floating IP association, which basically allows users to associate a floating IP with a container. And the last feature is support for the Docker entrypoint, so users can create containers with a customized entrypoint. That's everything from my side. OK, thanks, Hongbin, for the very detailed update and the diagram. Next, I will talk about Heat. Hi again, I'm Rico Lin from EasyStack. Next slide, please. Oh, sorry, can we jump to the previous one? Yeah, thank you — sorry, I didn't see that one. So we're very excited to say that, as usual, we have a lot of things going on in Heat. If you're not aware of what Heat is: Heat is the orchestration service — it manages services and resources, and you can even use Heat to deploy resources across multiple OpenStack clouds, which is supported in recent releases.
And honestly, we had 62 contributors helping and 65 reviewers giving their reviews, so we got 273 commits — which is not the biggest number on earth, but it's a lot, and we're thankful for every effort. If you'd like to join us, please do, and if you face any difficulties helping Heat, be sure to send a message to me or the other core reviewers; we're definitely looking forward to helping you onboard. Next slide, please. And lastly, we have some great news. One item: we have a lot of new Octavia resources, since Octavia, as you saw previously, is doing a lot of nice work — so we have nothing to do but keep up with the resources they have, so people can use the new features. We also had a nice effort to rewrite the extra routes resource from Neutron, so we now have a new extra route set resource. Also, there's QoS minimum bandwidth rule support, so you can actually set those QoS rules in Heat. We also have Ironic support now — not every single Ironic API call, but we're still working on the rest of the Ironic resources. Right now, in Heat, you can use the Ironic client and you can create Ironic ports, and I believe that will be helpful in certain Ironic use cases. We're also fighting hard to have resources for directly deploying bare metal servers through Ironic; the patch is there, but there's still some review and updating to do on the patch set. And be aware that we also updated resources to adopt new features — as you saw in Octavia, there's the availability zone, so we updated the Heat resources to reflect that. Besides that, we have a lot of other changes in resources. Please, please read our release notes — we try to make sure every feature, every breaking change, and every upgrade-related issue is noted there.
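To make those resources concrete, here's a hedged sketch of what a template using them might look like — the exact resource type and property names (`OS::Octavia::LoadBalancer`, `OS::Neutron::ExtraRouteSet`, `availability_zone`) and all the values are illustrative placeholders, so check the Heat resource reference for your release before relying on them:

```yaml
heat_template_version: ussuri

resources:
  # Sketch: a load balancer pinned to an availability zone.
  my_lb:
    type: OS::Octavia::LoadBalancer
    properties:
      vip_subnet: my-subnet            # placeholder subnet
      availability_zone: edge-az-1     # placeholder AZ name

  # Sketch of the new extra-route-set resource mentioned above.
  my_routes:
    type: OS::Neutron::ExtraRouteSet
    properties:
      router: my-router                # placeholder router
      routes:
        - destination: 10.0.0.0/24
          nexthop: 192.0.2.1
```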
So if any of these resources concern or interest you, please read our release notes; there are more details there, and you can even find the patches from there. There's one deprecation: since, as mentioned, there's a new extra route set resource for Neutron, we deprecated the old extra route one. Also, as you've already noticed, Python 2.7 support has been removed — we now only support Python 3 — and we removed some resources for APIs that are no longer supported in their services, so we had no choice but to remove them. You're probably already aware of those, since we were about the last ones standing to keep them, so I won't say this will break anything for you. OK, next slide, please. As well, it's exciting to say we also joined the virtual PTG this time. You will see there's an etherpad to fill in any topic you're interested in; we're also looking forward to any operator, user, and developer feedback. The time will be Monday and Wednesday, 13:00 to 17:00 UTC. So please join us — we'll put more information in the virtual PTG etherpad. If you have anything to say — feedback or crazy ideas — share it with us. Also, you can check out the Automation SIG, for those who don't know what it is: the Automation SIG was originally the Auto-scaling SIG and the Self-healing SIG — those two SIGs decided to merge, since there was a lot of overlapping knowledge and a lot of overlapping effort. So if you'd like to join the Automation SIG: we're short of help right now, and we need more people to help us keep this SIG active. We're doing some very exciting things in the community around automation, including creating tests, CIs, and documents. So join the Automation SIG at the virtual PTG if you like. I think that's all I have to share for Heat. If you have any questions or problems, please reach out to me — I'll always be there to help. Thank you.
And next we move to Manila; Goutham, the PTL, will share. Thank you, Rico. Yes, hello, everyone. I'm Goutham Pacha Ravi, a software engineer at Red Hat, and I've had the privilege of being the project team lead for the Manila project for the Ussuri cycle. First off, Manila is the Shared File Systems service, born in OpenStack. It can provide self-service, multi-reader multi-writer file systems to clients over a network, and these clients can be anything — virtual machines, containers, bare metal, you name it. The Ussuri release happens to be the 10th official release for this project, and it's been a fairly busy one at that. I'd especially like to call out the involvement of two Outreachy interns, Soledad and Maari Tamm, who contributed a lot of code to the OpenStack client integration this cycle, along with other things in Manila UI and in the documentation as well. This cycle we continued to work with our Google Summer of Code intern, Robert Vasek, who created the Manila Container Storage Interface driver in the cloud-provider-openstack repository and is now also the lead maintainer for that effort. We committed many improvements this cycle; the full list can be found in the release notes, but I wanted to highlight a few things in this presentation. First off, the latest microversion available in the Ussuri release is 2.55, and with API version 2.53 we added quota control for share replication. A little background: we began working on share replication several releases ago, and last cycle we implemented replication for the hard multi-tenancy mode of share backends; this cycle we continued that effort by introducing quotas for the number and the capacity of share replicas across your shared file system backends. In Ussuri, we also committed several improvements and graduated the CRUD APIs for share groups, share group types, share group extra specs, and share group snapshots.
This means that these APIs are now fully supported for production use, and they'll continue to evolve with the rest of the Manila API via microversioning. We're already thinking of extending these APIs to perform group-based replication in the coming cycles. We've made a few improvements to the scheduler — notably to the capabilities filter. It might appear small, but it actually has a larger impact on several clouds out there that rely on share type specification operations — hopefully a positive impact: we've had feedback on this, and we think it's a positive change. We've also fixed up the provisioned capacity estimation in the scheduler: it's now smarter and won't run when it's not necessary, which provides some optimization and speed. With Ussuri, you now have the ability to clone snapshots across availability zones and storage pools. What this means: Manila has for a long, long time expected snapshot cloning to be instantaneous, and this prevented the evolution of snapshots themselves, mainly because that expectation caused data gravity — snapshots couldn't be cloned across the cloud. So in this cycle we added some new workflows to accommodate more asynchronous creation of snapshot clones, and this will hopefully pave the way for a lot of backends that did not support snapshot cloning because it was slower — especially distributed storage systems like CephFS, where cloning a snapshot is never instantaneous because the data is spread across the Ceph cluster. We hope to work on that in the upcoming releases. Next slide — thank you. We also changed the behavior of share resizing, and this was also based on feedback.
So extensions and shrinking no longer hard-fail when the underlying shared file system is perfectly all right but something else went wrong during the operation — for instance, you've exceeded your quotas, or we detect that there is going to be data loss because you're trying to shrink below consumed space. We used to set an error status on these resources and then bail out, and what that did was disallow any further management or API interaction until you'd ascertained that everything was all right. But then we realized this was actually counterproductive. So now, in situations where we're reasonably sure the shared file system is all right, we reset the share status and alert the user via asynchronous user messages. We've also improved the user messages API to allow querying by timestamps and intervals. We had several share driver improvements as well, including from the NetApp drivers and the ZFS-on-Linux driver, both of which added support for cloning across storage pools and across availability zones; and the Dell EMC Unity driver now supports managing and unmanaging of share servers, shares, and share snapshots. I also wanted to call out a security issue, CVE-2020-9543, a vulnerability that was identified and fixed in this release; the fix was backported all the way to stable/queens. It does affect older versions of Manila, as described in the CVE, so if you're running an older version, you should be applying the patches provided. This cycle, we also added support for the OpenStack client, and you can now perform CRUD operations on shares, access rules, and share types. Of course, there's a lot of work left to be done — more of this is coming in the next cycle.
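As a side note on how clients opt in to the microversioned behavior discussed above: a request selects its API version via a header. Here's a hedged, stdlib-only sketch of building such a request for the time-filtered user messages query; the endpoint URL and the `created_since`/`created_before` parameter names are assumptions for illustration, not verified against a live deployment:

```python
from urllib.parse import urlencode

MANILA_ENDPOINT = "http://controller:8786/v2"  # placeholder endpoint

def user_messages_request(since, before, microversion="2.55"):
    """Build the URL and headers to list user messages in a time window."""
    headers = {
        # Real Manila microversion negotiation header; value is the
        # microversion this request opts in to.
        "X-OpenStack-Manila-API-Version": microversion,
        "X-Auth-Token": "<token>",  # placeholder Keystone token
    }
    # Assumed filter parameter names for the interval query.
    query = urlencode({"created_since": since, "created_before": before})
    return f"{MANILA_ENDPOINT}/messages?{query}", headers

url, headers = user_messages_request("2020-05-01T00:00:00",
                                     "2020-05-02T00:00:00")
```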
And we're hoping that if you're going to be using the Ussuri release, at some point you can upgrade your client and have these new features show up, because the client is going to be backwards compatible at all times. Alongside, we also worked on Manila UI a bit, and it now supports IPv6 access controls and share group capabilities. However, again, there's a ton of work to be done here to catch up with the rest of the evolution of the Manila API, and we're looking for help on this project. So if you're willing to help, please ping me or anyone in the Manila community. Finally, we're excited to work on many, many new things in the Victoria cycle. We're looking to discuss these improvements in the upcoming virtual PTG, so please join us there and influence what's coming. Thank you. Okay, thanks, Goutham. Very excited to see there are a lot of new features to keep Manila stable while still adding exciting new capabilities. And now we move to Magnum, so I will turn over to Feilong. Yeah, thank you, Rico. My name is Feilong Wang. I'm a manager at Catalyst Cloud, which is a cloud computing company based in New Zealand, and I'm serving as the PTL of Magnum. If you are not very familiar with it, Magnum is the container infrastructure service of OpenStack, which can help you deploy a production-level Kubernetes cluster in minutes, just like GKE or EKS. You can also use Magnum to deploy Mesos or Docker Swarm, though those platforms are not very popular nowadays. As for the Ussuri release, I'm very proud of the work we have done, given we have such a small team. For the Ussuri highlights, we mainly focused on keeping Magnum supporting the latest versions of Kubernetes. So now we can support the latest v1.16, v1.17, and v1.18, and all those versions can easily pass the CNCF conformance tests. That means users can have the confidence to migrate workloads from or to other Kubernetes platforms. And we also upgraded Calico.
Calico is one of the CNI network drivers we support, and we upgraded it to the latest stable version, v3.13.1 (the version number on the slide is a typo). We also upgraded Flannel to v0.12, and CoreDNS has been upgraded to v1.6.6. And the Kubernetes dashboard has been upgraded to v2, which is the latest version; I think it was released just two or three weeks ago. Another very exciting feature is that we can now support the Cinder CSI driver, since the in-tree Cinder support has been deprecated. And another very exciting feature is that in Magnum, a user can now do a rolling upgrade, upgrading both the Kubernetes version and the base operating system. The bonus is that there is no downtime for the applications and services running on the Kubernetes cluster. Another good one is that with the latest Fedora CoreOS driver, we support SHA-256 verification to make sure the hyperkube image is the right one, especially when you get the image from a public container registry like Docker Hub. That's pretty much the work we have done in Ussuri. And actually there are some more features very, very close to being merged, which we'll probably cherry-pick back to Ussuri as well. The first one is master load balancer allowed CIDRs: a user can set allowed CIDRs for the master load balancer to control the IP range which can access your Kubernetes API. Another one is rotating the CA certificates. And the Helm v3 work is also happening; it's very close to being merged as well. We have also done quite a lot of improvements to the nodegroup support in Magnum. Yeah, that's pretty much the work I would like to share for this meeting. And if you are interested in Magnum, please just pop into the #openstack-containers IRC channel. Cheers. Okay, thanks, Feilong. Very exciting to see there are a lot of things going on in Magnum.
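The image verification feature mentioned above is, at its core, digest pinning: hash the bytes you pulled and compare against a known SHA-256. A minimal sketch follows; it is illustrative only, since Magnum performs this check inside its driver, not with a function like this:

```python
import hashlib

def image_matches(image_bytes, expected_sha256):
    """Return True if the downloaded image bytes hash to the
    expected SHA-256 hex digest.

    This is the generic idea behind verifying an image pulled from
    a public registry; the function name and interface are made up.
    """
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256
```

If the registry (or anything between you and it) serves tampered content, the digest comparison fails and the image is rejected.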
And since we mentioned Magnum and Kubernetes a lot: as in past cycles, in the Ussuri cycle the bonding between Kubernetes and OpenStack has become stronger. You can now use Kubernetes to manage or operate resources for all of the above services. I'm talking about using Magnum to operate Kubernetes, while Kubernetes can create resources in Manila, in Octavia, and in Cinder as well. Neutron also has a plug-in in the cloud controller manager. And Heat doesn't have a Kubernetes binding of its own, but it's the service helping behind Neutron and Magnum. So there are indeed a lot of exciting features and a lot of exciting cross-project work in this release. Again, thanks, Feilong, for the great work. And now we move to Cyborg, so I hand over to Yumeng. Hi, thank you, Rico. This is Yumeng Bao from the Cyborg team. I will be the PTL for the Victoria release, and shortly after the introduction, I will switch to Chinese. Due to some communication gap between my team and the Foundation, at the very beginning we didn't register for today's meeting. But yesterday afternoon, I was told they still had some seats available for Cyborg. At first I thought we didn't have enough time to prepare for the meeting, but the community staff were very considerate, and they even encouraged me that Mandarin is okay, since most of the participants are Chinese. So I will try to use English as much as possible, but if the features and highlights can't be clearly described in English, I will use Chinese. My apologies for that. So next I will introduce the highlights. Before the highlights, I will give a brief introduction to Cyborg. Cyborg provides a management service for accelerators. With Cyborg, Nova can create a server powered by an accelerator, and the server can use Cyborg to manage it. Cyborg joined the OpenStack Big Tent several releases ago.
After four releases of development, Cyborg has achieved its original design goals. In the Ussuri release, we merged 138 commits, amounting to many thousands of changed lines of code, and all of these commits came from our seven active core contributors, who are from Intel, Huawei, ZTE, Lenovo, NTT, and our newest contributor organization, Inspur. I would like to thank all the contributors: thank you for your hard work, which made Cyborg's Ussuri release real. Of course, we also had contributors from Red Hat, SUSE, and NTT, but some of them are no longer active in Cyborg. That was a brief introduction to Cyborg; now let's talk about some of the work we completed in Ussuri. The most important thing is that we completed the integration with Nova. So far, you can boot a server with Nova and have Cyborg manage the accelerator attached to it. Some server operations are supported now, such as create and delete, reboot, pause and unpause, stop, start, and rescue; not yet supported are live migration and shelve and unshelve. Shelve and unshelve are the operations we're planning to implement next, and we may be able to work on the others as well. In Ussuri, we also had a lot of API changes. On the API side, we deleted the v1 API. This is because back in the Stein release we introduced a new DB data model, and in order to adapt to it, we made certain adaptations to the v2 API in the Ussuri release. At this point the v1 API is no longer supported: we had already deprecated it, and now we have removed it entirely. We also did some finishing work on the v2 API in order to support more driver-backed hardware. What we mainly did was describe two data models, one being the device data model and the other the deployable. In this release, we completed the list operations for them. And the next part is even more important.
We did admin-level work on the disable and enable operations. These are among the most important API operations, allowing operators to directly control whether a device is available. We hope they will be discovered and used. The next important part of the API is the deployable operations: this API now has completed list and show operations, and we deleted the original PATCH interface, because we created a new interface to replace it, so the PATCH interface is no longer needed. For the v2 API, we also now support version 2.0 and microversions, in sync with the OpenStack microversioning mechanism; this allows developers to evolve the API while not affecting existing users. The third major change is to the Cyborg client. The client has been synchronized with the v2 API, and we now also support the OpenStack SDK. So far, all Cyborg operations can be driven through the `openstack accelerator` commands, and the client can be used directly through the plugin. The fourth major change is that we upgraded the Tempest test suite. We didn't have these test capabilities before; now we've added the most basic scenario tests covering the basic operations to the Tempest suite, and in the Victoria release we'll add more tests to improve Cyborg's reliability. Those are the highlights of the Ussuri release. As for Victoria, I'll look for the link of the virtual PTG in June and send it to the group; if you're interested, you're welcome to join our discussion. And that finishes my introduction. Thank you, everyone. Thanks, Rico. Yeah, no problem. And thanks, Yumeng. And also thank you for bringing the diversity here. I'm so excited to see, every time, women leaders here to share.
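Microversion gating like the one described is essentially a bounds check on major.minor pairs. Here is a minimal sketch; the bounds below are hypothetical examples, not Cyborg's actual minimum and maximum versions:

```python
def version_supported(requested, min_version="2.0", max_version="2.1"):
    """Sketch of a microversion gate.

    A request advertising a microversion is accepted only if it falls
    within the [min, max] range the service supports, compared as
    (major, minor) tuples rather than as strings.
    """
    def parse(version):
        major, minor = version.split(".")
        return int(major), int(minor)

    return parse(min_version) <= parse(requested) <= parse(max_version)
```

Comparing tuples rather than raw strings matters: "2.10" sorts before "2.9" as a string but after it as a (2, 10) tuple, which is the behavior microversioning needs.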
We have a lot, but I'm always excited to see more. So now we go into the next section, which is Q&A time. If you have any questions and you can ask them in English, that would be great. If not, we have a lot of speakers who know Mandarin Chinese, so if you'd like to use Chinese, or any other language you feel comfortable with, feel free to do so, and we will do our best to translate. Is there any question? Okay, before any questions pop up, I think I forgot to make two announcements at the beginning of the slides, so I'm going to do it here. First: we are talking about the Ussuri release, but we forgot to thank the great release team, who actually guaranteed the release got out; it was officially released yesterday. Let me share the mailing list link here. So thanks to Sean and his team for the release team's great work; they put a lot of effort into making sure projects were on track with the schedule. The other announcement is that we have a virtual Ussuri release celebration tomorrow. Let me share the mailing list link here as well; you can find more information at openstack.org/ussuri. So if you'd like to join the celebration party, Kendall Nelson is the one gathering the party, and it's virtually online, so please join if your time zone allows you to. Or, if you'd like to host a party in an Asia-friendly time zone, you can also contact Kendall. So before anything else, is there any question about the updates? Oh yes, I'm reading the Zoom messages. You did miss the Magnum update already, but don't worry, we have recordings, and the video will be posted very quickly.
So you can follow the OpenStack mailing list; the meeting videos from the previous meeting are already published, and this one will be out just as quickly. Yeah, no problem. Okay, if we don't have any further questions, let me thank everyone for joining and attending, thank the OpenStack Foundation for making this happen, and thank everyone who contributed. I'll see you all at our virtual events, and I'll see everyone in the Victoria cycle: be there, contribute, review, and send feedback on the mailing list. Thank you, everyone. Thank you.