Hi, thank you for joining today's episode of OpenInfra Live, the first episode of 2022, so happy new year from me to you. OpenInfra Live is a show hosted by the OpenInfra Foundation that airs on Thursdays at 1500 UTC. You can find all of our past episodes in our YouTube playlist, but today we're going to talk about what the global OpenInfra community did in 2021 and what it's looking to do in 2022. First, before we get started and introduce our speakers, I'd like to thank the OpenInfra Foundation members; we can't do this without their support. All of our speakers today are from one of these organizations, and these are the folks who support the foundation and help us build open source communities that write software that runs in production. What I'm really excited about today is that we have four leaders from different open source communities here to talk about the production momentum they gained last year on the development side, some adoption stats, and what's ahead for them in 2022. So today we're going to hear from OpenStack, Kata Containers, StarlingX and Zuul about what they're doing as communities and how you can get involved. Like I mentioned, this show is live, so if you have questions for any of our speakers during the show, please drop them into the chat and we'll try to answer as many as we can, time permitting; we'll also jump into the chat and answer them there if we run out of time. But let's not hesitate any longer, let's get started. The first speaker I'd like to introduce is the chair of the OpenStack Technical Committee, Ghanshyam Mann, who will talk about the progress the OpenStack community made in 2021 and what's ahead for 2022. Welcome, Ghanshyam.

Thanks, Allison. All right, yeah, it's been interesting. We did a lot of things in 2021, and I'll also cover what our focus will be in 2022. I'm Ghanshyam Mann, the OpenStack Technical Committee chair. Let's first talk about the releases and highlights from 2021. We had two releases: OpenStack Wallaby, which was our 23rd on-time release, and OpenStack Xena, our 24th on-time release. The on-time releases keep coming, and that's really great work by our release team. In Wallaby we focused more on security and on cross-project integration with other open source software, and Xena provides more powerful hardware support and integration among the OpenStack projects. Next slide, please. Yeah, so these are the key features. First is improved integration with adjacent open source software: for example, Kolla added support for Prometheus version 2, and Magnum updated its supported versions of Kubernetes and containerd, so it's really great to have up-to-date integration with Kubernetes. Cinder added support for a Ceph backend driver, even Ceph iSCSI; I think they currently support more than 60 drivers. Tacker implemented a lot of features to move towards the ETSI NFV standards. The next area is integration among the OpenStack projects themselves, for example between Nova, Cyborg and Blazar. A lot of integration has happened between OpenStack projects, which gives OpenStack users more stable features when they want to combine different services in their cloud. Then there are advanced hardware support features, and there are a lot of those in Cinder, Ironic, Kolla, Neutron and other projects as well.
And Keystone unified limits: unified limits are a new way to set and enforce quotas for your tenants or projects, and Glance is the first project to implement them. And yes, the long-deprecated Cinder v2 API has been removed, so Cinder now provides the v3 API with microversions, where you can track every improvement through a microversion, which is a great help from an interoperability perspective. These are just a few key features; it doesn't mean they are the only things we did in 2021. We had separate OpenInfra Live episodes for the Wallaby release and the Xena release where many other projects talked about the features they implemented, so the links are there and you can watch those episodes to get the full list of what we did this year. In terms of contribution in 2021, counting both releases, Wallaby plus Xena, it's around 32,000-plus changes, which is a huge number, from around 800 developers and 140 organizations. With that, OpenStack remains one of the most active open source projects; it's been in the top three along with the Linux kernel and Chromium.

In terms of community updates, we had two community-wide goals in 2021. One was to migrate the RBAC policy format from JSON to YAML. This is about RBAC for API access control. We have implemented policy-in-code, which means the defaults are defined inside the code and enforced from there, but if anyone would like to override those access controls, to restrict access or to give more users access to those APIs, they can always do that with a policy file. We supported two formats for the policy file, JSON and YAML. JSON has never been easy to map against our policy-in-code approach, because you cannot comment out rules in JSON, right? What operators usually do is dump all the default rules into the policy file, because we have the tooling for that, comment out everything they do not override, and keep only the rules they want to override. That is very easy in YAML, so we decided to deprecate the JSON format, and maybe we will remove it in the future. The second goal was to migrate from oslo.rootwrap to privsep for running commands with elevated privileges; rootwrap has some performance and security issues, so we are moving to the new library, and this work is still in progress. So those are the two things we focused on in 2021.

In terms of project updates, we had a new project we're very excited about, Skyline, which is a new dashboard for OpenStack and is now an official OpenStack project. Next is Venus, a log management service, which is also a new project. And Tap-as-a-Service has been added back to the Neutron stadium projects. On the project retirement side, we had four projects retired: Qinling, Searchlight, Karbor and neutron-fwaas. Retired doesn't mean they are gone forever; if anyone, or your company, team or friends, wants to maintain one of them, you can always bring it back and raise an application to governance. For example, with neutron-fwaas there is a discussion going on on the openstack-discuss mailing list where a few developers want to maintain it, and the Neutron team is going to reconsider adding it back to the Neutron stadium.
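Earlier in this segment Ghanshyam described the JSON-to-YAML policy migration. For readers who want to see what that looks like in practice, here is a minimal sketch of a YAML policy override file; the rule names and roles are illustrative, not any project's real defaults, and the full defaults can be dumped with the oslo.policy tooling such as oslopolicy-sample-generator.

    # policy.yaml: only the rules you actually override need to stay uncommented.
    # Defaults dumped from policy-in-code can remain as comments, which JSON does not allow.
    #"os_compute_api:servers:show": "rule:admin_or_owner"   # default, left commented out
    "os_compute_api:servers:create": "role:admin"           # illustrative override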
So if you would like to maintain those projects, or if you are using them, feel free to bring them back. Currently we have a total of 52 project teams under OpenStack. In terms of leadership, we held the technical elections, which cover both project leadership and governance, meaning the nine-member Technical Committee. These are open leadership positions: anyone who contributes to OpenStack is eligible to run and can always raise their hand for project leadership or governance leadership. We had two elections, and both went smoothly.

A few other updates: we defined the upstream investment opportunities. These describe where the community needs help, whether in a particular project or a particular area, with all the details on how to get in touch and what the business value is. If you are an individual developer or a company, you can read through them, see where the community needs help, and plan how to support it. The Technical Committee weekly meeting was re-initiated, and it has been very helpful for speeding up the things the Technical Committee does, not just project additions or removals but many other topics as well, in terms of providing technical help to the community. You might also know that the OpenStack IRC network moved to OFTC; if you'd like to know the details of how to join and what settings to use, there is a blog post, the contributor guide and the OpenDev documentation you can search to find out how to move over to OFTC. For the Project Team Gathering, we held two PTGs in 2021, one for Xena and one for Yoga, in April and October. These PTGs are not just OpenStack developers talking among themselves; they go beyond that, with other infrastructure projects and other open source communities joining to discuss their use cases, features and even governance, so they have been very helpful for us. Both were virtual PTGs in 2021.

In terms of cross-community collaboration, we made progress with a few other open source communities. One is with the Kubernetes community leadership: we met with the Kubernetes steering committee on various platforms, we usually have a session with them at the PTG, and we share feedback on governance and sometimes on technical topics as well. We have also come together on OpenInfra Live episodes to discuss things, so that has been a really great collaboration between the two communities. Another item: OpenStack regained the Core Infrastructure Initiative best practices badge, and thanks to Thierry for checking and updating that; it's a really good badge for the OpenStack community. And there is the cross-community integration in features we talked about, for example Magnum updating its Kubernetes and containerd versions; a lot of cross-community integration is going on in various OpenStack projects.

About 2022 and beyond: every project plans its own features, so you can see those in the Yoga PTG discussions, or join the next PTG happening April 3rd or 4th, where each team plans its individual features. Here I've listed the three things we are focusing on as a community as a whole.
RBAC is about making the defaults match operators' needs more closely so that they don't need to override them. We are also introducing system scope, a way to distinguish a persona doing system-level operations from one doing project-level operations, and isolating those. So there is a lot of improvement coming in RBAC, and that will be a big focus in 2022. OSC, the OpenStack client, is our unified CLI; we have been focusing on it for many years and will keep focusing on it next year as well. And the new thing is that we have collected pain points from the projects as well as from operators and have continued brainstorming on those; in the TC we schedule meetings and go through the pain points, and if something comes up that we need to focus on or implement, we will continue with it. So those are our next areas of focus.

Awesome, thanks, Ghanshyam. There's a lot going on in the OpenStack community and across communities. And I think you have another release coming up in the next couple of months, right? Are we on Y, I think? Yeah, it's named Yoga, and it's coming on the 30th of March. Development is going on, so if you'd like to join us or help us, come find us there. Awesome. Well, thank you so much, and we'll bring you back at the end for questions from the audience.

All right, next we're going to hear about Kata Containers. This was the first project that the OpenInfra Foundation started supporting in addition to OpenStack, so I'm really excited to have Peng Tao on from Ant Group to talk about what they've been doing over the last year and what's to come in 2022. Welcome.

Thank you, Allison. This is Peng Tao from Ant Group. Next slide, please. Okay, this is the current architecture of Kata Containers. Kata Containers continues to work smoothly with the Kubernetes ecosystem, and this year we added Kata Monitor, which connects to Prometheus so that we can export Kata metrics for users; that makes Kata Containers work even more smoothly with the Kubernetes ecosystem. As for key features this year, besides Kata Monitor, we also added VFIO pass-through, so if you are looking for extreme performance with Kata, we can now pass VFIO devices directly to the guest and deliver raw device performance to your application. We also added support for a non-root VMM, so in cases where security is a major concern we do not have to expose root privileges even to QEMU. We added watchable mounts, a long-requested feature from our users, so that they can use inotify within the guest. And we added support for TDX and SEV, so Kata is now part of the confidential container ecosystem.

On the community side, this year we delivered a steady 23 releases, and the latest stable release is 2.3.0. We have seen more than 1,400 changes by 95 contributors from 16 organizations, and from what we can see the top five contributing companies include Intel, Red Hat, Ant Group, IBM and Apple. We are also seeing more and more production usage of Kata. As usual, Ant Group has deployed Kata Containers at very large scale in production. This year we saw Red Hat deliver Kata Containers as a vendor product; they just GA'd their Kata Containers sandbox solution in OpenShift. And as usual, IBM is a strong supporter of Kata and is using Kata in their products.
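Peng Tao's point about working smoothly with the Kubernetes ecosystem usually comes down to a RuntimeClass: once Kata is installed on a node, pods opt in by name. A minimal sketch, assuming the handler is registered as "kata" (the name commonly used by the kata-deploy tooling; your installation may differ, and the pod below is purely illustrative):

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata                 # illustrative; must match the containerd/CRI-O handler name
    handler: kata
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: kata-demo            # hypothetical pod, for illustration only
    spec:
      runtimeClassName: kata     # run this pod inside a Kata VM instead of a plain runc container
      containers:
        - name: app
          image: nginx:stable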
We have seen more and more startups using Kata as well. Databricks is using Kata to build their cloud solution for their users. And there's a new startup, Exotanium, that is using Kata to run spot instances on AWS and save a lot of money for their customers. We are collecting more and more usage of Kata, and it will all show up in our latest annual report, which is coming soon. Next year we will focus on these points. First, we will make Kata work even more tightly with the container ecosystem; namely, we will look at how Kata uses CNI and CSI, to make the CNI and CSI components work even better with Kata. Another key feature we will continue working on is Rust-ifying the Kata runtime. Right now the Kata runtime is written in Go and it costs some memory for users, so we are working with the community to rewrite it in Rust, which will save a lot of memory and translate into higher density for our users. Part of the work we did in 2021 was supporting the confidential container use case with Kata Containers, and that work will continue this year as well. With all of the above, we are aiming to release Kata 3.0 at the end of the year, and hopefully we can release it before the Berlin Summit.

Awesome, thanks, Peng Tao, for the update on Kata Containers. It's exciting to see all of the momentum, especially around the confidential computing use case. We did an episode of OpenInfra Live a few months ago where folks from AMD talked about how they're using Kata Containers for that, so it's very interesting and exciting to see that progress. We'll bring you back at the end of the show for questions from the audience. Okay, thank you. Thank you.

All right, next we have James Blair from the Zuul community, a maintainer of the project, to talk about what Zuul has going on and what's coming this year, including, hopefully, a new release. Take it away, James.

Thank you, Allison. As Allison mentioned, I'm the maintainer and the current project lead for the Zuul project. Zuul is a project gating system. It originally started out helping OpenStack test all of its changes before they merged, and it continues to do so, but over the past few years we've really grown the project and it's being adopted by more and more other projects and organizations. So what's been happening with Zuul for the past year: we've been working on a pretty big release that addresses a number of issues around scaling and high availability. We've improved our admin user interface. We've upped our game as far as our drivers for interacting with other systems. And, probably like everybody else out there, we've been doing more and more work to help the system run well in containers. So I'm going to talk about all of that and give you a little bit of our version four and version five roadmap. I think our biggest focus over the past year has been scaling. Zuul has, almost since the beginning, been a very scalable system in that it has a number of distributed components: a web component, executor components that actually run the CI and CD jobs, mergers that just perform Git operations, a log streaming gateway, things like that. All of these components have been scalable almost since the very beginning, so the system can get larger and larger as it runs more jobs, all except for one.
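As a concrete illustration of what James means by a project gating system, here is a minimal sketch of the in-repo Zuul configuration a project might carry; the job name, playbook path and node label are hypothetical, and the pipeline names depend on each tenant's configuration.

    # .zuul.yaml: a job and the pipelines it runs in (illustrative names)
    - job:
        name: example-unit-tests
        parent: base
        description: Run the project's unit tests on every proposed change.
        run: playbooks/unit-tests.yaml
        nodeset:
          nodes:
            - name: test-node
              label: ubuntu-focal

    - project:
        check:              # runs when a change is proposed for review
          jobs:
            - example-unit-tests
        gate:               # must pass again before Zuul merges the change
          jobs:
            - example-unit-tests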
That one exception is the component we call the scheduler, which is responsible for watching all of the events from remote systems and deciding what jobs to run, with what parameters, and so on. Because that's the brains of the system, it's been a bit difficult to scale. We did not start out with it as a scalable component; it was a single point of failure, and as systems have gotten larger and larger that's become more and more of a problem. I've seen Zuul systems where, when you need to restart for an update, it might be down for 10 or 15 minutes. That's not great from a usability perspective. It's not terrible, because it's a CI system and everything's a little bit asynchronous anyway, but still, especially if you've got long-running jobs, you really want to take as little downtime as possible so you can keep everybody moving at a good pace. So we've been focused a lot on that over the past year. What we've done is move all of that in-memory state into ZooKeeper, a distributed storage system, so that we can have multiple schedulers that all see the same state. In fact, all of the components now use this ZooKeeper cluster as their state storage. This allows us to run multiple schedulers at once for fault-tolerance purposes; they also act cooperatively, so we can increase overall throughput by running more schedulers, and it lets us do rolling upgrades and rolling restarts and things like that. So at this point, with these changes in, Zuul has no single point of failure. The system itself relies only on ZooKeeper, and ZooKeeper is itself a fault-tolerant distributed system. We also require a database, but we're several decades into knowing how to run fault-tolerant databases at this point. So things are looking really good from both a scaling and a high-availability standpoint.

We've made a number of user-visible improvements as well. The most notable is probably the admin user interface. Zuul has had a web interface for a while that's basically been read-only: it lets you see the jobs in the system, see what they're running, see the logs they've produced, that sort of thing. But any interaction with the system has always been primarily through the code review system, Gerrit or GitHub or the like, and secondarily there are some admin commands you can run if you have root on the Zuul scheduler or something like that. More recently we started adding a REST API, so there's actually a zuul-client command now, but you need to get an admin token for it, so it's still not terribly convenient for end users. Our recent changes, however, have added the ability to log in to the web interface with a number of pluggable OpenID Connect based systems. So if users have accounts with administration access, they can click a button to log in, and once they're logged in there are buttons on the interface to dequeue changes, re-enqueue changes, promote changes to the top of the queue and auto-hold nodes.
The auto-hold feature is particularly interesting because, if you see that a change is failing repeatedly and you don't understand why, or you can't reproduce it locally, you can log into the system, tell it to auto-hold the next failing job for that change, and get access to that node right away. So it's a big help in debugging. We've added some other usability improvements too. There's now a visual build timeline: once all of the builds for a change are complete, you can look at a sort of Gantt chart of the builds and see which build depended on which other build and what took the longest, so you know where to focus your effort on improving build run times. And finally, we've done a little bit of UI modernization as well, so the interface looks a little better and is a little easier to use and more consistent. We've put a lot of effort into the user experience here.

On the backend, Zuul is nothing if it doesn't connect to remote systems, whether that's code review systems like Gerrit, GitHub, et cetera, or cloud systems like OpenStack, AWS and others, and we've focused some effort in this area as well. We've added a new driver for Azure, so Zuul can now run jobs on Azure VMs and other Azure resources. We've improved the support for both GitHub and GitLab, so those drivers support some of the newer features now. And we're working on an IBM Cloud driver that's mostly written and in review, so I expect that to land really soon now. The really interesting thing about all of these drivers is that this is not an either/or situation: Zuul supports combining changes from, say, Gerrit and GitHub, or GitHub and GitLab, or any of the drivers it supports, and testing them together. Similarly, all the clouds we support can be used at the same time. So, going back to the theme of scalability and fault tolerance, you can have a CI system that is fault tolerant at the cloud level: you can lose an entire cloud and the system stays up and running.

And then, as I said, like just about everybody else out there, we are spending a lot of time on containers. We've been working on running Zuul itself in containers. The upstream project builds container images for every release and every commit, and I think at this point quite a large number, perhaps the majority, of Zuul installations are running from container builds. That's the way we run Zuul ourselves, and I highly encourage it for anybody else. Whether you run those containers in Kubernetes or some other system, we're not terribly opinionated about, but I personally would recommend running in containers because Zuul has a lot of dependencies on system tooling, and it's great to have a consistent environment across all of the different Zuul installations out there; it really simplifies things a lot. We do have a Zuul operator for Kubernetes and OpenShift, and we've improved that a lot over the past year. So if you want to deploy a system in a declarative manner, that's a great way of doing so: you install the operator and then create a Zuul custom resource that specifies how many executors you want, that sort of thing, and the system gets spun up. We've also added Prometheus support, so there's an initial set of statistics that we're emitting via Prometheus.
Right now we're focused on getting some basic performance metrics and liveness and readiness checks up and running, that sort of thing. Again, these are all in service of running in that containerized, possibly Kubernetes, environment. And finally I'll talk about our roadmap, and I'm talking about it at the end because how we've been doing this is, I think, pretty interesting. Zuul doesn't do time-based releases; we release when we achieve certain levels of features. Since we knew the effort to make Zuul multi-scheduler was going to be large, we didn't want to do it off on the side and then dump it on everybody at the end; we wanted to bring the entire community along with us as we made this change. So we released version four to signal the start of this effort. We made a couple of breaking changes in version four that said: okay, operators, invest a little bit of time here, prepare your system for what we know is coming, and do that when you upgrade to version four. Then, as we moved closer and closer to making Zuul multi-scheduler, we kept making releases throughout that process, and all of those releases you should be able to upgrade to without making any major deployment changes. And then, before you know it, someday you'll just be able to run a second scheduler. That's actually the point we're at now. We've released something like 17 versions of Zuul since version four, and at this point you can actually run multiple schedulers in production. We're still kind of putting a bow on it, making sure it's stable and looking for places where we might need a little more performance improvement, that sort of thing, but it is running with multiple schedulers in production now. So we are, I would say, near as makes no difference to Zuul version five. We will very shortly, hopefully within the next few weeks, be releasing version five itself, which will signify that we're confident we're at the end of this road and all of our features are in place and ready for use. So that's what we've been up to for the past year and what we're looking forward to in the next year.

Awesome, thanks, Jim. I know Thierry Carrez had dropped a comment asking what features would be coming in v5, but you actually covered that. One of the things I think is really interesting about Zuul is that a lot of your production users, like BMW and Volvo and Leboncoin, are actually upstream contributors as well. So with this long list of features, which ones have been particularly user driven, based on their experience with Zuul itself?

I think actually all of them. To focus on the headline feature, the multi-scheduler, HA, fault tolerance and scalability work, that's really been a community effort, but BMW has been pushing hard on that because they really want to drive the scalability aspect. They're doing a lot with Zuul and they want to be able to keep doing more with Zuul, so they're really pushing the envelope there.

Awesome, it's great to see that they have that opportunity since they're involved with the upstream process. Thierry did want to get a question in, though, and he asked a new one: Zuul started with OpenStack,
so that's where the original requirements came from, but which of the new features are coming directly from a different kind of user? I know you mentioned BMW, which is also an OpenStack user, but are there requirements coming specifically from non-OpenStack users?

Yes. You mentioned Volvo as well earlier; they've been driving a lot of the work on Azure and on running tests on Windows nodes. I didn't mention it in the slides here, but that is something we've worked on over the past year as well: we've been improving support for running jobs on Windows. Volvo has been a big driver of that, and it's coming directly from their use cases.

Awesome, and being able to say that BMW and Volvo are driving the requirements, to squeeze in a pun, is always a nice benefit as well. I'm not going to apologize for that. Well, thank you, James. There were a lot of great updates, and it's exciting to see Zuul v5 coming out very soon. We'll bring you back shortly for questions from the audience. All right, thank you.

All right, our last presenter today is Greg Waines from the StarlingX Technical Steering Committee. He's going to be talking about what that community has been up to last year and heading into 2022. Take it away, Greg.

Great, thanks, Allison. The first slide I have is really just a refresher on what StarlingX is, because we occasionally get that question in these meetings. StarlingX, as an overview, is a fully managed, fully integrated, ready-to-deploy Kubernetes, or Kubernetes plus containerized OpenStack, solution on bare metal. The key capability StarlingX provides is infrastructure management of the bare metal hardware underneath, as well as of all the open source software packages we run in the solution, with Kubernetes and OpenStack being the biggest of those. The StarlingX infrastructure management is all about making the system easy to install, easy to manage, highly scalable, highly performant and highly available. The other key feature in StarlingX is our ability to scale from a single-server solution up to a geographically distributed system of multiple edge clouds. That's what's shown in the bottom right: we have a distributed cloud capability with a central cloud that manages multiple sub-clouds at the edges of the network, where those sub-clouds are standalone clouds running either Kubernetes or OpenStack. The idea is that StarlingX infrastructure management has been extended to provide orchestration of the infrastructure across all of these sub-clouds, to make managing many sub-clouds easy. We've actually been very successful in the last couple of years with users in the 5G telecom area taking advantage of StarlingX, and specifically of the distributed cloud architecture in StarlingX.

So what did we do in 2021? StarlingX has two releases a year, and I've listed some of the key features. In the first half of the year we did some enhancements to our storage: we supported ReadWriteMany for our Ceph solution, which allows shareable PVCs between containerized workloads, and we added an option for using Rook instead of our host-based, uncontainerized Ceph solution. So that was a nice add.
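For readers less familiar with the Kubernetes side, ReadWriteMany is simply an access mode on a PersistentVolumeClaim. A minimal sketch of the kind of shareable claim Greg describes; the claim name and the storage class are hypothetical and depend on how the Ceph provisioner is configured.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data             # illustrative name
    spec:
      accessModes:
        - ReadWriteMany             # several pods, even on different nodes, can mount this volume
      resources:
        requests:
          storage: 10Gi
      storageClassName: cephfs      # hypothetical class backed by the platform's Ceph filesystem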
In security, we did some integration of other open source containerized projects. We integrated HashiCorp Vault, which provides secret management, so if there are containerized applications that already use HashiCorp Vault, it's easier for them to port to StarlingX. We leveraged IBM's Portieris, which does container image signature validation, just to ensure that the images you pull on pod creation are authentic. We also added v3 for SNMP; we previously supported SNMP v2c, and v3 is definitely a more secure solution. We integrated Metrics Server, which gives us auto-scaling for containerized workloads. Edge worker nodes were a nice addition: they basically allow a non-StarlingX node to join the StarlingX Kubernetes cluster. It's a less managed node, because it really only requires Kubernetes, but the advantage is that it can be a very small footprint node, since Kubernetes can basically run on a Raspberry Pi. So you could add a Raspberry Pi into the cluster, and that node gets the advantages of whatever services are on the StarlingX Kubernetes cluster. As far as distributed cloud, which as I mentioned is a key feature, we completed support for some of the different deployment configs for our sub-clouds, the all-in-one duplex as well as the standard multi-node sub-cloud; we fully support those as sub-clouds now. Our key users for distributed cloud are actually focused on all-in-one simplex sub-clouds, and we improved the scaling there: I think it used to be only 50, and we went up to 200. And you can see in the second half of the year, in the bottom right, we increased the scaling up to 800 sub-clouds, and we're continuing scaling work on that.

The other things we did in the second half of the year: StarlingX includes a built kernel, based on CentOS today, and we upgraded the kernel in that CentOS solution; I think we were on a very old 3.x Linux version, and we upversioned it to 5.10. We also updated Kubernetes from, I think, 1.18 up to 1.21; we want to keep pace with Kubernetes upversions regularly, since they upversion three times a year. For storage, we upversioned our Ceph solution from Mimic to Nautilus. We did a lot of work on certificates and security in the second half of the year. We leveraged cert-manager, another open source containerized project, for the platform certificates, the StarlingX certificates, basically making auto-renewals easier. We added alarming for certificate expiry, because there are a lot of certificates in the Kubernetes solution, and all of the payloads can have certificates as well. We added a mechanism to update the Kubernetes root CA, which is a tricky one because that's the trusted CA at the top of Kubernetes; upstream Kubernetes provides a procedure for that, but it's quite complicated, so we provided some wrappers around it to make it much easier, and we even orchestrated it in our distributed cloud solution. The other distributed cloud thing we did was supporting simplex sub-cloud deployment configs being migrated to duplex without having to do a full reinstall or anything; there was some work we had to do there. And because of the increased scaling in distributed cloud, we also added the ability to re-home sub-clouds to different central clouds.
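Greg mentions cert-manager for auto-renewing platform certificates. As a rough illustration of that pattern (the names, namespace and issuer here are hypothetical, not StarlingX's actual manifests), a cert-manager Certificate resource lets the controller rotate the backing secret before it expires:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: platform-https-cert       # illustrative name
      namespace: deployment
    spec:
      secretName: platform-https-tls  # TLS secret that cert-manager keeps up to date
      duration: 2160h                 # 90 days
      renewBefore: 360h               # renew 15 days before expiry, so rotation is automatic
      dnsNames:
        - controller.example.local    # hypothetical endpoint
      issuerRef:
        name: platform-ca-issuer      # hypothetical Issuer/ClusterIssuer configured elsewhere
        kind: ClusterIssuer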
That's basically because some of our users in the field right now have, I don't know, on the order of four or five thousand sub-clouds, and they're deploying hundreds a month. Because we now provide better scalability, they used to have multiple distributed cloud systems and are now merging them together using sub-cloud re-homing.

The next slide is for 2022. A lot of this is still being defined, so none of it is confirmed yet; we talked about some of it at our PTG. A big feature for the first half of the year is, as everybody knows, that CentOS is no longer being provided as the open source rebuild of RHEL, and StarlingX is based on CentOS for its kernel, so we are moving to Debian. There's a lot of work involved in that. We're obviously updating the installer as part of the move to Debian, but we're also leveraging OSTree in an attempt to provide faster and more in-service updates when we do upgrades of StarlingX. Some of the other things on the list to consider: as I said, we want to update Kubernetes regularly, so we're going to move to 1.23. There is some Kubernetes configuration that we manage as part of the infrastructure management of Kubernetes, and we want to enhance that. We leverage Armada in StarlingX, which comes from Airship, and Armada is being deprecated; it got a bit of new life recently with Helm v3 support, but we're following Airship's lead in moving to Flux in order to support complex applications that have multiple Helm charts for the overall application. Containerized OpenStack is the biggest example of that: it has basically a Helm chart for every OpenStack service. Some other things: we're regularly doing work on PTP enhancements for 5G support, and this round is to bring in 5G synchronization support. We're going to integrate Istio for service mesh capabilities. On the security side, we're providing some value-add around multi-tenancy; Kubernetes has a lot of bits and pieces of multi-tenancy, and we're providing a playbook and a documentation update for that. The other key security item is that a lot of customers want even the Kubernetes certificates, all their certificates on the solution, under the same external root CA, so we're looking to support an intermediate CA at the top of all the Kubernetes certificates. As for the second half of the year, there will obviously be many more items, but one thing we're looking at is mandatory access control, since we have some users looking for that; in the past that was going to be SELinux with CentOS, but it will be AppArmor now that we're moving to Debian.

The last slide I had was really just to talk about the community in StarlingX. I guess maybe I shouldn't say problem; our biggest challenge in the StarlingX community is really our diversity. Since the beginning, the StarlingX community has consisted primarily of Wind River and Intel. Intel's contribution has dwindled a bit, but we're reviving that in some areas, so we're at least trying to maintain it. But we want to bring many more companies into the fold, to leverage the different use cases those users would bring as well as their contributions. We discussed this in both the PTGs this year.
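Coming back to the Flux point Greg made above: under Flux, each Helm chart of an application is described declaratively and a controller reconciles it, which is what makes multi-chart applications such as containerized OpenStack manageable. A rough sketch under assumed names; the repository, chart and values below are hypothetical and are not StarlingX manifests.

    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: HelmRepository
    metadata:
      name: example-charts            # hypothetical chart source
      namespace: flux-system
    spec:
      interval: 10m
      url: https://charts.example.org
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: example-service           # one release per chart; a large app carries many of these
      namespace: flux-system
    spec:
      interval: 10m
      chart:
        spec:
          chart: example-service
          sourceRef:
            kind: HelmRepository
            name: example-charts
      values:
        replicaCount: 2               # illustrative override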
I think one of our strategies is that we first want to attract more diverse users of StarlingX and then work with those new users to turn them into contributors. We identified a number of actions around how to attract new users and how to bring those users to a position where they can be contributors. So there are a number of things we're looking at: improving our website, writing technical blogs, making it easier to kick the tires, that sort of thing. On the contribution and new-user-support side, frankly we have a bad reputation for poor presence and response on our mailing list and on IRC at OFTC, so we'll make a conscious effort to improve that, as well as a conscious effort, when we're working with new users, to understand their use cases, with the intent of suggesting improvements they could make in StarlingX for their use case and turning them into contributors. So that, again, is our key focus for the community area in the coming year.

Awesome, thanks, Greg. There's a lot of great work going into the StarlingX community, and I think that last point you made about a conscious effort to maintain the mailing list and the OFTC channels is really challenging. What are some of the things y'all are hoping to improve on there this year when you're looking at new contributors? Is it getting more folks active, or is it just decreasing response time?

Yeah, I think what we've seen in the past is that a lot of the existing contributors from Wind River and Intel, the experienced ones, have their heads down working on their contributions and are not supporting the mailing list. So we're looking at explicitly assigning that on a rotating basis: okay, you manage the mailing list this week, you managed it last week. We're going to try different techniques like that to see which work and to get better support in there.

Awesome. Well, thank you. There are a lot of great things coming out of y'all's communities. Now I want to bring back all of our speakers from today, so I think we want to stop sharing the slides. There we go. And then if everyone wants to come back on video. There we are. We didn't get any additional questions from the audience, but I personally had one. We're two years now into a global pandemic that has really affected in-person events as well as how we collaborate synchronously, and while we do have async methods like mailing lists and OFTC, I wanted to see what kind of impact this has had on y'all's specific communities and what you've done to try to overcome some of those challenges. So maybe, Ghanshyam, I'll start with you from an OpenStack perspective.

Yeah, sure. If we try to quantify the impact on contribution and community development, it's difficult. If you look at the number of changes or features implemented in OpenStack from before the pandemic versus during the pandemic, it has decreased by around 20 percent, but there are a lot of factors in that. The pandemic is obviously one; companies focusing on other things is another; and OpenStack becoming more stable and mature is also a factor, right? But we have seen a few of the smaller projects, where only a few contributors maintain them,
and those have obviously seen a decrease in contributors, while the bigger projects like Nova, Ironic, Cinder and Glance have kept going much the same way. And because our development happens on computers, it's a remote job; we don't need separate infrastructure or to physically go somewhere to do our work, and that has been very helpful even during the pandemic. Overall I'd say it's going smoothly and hasn't been impacted much, but the one thing that has been impacted is our face-to-face meetups and discussions. We obviously do those as virtual events now, but there's a human touch when you meet face-to-face with the people you've been working with over the years and have those discussions in person, so that has been impacted to some extent, and those are important things. To overcome that, we meet virtually more and more, not just at the PTG or the OpenInfra Summit but also in regular meetings and ad hoc discussions; for example, the pain point discussion we're doing on a video call so that we can interact in a speedier and more productive way. So those are the things we're trying to improve.

Interesting. And Peng Tao, how about you? Has the pandemic had any kind of impact on the Kata Containers community?

Well, I would say not so much. Last year we delivered very steady releases throughout the year, the same as the year before, so the impact is not that much. But we do certainly miss meeting the community folks, and we hope the pandemic ends sooner rather than later.

Yeah, me too. James, how about you for Zuul? Have you all seen any impacts?

I don't know about you, but I'm starting to run out of t-shirts; I haven't gotten a new t-shirt in years. But I think probably all of our communities were set up pretty well for distributed work before all of this started. So while we've all seen major changes to our lives, it's maybe been a speed bump in terms of development, but it hasn't been a major impediment. I'm glad that we were in that position before the pandemic happened, and I'm glad that people have been able to see the benefit of remote work and online collaboration during this time. I'm hoping that sticks around even after the pandemic, because that's really the key to building communities that work across organizations. The other thing I've noticed is that, with more people working from home, I've been seeing people interact a little more off hours, that sort of thing. Maybe that's a good thing, maybe it's not, depending on what your culture is and what your own work-life balance goals are, but it is a change. I think as long as you keep an eye on that and don't overcommit yourself, it can be a healthy thing.

Yeah, it's really interesting, and I agree, we've had this really good foundation in place. I mean, just on this episode we have people from Canada and China and the United States, and we have this way of collaborating whether you're at one organization or another, in one country or another, so hopefully some of these benefits will carry on through the pandemic. But yeah, the line between when you are online and not online has started to blur a little bit, so hopefully that's something we can all personally work on. Greg, before we wrap up, is there anything else that you think has impacted the StarlingX community over the past couple of years?
Yeah, I'd just echo what everybody else has said. For one, we're super lucky to be in an industry where we can work online at almost 99 percent efficiency. But I do agree about the face-to-face things like the PTG and the Summits; I definitely miss those, because face-to-face collaboration makes a big difference if you're not directly working with someone a lot, and it's really the discussions outside of meetings, at a coffee break or at dinner, that are valuable. The other thing the Summits and PTGs help with is that you're totally isolated: you're completely away from your day job and thinking entirely about your open source activity. So I definitely miss that.

Yeah, I miss that as well, and that actually bridges well into the next thing I want to talk about. We have run out of time, but first I wanted to thank y'all. I know that y'all are all leaders in the technical communities for your projects, so your time is extremely limited and valuable; thanks for coming on to provide these updates on what y'all have been working on and what is coming from your communities. I also wanted to thank y'all because y'all work for OpenInfra Foundation members: Wind River, Ant Group, NEC, and newly Acme Gating, which has just joined as an OpenInfra Foundation silver member. So thank you all for your support and your continued work to build communities through open source software that runs in production. But to Greg's point, we actually do have a summit coming up in person that I'm really excited to talk about: from June 7th through 9th we are going to have the Summit in Berlin. Registration and sponsorship are now open, and personally I'm very excited. Next week the CFP opens, so if you have a talk around any of these open source projects or other infrastructure projects that you'd like to share with the global community, please submit those topics. And if you'd like to help curate the content for the Summit, we are accepting programming committee nominations right now. So bookmark openinfra.dev/summit; it has a lot of great information about registration, sponsorship and the CFP that opens next week. In addition to the CFP opening, we want to provide more context on what makes a good submission. So in two weeks, on January 20th, yes, the 20th, my math has been off apparently, we're going to have some of those programming committee members come on and talk about what kinds of submissions make a good Summit talk. We have the tracks here; one new one to note is hardware enablement. So if you've never presented at a Summit before, or, like most of us, it's been a while since you've been to a conference, join us on OpenInfra Live on January 20th to hear what makes a good submission and how you can increase your likelihood of presenting at the Summit. That's all from us today. Thank you for tuning in from wherever you're streaming, and have a great time with OpenInfra.