Okay, fantastic, it seems folks can see my screen. So welcome, welcome to the OpenStack community meeting. Still struggling, okay. So we're gonna go through some of the updates from the Rocky release, which just came out approximately five minutes ago. So a thank you to everyone who contributed to it, and a big congratulations to all the contributors and project teams across the OpenStack ecosystem as well. We're gonna hear some pilot project updates, so the new pilot projects that are now living under OSF governance: Airship, Kata Containers, StarlingX, and one many OpenStackers are familiar with, Zuul. And then we'll get a few updates about the Berlin Summit.

So starting with the Rocky release. We're gonna go through some of these slides at a pretty high level. If you're reading every post on the openstack-dev mailing list, you're probably familiar with these features. In the blue is a very high-level category: is this feature about manageability? Is it about security? And the red is how it might be used. It might be a feature that speaks more to folks who are thinking about NFV, or it might be something for folks thinking about edge, just to guide you through these different features. And actually, with this, I am going to turn it over to Julia Kreger to talk about what's going on with Ironic. So the joy of having so many attendees is that it's difficult to find Julia in the list to promote her as a panelist. Hey, Allison, are you able to promote Julia for me? There we go, thank you. Take it away, Julia.

Thank you, can everyone hear me? Yes, go for it. Awesome, okay. So this past cycle in Ironic, we've spent a lot of time building foundations that we will build more things upon as we go, and mainly making operators' lives easier: more scalability features, things that operators have really been asking for, because we really took the feedback we've gotten over the last two cycles to heart and tried to direct what we were doing toward that. One of the things we implemented this past cycle was a ramdisk deployment interface, the idea being to support scientific and large-scale ephemeral workloads: the ability to very quickly spin up a machine that boots to a ramdisk, so that way we're not spending the whole time writing an image to disk on that machine. If we can go to the next slide.

We also added the ability to manage BIOS settings remotely. So this allows you to enable virtualization or hyper-threading or SR-IOV or DPDK, or any other BIOS setting that the hardware vendor exposes through the interface to change. And from the vendors that we're seeing, we're seeing 30, 40 settings that can be changed. So it seems like this is gonna be very useful for operators in the future, especially so that they don't have to go one by one through all their servers to verify their BIOS settings and change them. We've also added functionality to recover machines from power faults, which is another operator headache. It was really a minor thing, but it has a huge impact on how operators experience and use Ironic. Thank you.
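To make that BIOS-settings feature a little more concrete: Ironic caches the settings it reads from the hardware and exposes them over its REST API. Here is a minimal sketch using Python's requests library; the endpoint, token, and node UUID are placeholders, and 1.40 is our assumption for the minimum API microversion that exposes BIOS settings, so check the docs for your release.

```python
import requests

# Illustrative values -- endpoint, token, and node UUID are placeholders.
IRONIC_URL = "http://controller:6385"
TOKEN = "<keystone-token>"
NODE = "<node-uuid>"

resp = requests.get(
    f"{IRONIC_URL}/v1/nodes/{NODE}/bios",
    headers={
        "X-Auth-Token": TOKEN,
        # BIOS settings arrived with the Rocky-era API; 1.40 is our
        # assumption for the minimum microversion required.
        "X-OpenStack-Ironic-API-Version": "1.40",
    },
)
resp.raise_for_status()

# Each entry is a name/value pair cached from the vendor's BIOS interface.
for setting in resp.json().get("bios", []):
    print(setting["name"], "=", setting["value"])
```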
Thank you, thank you for the updates. And now we're gonna talk a little bit about some of the changes that have come to the upgrade experience. So we're gonna toss the floor to Alex Schultz, PTL of TripleO, to run through what TripleO has been up to with fast forward upgrades.

Hi, everyone. My name is Alex Schultz. I was the PTL for TripleO during the Queens and Rocky cycles. Today I'm gonna give you a brief update on what we've been doing over the last couple of cycles. If you're not familiar with TripleO, we're a project to install, upgrade and operate OpenStack clouds using OpenStack APIs, including Heat, Ironic, Keystone, Neutron, Nova. More recently, we've worked on containerizing our services, and this includes pulling in the Kolla project. We provide tooling for planning, deployment and day-two operations. We support advanced network topologies, allowing operators and deployers to configure their services if necessary.

In addition, we have support for updates and upgrades and scaling up your cloud. Most recently that has included the fast forward upgrade. This was primarily targeted in Queens, but we continued to work on the effort in Rocky. Our target was going from Newton to Queens, with traditional upgrades supported for Queens to Rocky, and that will be supported for Rocky to Stein. We are going to continue this fast forward upgrade path; we're assuming our next one will be from Queens to whatever the T cycle release is. Our fast forward upgrades are primarily driven via Ansible under the covers. With this, we pulled in a feature called config-download, which allows operators and deployers to describe their cloud, download a set of Ansible playbooks, and actually execute the deployment. This includes external deployment tasks for things like Ceph or OpenShift, and it's completely integrated with the UI. All of these things will continue to be supported and work as part of our upgrade process.

Additionally, as part of Rocky, we've included an all-in-one installer, primarily for proof of concepts or development needs, but it allows you to deploy a single-node OpenStack cloud without requiring any additional management. This is a very powerful tool for people who just want to try out OpenStack. As I previously mentioned, we've moved to containerization of OpenStack services. Prior to this, our management node was not containerized; our upgrade process from Queens to Rocky includes the containerization of our management node. With containers come more things to configure, so we've improved our container image prepare workflow to allow the tooling to automatically figure out what versions of containers you are running and what you need to update and upgrade to. Additionally, we've added support for scaling the overcloud networks and more advanced networking capabilities via routed networks. We've also added the functionality to split the RPC and notification backends, improving scaling, and folks who want the latest features available can use our upgrade path and get those. We've improved our upgrade backup and restore process. We've added Designate support and Ceph BlueStore support, and we continue to add additional multi-arch support. So as you follow the TripleO upgrade path, you get all of these features. Thank you very much. Thank you.
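A quick illustration of the config-download workflow Alex described: the deployment is rendered as ordinary Ansible playbooks that you can inspect and run yourself. Here's a rough sketch driven from Python; the directory, inventory, and playbook names are illustrative, and the exact CLI flags may differ between releases, so verify against your release's TripleO docs.

```python
import subprocess

# Render the overcloud deployment as plain Ansible playbooks.
# Command and flags reflect the Queens/Rocky-era tooling as we understand
# it; verify the exact invocation against your release's documentation.
subprocess.run(
    ["openstack", "overcloud", "config", "download",
     "--config-dir", "/home/stack/config-download"],
    check=True,
)

# Then execute the generated playbooks like any other Ansible content.
# The inventory and playbook names here are illustrative.
subprocess.run(
    ["ansible-playbook",
     "-i", "/home/stack/config-download/inventory.yaml",
     "/home/stack/config-download/deploy_steps_playbook.yaml"],
    check=True,
)
```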
And the next person we're gonna hear from is Mohammed from VEXXHOST, who's also involved in OpenStack-Ansible, and he's gonna talk a little bit about how they're already offering Rocky today, on day zero. So we'll toss the floor to you, Mohammed.

Hi, everyone. Thank you, Anne. Again, I'd like to congratulate the OpenStack community for another successful release. It's awesome to see the whole community always work together and get all these cool things done. Brief introduction: I've been a long-time contributor to OpenStack since 2011. I'm an elected member of the OpenStack Technical Committee, and I'm the PTL for OpenStack-Ansible for this upcoming Stein cycle that just started today, I guess. I'm also a core on Puppet OpenStack, so that's something I've worked on for a little while on the deployment-projects side. And commercially, I'm the CEO at VEXXHOST; we're a public, private, and hybrid cloud and OpenStack consulting company.

I wanted to talk a little bit initially about OpenStack-Ansible. OpenStack-Ansible is a deployment tool that uses Ansible to deploy OpenStack. We've seen it helping users deploy anywhere from one or two nodes up to 500 or 1,000 nodes; we've seen some really big deployments. And for the Rocky release, we've done things like introducing deployments using distribution packages. So before, we deployed from source, and now it does both source and distribution packages. We can deploy across three operating systems now, openSUSE, CentOS, and Ubuntu, which really helps users use the operating system they're most comfortable with. And the team has really put a lot of effort into improving the stability of deployments and making sure deployments work really well over the cycle.

Which is an interesting thing that leads into what I'm talking about next: at VEXXHOST, we just officially launched a new region as of today, based in the San Francisco Bay Area, specifically in Santa Clara, so right in the heart of Silicon Valley. And that deployment is running Rocky starting today; it's fully accessible. The new region has the latest hardware, 40-gig internal networking for tenant networks, 10-gig public networking for every single virtual machine, and things like nested virtualization enabled by default, to make interesting things like Kata Containers and other things that depend on using virtualization inside of a virtual machine possible. And the really, really cool thing is that the cloud is already being used by the upstream OpenStack infrastructure team, and as of today, the jobs that do the releases were running on it. So I believe the jobs to release Neutron today ran on a Rocky cloud. So the Rocky release ran on Rocky, which I think is a really cool, exciting thing. So that's kind of the update on our side. And again, congratulations to the OpenStack community, and thank you.

Thank you. Those are super exciting updates. So now we're gonna run through some of the advances that came from newer projects, some that you may have heard of, some maybe not. One of the new projects we had for the Rocky release is Qinling. Qinling provides function-as-a-service specifically for OpenStack clouds. So it's using Kubernetes to orchestrate containers and run those user functions. There are quite a few of these serverless projects out there, like OpenFaaS and OpenWhisk, but they weren't connecting to OpenStack services. So the problem area that made Qinling come forward is that people wanted to be able to say, hey, I wanna use Keystone for authentication, I need that to connect in. So now in Rocky we have those serverless capabilities for OpenStack clouds.
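For flavor, a Qinling function is just a small piece of Python that the runtime invokes through a named entry point. This hypothetical hello-world follows the upstream examples as we recall them (a module-level main taking keyword arguments); treat the exact entry point and signature as assumptions and check the Qinling docs.

```python
# hello.py -- a minimal function for Qinling's Python runtime.
# The entry point name and signature follow the upstream examples as we
# recall them (module.main, keyword arguments supplied by the runtime);
# treat them as illustrative rather than authoritative.

def main(name="World", **kwargs):
    # Whatever we return is handed back as the function invocation's result.
    return "Hello, %s!" % name
```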
The Masakari project, if you saw it in Queens, that's when it came out, and now in Rocky they're expanding what it's monitoring. So Masakari is about automated recovery from failures. And in the Rocky release, we're looking at things that are more on the internal side, like a hung OS, scheduling failures, data corruption. This is really about expanding what Masakari can monitor and recover from.

Cyborg is another new project, and Cyborg is about lifecycle management for accelerators. They're looking at both hardware and software accelerators, so DPDK, FPGAs. And particularly for FPGAs, in the Rocky release they've added a new REST API. One of the big advantages of an FPGA is that it can be reprogrammed, and this REST API is now letting folks do that. So they'll be able to update the functions that are loaded on an FPGA device and give that option to their users.

We're also gonna run through some of the changes that came for ease of operations, and then just expanding the functionality of OpenStack. In OpenStack, one of the things we have every cycle is community-wide goals, and for this one, enabling mutable configuration across all the projects was one of those community-wide goals. Being able to change configuration options on a running service has been available since the Newton release, but it wasn't something that every project was able to support. So this was really about getting all those services to have that mutable config, which lets an operator change a setting without restarting the service, avoiding that downtime.
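On the service side, mutable configuration comes from oslo.config: an option flagged as mutable can be re-read from the config files while the service is running, typically on SIGHUP. A minimal sketch; the option name here is made up for illustration.

```python
import signal

from oslo_config import cfg

CONF = cfg.CONF

# An option flagged mutable=True may be changed in the config files and
# picked up by a running service without a restart. The option itself is
# invented for this example.
CONF.register_opts([
    cfg.IntOpt("worker_timeout", default=60, mutable=True,
               help="Seconds to wait before giving up on a worker."),
])

def _reload(signum, frame):
    # Services typically wire this to SIGHUP: re-read the config files and
    # apply new values for any options flagged as mutable.
    CONF.mutate_config_files()

signal.signal(signal.SIGHUP, _reload)
```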
One other nice thing we added was port forwarding for both TCP and UDP transport layers in Neutron, OpenStack networking. If you're an operator, you might have limited public IP addresses, and this lets you share them. So if you have a small pool of floating IPs, it gives you many more options and easier manageability. It's nice for our operators, and it also provides the opportunity to integrate with things that exist in other ecosystems, like Docker's port mapping.

Another thing we added, in Blazar, was being able to specify an availability zone in reservations. So this is expanding Blazar's awareness of those AZs: when you're making a reservation, you can go ahead and say, this is where I need it to go. Particularly for folks concerned about compliance, this is a great feature.

From Glance, the image service, a big operator request was adding the ability to hide images. You don't want to accidentally have someone deploying something that's outdated, or that for some other reason you don't want to be used; but if you have a failure, you need to be able to go back to that image and recover. So operators can say, I'm gonna hide that from the default list, but if they need to revive it for recovery, they certainly can.

Something nice for our IoT- and edge-interested folks is the addition of UDP in Octavia, which is load balancing. UDP is a transport layer protocol, and particularly for folks concerned about edge, it works a little differently than TCP, which is the more common transport layer; it's preferred because of the way it operates, and I'm sure the Octavia team would be happy to tell you all about it. It's used in streaming video and voice, anything real-time. And now we offer that option for load balancing.

Another one from the Glance team is letting Glance generate a secure hash of an image, so users can verify that this is the image you tell me it is. This is a security issue seen quite a bit more in the containers ecosystem, but it applies to anything where I have an image and I'm gonna deploy it; I wanna make sure it's exactly what you tell me it is. So the secure hash enhancement can let users confidently know what they're deploying.
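In Rocky this shows up on the image record as a hash algorithm plus value that a client can recompute after downloading. A minimal verification sketch against the Glance v2 API, using the os_hash_algo and os_hash_value fields; the endpoint, token, and image UUID are placeholders.

```python
import hashlib

import requests

GLANCE_URL = "http://controller:9292"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>"}
IMAGE = "<image-uuid>"

# Rocky's "multihash" adds os_hash_algo / os_hash_value to the image record.
meta = requests.get(f"{GLANCE_URL}/v2/images/{IMAGE}", headers=HEADERS).json()
algo, expected = meta["os_hash_algo"], meta["os_hash_value"]

# Stream the image data and hash it with the algorithm Glance advertises.
digest = hashlib.new(algo)
with requests.get(f"{GLANCE_URL}/v2/images/{IMAGE}/file",
                  headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=1 << 20):
        digest.update(chunk)

assert digest.hexdigest() == expected, "image data does not match its hash"
```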
And if you're interested in learning a little bit more about Rocky, we have the links: openstack.org slash software slash rocky, as well as the release highlights that come directly from the PTLs. The Rocky release highlights are their selection of what they worked on this release that they're quite proud of, that will really make a difference for users.

Next we're gonna run through some of the pilot project updates. So you might be aware of the expanding scope of the OpenStack Foundation and some of these new projects that are coming in under that umbrella. We're gonna start with Airship, so I'll turn the floor over to Chris Hoge to talk a little bit about Airship.

Hi there, thanks everybody. So Airship is a new project that we have started to bring under OpenStack Foundation governance. It's in what you would call the early project stages, so we're trying it out, seeing how the project works and what sort of uptake we get, and so far it's been really fantastic. Airship is a project that makes managing the lifecycle of open infrastructure simple, repeatable and resilient. So what does that mean? It's a bunch of tools, some of which are developed by the Airship community, some that are other open source tools, many of which come from the OpenStack community itself, to define what your infrastructure looks like and then to deploy everything from the bare metal up to the application layer on top of that.

The Airship team has been working very hard over the last little while. The homepage is at airshipit.org, and they have weekly development meetings if you want to drop in and see those; those are on Tuesday mornings at 7:30 Pacific time. There are also bi-weekly Foundation synchronization meetings, where we talk about higher-level strategy and marketing; these are at 10 to 11 a.m. Pacific time every other Thursday, with the next one being on September 6th. And there are weekly open design calls, which are open software design calls that tick-tock between the afternoon and the morning in Pacific time: 1 to 2 p.m. Pacific time every other Thursday, with the next meeting happening today, August 30th, as well as 7 to 8 p.m. Pacific time every other Thursday, with the next one being on September 26th. Yeah, so I don't know if there are any questions about this, but hopefully you'll take a look at Airship and check it out. There's also a mailing list where open development happens, over at lists.airshipit.org, as well as a Freenode IRC channel at #airshipit.

And that's a great point about questions, Chris. One of the sweet new features of Zoom webinar is the Q&A box. So if you do have a question, we can take all of those at the end; you can type in your question, and you can even submit a question anonymously if you really wanna try to trip Chris up about Airship. But with that, we will go ahead and hear a little bit about Kata Containers.

Great. Hopefully I was going to share my screen, is that right, Anne? Yeah, go for it. And actually, while you're getting your screen up, I'll introduce you. I'm sorry, this is Eric Ernst; he's one of the technical leads on Kata Containers. I'm trying to see here. I am on this webinar; I do not see the option popping up here. Yes, yes. Am I fully promoted or half-promoted? Green button at the bottom. Right, I don't have that; all I have is my microphone. Okay. Let's see. Yeah, you only got a half-promotion this morning, I'm sorry. Allison, are we able to bump up Eric one more? Thanks, everyone, for bearing with us; this is a new tool with too many capabilities. He bumped someone down. Well, why don't I, Eric, I will go ahead and share my screen. Yeah, perfect, I can speak to that. You can get going while Allison sorts out your promotion. Awesome.

So, looking at it, basically the Kata Containers project is looking to provide stronger isolation in the container ecosystem. Today, a typical container is just a regular process running on your Linux host, and it's isolated from all the other processes running on your host via kernel namespaces. Then it's constrained in its CPU utilization and its memory utilization using cgroups, and there's other stuff on top of that as far as mediating and determining what capabilities that process should be allowed, and everything else. But that's, in a sense, all a container is: just a couple of features coming out of the Linux kernel. Because of this, containers are super lightweight, they're very fast, and the design patterns that result from this are very strong and very widely adopted today.

What we're looking at in the Kata Containers project is providing a different form of isolation. So instead of using just namespaces and cgroups, we're actually making use of hardware virtualization. In doing that, each container that's running, or pod if you're looking at something more like Kubernetes, is actually executing within a namespace, but that namespace is within a virtual machine itself. Normally when you think of this, there's a bit of a trade-off as far as the overhead associated with a virtual machine, and that's where a lot of the work we do in Kata Containers is spent: how to mitigate that, how to make it, from an end-user perspective, look like it's just another container on the system. In doing this, I wanted to clarify that we are an alternative runtime engine, but it's not a replacement; it seamlessly integrates into the ecosystem. So if you're using Docker, Kubernetes, or anything container-wise out of OpenStack, it'll just plug in and work in the same way. And what you end up having is that each container will be running with its own kernel, and you can actually leverage that fact and bring your own kernel.

Moving forward: we had a 1.0 release in May of this year, and at this point we are at 1.2. Some of the key features we've introduced since then include VM templating support, vsock support, and, let me just pull up my notes real quick, a couple of other items. Yeah, multi-architecture support: AMD, ARM, and IBM are now fully supported, as well as being able to deploy Kata via DaemonSets; that's also been introduced since the 1.0 release. So VM factory and vsock, really, these aren't user-perceptible; these are performance optimizations. Much of the work we're doing is stuff an end user shouldn't really notice, unless they're the ones actually going through and looking and seeing: what is this costing me to get this extra security? And by cost, I mean footprint and boot time generally, but then also some of the I/O performance as well. So VM factory is a way to essentially use a cached virtual machine. It really helps as far as boot time, and also, because of this caching, it helps reduce the footprint, since we deduplicate the common parts of each virtual machine.
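The "each container gets its own kernel" point is easy to see for yourself. A small sketch, assuming Docker has Kata registered as an additional runtime under the name kata-runtime (the common convention in Kata's install docs): a default runc container reports the host's kernel, while a Kata container reports the guest kernel of its lightweight VM.

```python
import subprocess

def kernel_seen_by(runtime_args):
    """Run a throwaway busybox container and return the kernel it sees."""
    out = subprocess.run(
        ["docker", "run", "--rm", *runtime_args, "busybox", "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# A plain runc container shares the host kernel...
print("default runtime:", kernel_seen_by([]))

# ...while a Kata container boots its own guest kernel inside a VM.
# "kata-runtime" is the name the runtime is commonly registered under in
# Docker's daemon.json; adjust it to match your installation.
print("kata runtime:   ", kernel_seen_by(["--runtime", "kata-runtime"]))
```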
Looking forward, there's a lot of work to do. We're looking at a 1.3 release in mid-September, and some of the areas we're looking at there include things like full network and storage hotplug. Hotplug is a way that allows us to do a lot of things in parallel when it comes down to it, but this is literally hotplugging devices into a virtual machine; by doing this for networking and storage, we help a lot of use cases for potential end users. People who are involved in or excited about this include Huawei, and definitely Hyper.sh for the VM factory. On top of that, OpenTracing support: we're using Jaeger internally now, and as with any project with multiple components, for me anyway, this has really helped a lot, rather than just having to grep through the journal all the time. So I think this is a big usability improvement for sure. And then some extra CNI support, for macvlan and ipvlan.

Looking beyond that, those are all things that are pretty much lined up for release already, but there's a lot of other future work that we're looking at. One of them is just general security enhancements, whether it's looking at the hypervisor itself or what we can do on the host to help add one more layer of security around the hypervisor. If we look at more native integration into the ecosystem: I'm sure as you're aware, the ecosystem doesn't really stop. Fortunately for us, virtual-machine-isolated and sandboxed concepts in the cloud-native ecosystem are becoming more common, so at least we're moving in the same direction as the ecosystem at this point and making sure that we align more natively. And then other future work is looking at more production-readiness types of features: being able to do live upgrade, being able to enforce backwards compatibility, so that if you have long-running containers, those can continue without being reset; that way a cloud provider can meet their SLAs with their end users. And then future performance optimizations. So that's all I have for Kata Containers.

Fantastic. Join the community, join us, it's fun. It is fun. Yeah, there's a quick list: the mailing list is at lists.katacontainers.io, the developer channel is on Freenode IRC, #kata-dev, and there's a bridge to Slack if you are a Slack-preferred user. And we keep our development on GitHub, where you can check it all out.

And the next one we're gonna hear from is StarlingX. So we will toss the floor to Bruce, who's gonna run through StarlingX.

Hello, good morning. Can you hear me? We can. Hi, welcome. So we are a new project in the OpenStack community, and we are very pleased to be here. Go ahead, please. StarlingX is essentially a deployment and productization of OpenStack and a number of other open source components that have been highly optimized for edge computing. It includes a number of features; I won't read this whole list because I only have a few minutes, but essentially we are a highly robust, highly available, high-performance, low-latency system. We're all-singing, all-dancing; it's very, very cool stuff. StarlingX is based on a Wind River product. We have taken that to open source, and we are working very hard to improve and extend and enhance the product. Please go on. So looking at StarlingX today, you can bring up small clouds between two and 100 nodes.
You can actually run all of StarlingX on one node, you can run all of StarlingX on two nodes, or you can go up to that full configuration; typical configurations are a bit smaller. We're targeting this solution toward edge network computing, and typically those are small numbers of nodes; we're not trying to deploy this in a large public cloud. Let's go ahead and go on.

So we're using all of the standard OpenStack services. We're currently on Pike, and we plan to move to Rocky as quickly as we can. StarlingX includes additional services for configuration management, fault management, service and host management, as well as an orchestration infrastructure. All of these are open source. There is a bit of overlap between some of these services and existing OpenStack projects, and there are also some new features and functionality here. We're very much looking forward to going to the Denver PTG and talking with the community about how we align much more closely with upstream OpenStack. Let's go ahead and go on.

For more information, please visit our website; there are more details in the presentation that you can see here. We have our Freenode IRC channel. Our documentation is currently just on the wiki page; we're in the process of learning how to use the OpenStack documentation frameworks and publishing our docs on the web. And we're looking forward to meeting everyone at the Denver PTG, where we will have our room on Wednesday, and our team will be there all week. Please come join our project; we would love to work with you.

Wonderful, thank you so much, Bruce. The next project we're gonna hear from is probably familiar to quite a few of you, so I will toss it over to Clark to talk us through Zuul.

Hi, good morning. So Zuul is a continuous integration and continuous delivery tool that has its origins with the OpenStack project back in 2012. At the time, and OpenStack still does this, OpenStack required all code changes to be gated on their test results. This meant you couldn't merge a code change unless it had passed testing; this was a requirement to get in. It also means that you're limited in the number of code changes you can merge by how quickly you can run the tests for those changes: if you can run them more quickly, you can merge more changes. And at the time, mid-2012-ish, we were running into scaling issues where the existing tools we had wouldn't allow us to merge code quickly enough, because we couldn't keep up with the test demand to do so. Zuul was built to address this problem, and the killer feature at the time was predictive testing of future states. That meant we could generate multiple future states and test them all in parallel, and as things failed we could kick them out, and as things passed we could merge those future states into the code repositories. This allowed us to massively scale the number of changes we could merge in a single day, and it still does.

Since then, the Zuul community, both within OpenStack and among a small number of external users, has learned quite a bit about how to do this effectively. A major output of that was the third major release of Zuul, which happened earlier this year and tries to incorporate a lot of these things that we've learned. A big piece of that is basically getting out of the way of the user and allowing this continuous-everything approach to be a good idea. This means that we can test job changes before they merge.
So just like we're testing future states of the projects, we can do future states of the Zuul configuration itself. This means if you wanna add something new to the tests or change some configuration, whatever, that can all be done knowing it will work before you land it into the globally used configuration set. Secrets management: one of the things you often run into is the need to talk to some service or external resource that requires authentication, or maybe you've got some secret object like a GPG key to sign artifacts, that sort of thing. We've built in secrets management. The jobs are written in Ansible, which is familiar to many individuals who deploy software to computers; this was an explicit decision to use something people are familiar with, that already has a user base, rather than invent our own thing that people have to look up and learn on their own. And all of this is configuration that's managed in the projects using Zuul. So we've tried to get away from having this centralized configuration, which is where we were at from 2012 through about 2017, and now we've given a lot more freedom to the individual projects to determine their own CI and CD destiny. Another big change was the addition of GitHub support. OpenStack itself uses Gerrit for code review, but GitHub is a very popular and well-used platform for code review and collaboration, so we added GitHub support to enable more users as well.

As far as recent work on Zuul itself, we've added features, and Jim primarily did this work, to do inline commenting on code changes. So say you're running a linter that says line 50 has this issue; we can then post a comment directly back into Gerrit, on line 50, saying this is the issue from the linter. Currently, I believe this is only used for Zuul to comment in Gerrit on Zuul configuration changes, but there's work to expand this to, say, linting, or even potentially unit testing or other traceback-related errors. There have been a bunch of improvements to the web dashboards; this was a big thing that Zuul v2 kind of lacked. We shoved the dashboarding into other systems, but Zuul v3, which released earlier this year, assumes a lot more of that responsibility itself, and we've continued to improve the dashboards to help users see why things are failing, why they're working, and what the status of changes is as they go through CI and eventually delivery and deployment. And we've also spent a lot of time learning how to run the system: the OpenStack project has been running Zuul v3 since about the beginning of the year, there are other users using it as well, and this is a brand new thing for us and for them, and we've spent a lot of time improving it based on what we've learned as we've gone. And yeah, I think that's all I have for Zuul.

Awesome, thank you so much, Clark. And the place where you can hear about all these things, updates, and projects is going to be in Berlin, November 13th through 15th. So I'm gonna hand it over to Erin Disney to talk a little bit about some of the things you can expect at the Berlin Summit.

Yeah, hi everyone. Summit is coming up quickly, as it always does; we know our team's getting super excited. The schedule's been live for a few weeks. Wanted to first off thank our sponsors that have already signed up. We have five headline sponsors: Canonical, Deutsche Telekom, Huawei, Intel and Red Hat. So thanks to all of them specifically, as well as everyone else that has signed up.
We still do have some spots available, so if you're interested, feel free to either reach out to me directly or email summit@openstack.org. Registration prices did go up this week; early bird pricing has ended, but we still do have until October 22nd before the next price increase, so be sure to register before then. Wanted to call out that we have a few pre-summit activities. So as you start thinking about booking your travel or your team's travel, I wanted to call out some of those pre-summit things today to make sure everyone's aware of everything going on. Per usual, we've got our Upstream Institute training happening in Berlin, and I believe that's live on the schedule, as well as the OpenStack board meeting, which always takes place that Monday before. The summit's on a Tuesday-through-Thursday cycle this time, so the pre-summit stuff is mainly happening on that Sunday and Monday.

Something super fun this cycle that we're really excited about is the Hacking the Edge hackathon that the Open Telekom Cloud group is hosting the Saturday and Sunday prior, so that would be November 10th through 11th. That is live on the schedule, so if you're interested in that, there are lots of details out there as well as a separate Eventbrite to get registered for it. Something else we're bringing back: some of you may remember the open source day content that we hosted in, I believe, Boston and Sydney. We've had a few groups reach out that are interested in doing something similar in Berlin. We've got limited space, but we've got one official group that's signed up, and the ticket is available online: that would be Ceph Day. That will be a full day of content the Monday before the summit starts. That ticket's live on our Eventbrite right now; I believe it's an add-on ticket for $25. And there are a couple of other groups that we're working to get official, so as soon as that happens, we'll be announcing those and adding them to the schedule as well. In the meantime, if you know of any other groups that may wanna be included here, like I said, we have limited space, but we'd love to be able to host whoever we can. So feel free to reach out to me directly at erin@openstack.org, or again, that summit email address will work as well. And we've got some other fun surprises, but I'm still kind of getting a few things official. Yeah, definitely a lot gonna be going on in Berlin.

Awesome, and I know Allison has some updates about the sessions that we're gonna expect in Berlin. Yeah, like Erin mentioned, the schedule for the summit went live two weeks ago. There are nine tracks listed there, and we brought back the hands-on workshops, so there are a lot of hands-on learning experiences that you can sign up for. Those do require an RSVP, as do the Upstream Institute and intensive trainings that we have on site. And some of you may have seen as well this week that emails went out around the Forum. These are collaborative sessions between developers and operators for all the different projects, where it's not presentation-based, it's more discussion-oriented. The brainstorming process is open now, and I think the Etherpads were sent out to the mailing list. The brainstorming will go on for a few weeks, and then the session proposal process will begin. So keep an eye on the mailing list as that process is underway, but you can check out the schedule at openstack.org slash summit. And like Erin said, make sure you register before the next price increase.
Fantastic, thank you, Allison. I do see a question about keynotes: keynotes on Tuesday and Wednesday morning, is that the plan? Yeah, that is correct. So Gary, the keynotes will be Tuesday morning and Wednesday morning. I know the last time we introduced specific keynotes around edge and containers, but these will be folded back into the all-summit keynotes. So both Tuesday and Wednesday mornings will be for all summit attendees, and won't be the smaller ones like we did in Vancouver. Thank you.

So at this point, we'll open the floor up to questions; you can use the Q&A box and we can get those answered. This is also the first time that we have done this community meeting this way, with all this content, and we're hoping to do more of them, and we'd really love your input and feedback. If you say, hey, there's something I really wanna present, we'd love to hear about it. You can reach out to any of the OSF staff for that, or if you have questions, you can drop them in the question box. We'll give people a few seconds if they're typing.

Okay, I don't see any burning questions coming through. So thank you so much for joining us this morning. Oh, how to contribute to a project, that's a great question. These slides will be made available, and each slide for each group has the website with the information on where you can find them on IRC, and you can get involved that way. For the OpenStack project, it's openstack.org slash community; that will take you to the contributor portal, where you can get started. And I know there's a new group that came up in the last eight months or so specifically looking to help new contributors. "If you want to give an update on your work, how do you speak in this meeting?" Fantastic, yeah, you can reach out to myself or Allison. I am Anne, A-N-N-E at openstack.org, or Allison at openstack.org, and we can get you all set up. I think there might be a delay in Zoom webinar, because now everything is flashing with questions; we're trying to get through these as quickly as we can. And Mohammed has put the how-to-contribute wiki in the chat box for that question about contributing to new projects. Thank you so much. And Eric has also put the contributing guide for Kata Containers in the chat box. Fantastic.

Well, thank you again, everybody, for joining. We'll happily, like I said, take your feedback about this meeting and this format, and if you want to present, please reach out. Other than that, a big congratulations again to all the OpenStack contributors for the 18th release of OpenStack, and we will see you next time. Thank you.