Yeah, I think we're ready. Hello everyone. My name is Mark Burnett, and I'm giving the Airship project update. As many of you know, this project was announced in Vancouver about six months ago. Wow, time flies. So let's jump right in.

Airship is a declarative infrastructure deployment and lifecycle management tool. The declarative approach is what gives operators a lot of confidence in large or multi-site deployments: when they roll things out, they know they've already tested a smaller version of the site with the exact same configuration. We started this project in 2017, on the back of OpenStack Helm, which has been really useful for us for updating OpenStack. But we wanted a declarative platform for managing the lifecycle of the whole cluster, not just the OpenStack portion of it. It was announced last May as a pilot project, and we've been working in the OpenStack community since then to improve things, iterate, and grow out the missing components.

So where is Airship today? It has a pretty good resiliency profile. There are areas we know we want to improve, but it's designed for more than single-node-down resiliency; in particular, we want to build it in a direction where, if the site loses external connectivity, you can still do basic maintenance tasks. Everything that manages the lifecycle is self-hosted on the site, so you can do fixes even with limited connectivity, which is sometimes an issue. We've made a number of significant security improvements over the last few months. We think it has a reasonably scalable operational profile; I'm sure there's room for improvement, and we haven't deployed very large sites with it yet. But we definitely see repeatable multi-site deploys: we do that day in, day out, redeploying test labs, updating them, and so forth. Predictable upgrades.

We have one deployment model for all of our software; this is one of the fundamental choices we've made. Almost everything in the platform is driven from containers and Helm charts. That's the trade-off we've chosen: you can imagine layered approaches and so forth, but this is the direction the project is sticking with as a fundamental choice at this time. As much as we can, we're sticking to that model. A single deployment tool for all software means a lightweight host image and then containers for everything, including libvirt, OpenStack Helm, and so on. So we're pretty much all in on that approach, if that makes sense.

And we do now have a gated reference configuration in Airship Treasure Map. That project primarily holds the reference configuration, but it also has some general project documentation and the CI/CD that's used to test that configuration. It's really meant to be a second-step starting point: okay, I get the basic idea, now I want to see a realistic example. We have a smaller example, Airship in a Bottle, which is a very simple single-VM kind of deployment. Treasure Map is meant to be the next step: if I really want to deploy this in a test lab, what do I look at? This is our best answer to that right now.
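To make that "one deployment model" idea concrete, here is a rough sketch of the kind of declarative document Airship consumes. The field names follow Armada's chart schema as I recall it and may differ by version, so treat this as illustrative rather than an official reference; the gated examples live in Treasure Map.

```python
# A minimal, illustrative sketch of a declarative Airship-style document.
# Field names follow the armada/Chart/v1 schema from memory; see Treasure
# Map for the real, gated reference configuration.
import yaml

CHART_DOC = """
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: keystone
data:
  chart_name: keystone
  release: keystone
  namespace: openstack
  source:
    type: git
    location: https://opendev.org/openstack/openstack-helm
    subpath: keystone
    reference: master
  dependencies:
    - helm-toolkit
"""

doc = yaml.safe_load(CHART_DOC)
# Every piece of software, from OpenStack services to the control plane
# itself, is described by documents like this and delivered the same way,
# which is the single-deployment-model trade-off described above.
print(doc["metadata"]["name"], "->", doc["data"]["source"]["location"])
```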
So we have that, and it's gated. There's a nightly job that tries to automatically update the versions of all the components in it from master: latest images, latest charts, including OpenStack Helm. That's been going pretty well. Additionally, we have a tagged release for November. November 1st is our first monthly tag, and we hope to make it a monthly tagging process. That tagged version is what I'd call our 1.0 release candidate. So if you want something a little more unchanging, a little more stable, that tag is sort of a release candidate, but you should also be able to reasonably use master, because it has been through a full system test at that point. Any questions on the state of affairs?

You may have seen this in one of the keynotes; we have some nice quotes here from people either using or experimenting with Airship. I know, for example, that Ericsson is starting to experiment with some vRAN applications on top of an Airship deployment, SKT has been doing some work with Armada in particular, and there are other companies and organizations doing work here. So that's exciting.

One of the things about this, too, is that "release candidate" often means not production ready, and that's not the distinction we draw. It's more that it's production ready, but today it takes some domain expertise to get started with it, in terms of learning how to navigate and manipulate the manifests that drive the definition of the site. It just takes a little more domain knowledge and digging in than we would prefer, and that's a lot of what we're targeting for 1.0. I would say those sorts of features are absolutely critical, and they're all to the left of the site: the same YAML configuration will still end up being delivered to the site and drive the various components to do their work in the same declarative fashion. Really, the issue is how you write that big, massive YAML. If anybody came to the Deckhand talk yesterday morning, we have something like 40,000 lines of YAML. That's not all site specific, a lot of it gets reused, but that's a large ask for configuration.

So, where we're going: some of the site-specific YAML is very easy to generate. A lot of it is just, okay, I know what the IP addresses for my site are going to be, here you go. Then there are some non-trivial things, like discovery. There's no spec for that yet, so if you have thoughts, they're super welcome; submit a spec, or just talk about it on the mailing list or in IRC. The thinking is that there will be a tool that goes and does discovery and generates documents that you would then commit. The idea is that your Git repo tracks your intended state, and Deckhand tracks, on the site side, what actual sets of config were delivered there. So you have traceability on both ends: what's my history of intent, and what's my history of actually delivered configuration, including annotations and so forth. That's the idea. But again, there's no spec for discovery yet and we'd really like to get one in, so if people have thoughts and opinions, it's super open.
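On the "easy to generate" site-specific YAML, here is a hypothetical sketch of what that kind of generation could look like. The schema and field names below are invented for illustration and are not the actual Airship or Deckhand schemas; the point is only the flow of turning a few known per-site values into committable documents.

```python
# Hypothetical sketch of generating per-site documents from a few known
# values. The schema name and fields are illustrative, not the real
# Airship/Deckhand schemas.
import yaml

def render_site_docs(site_name, oam_cidr, node_ips):
    """Turn a handful of per-site inputs into committable YAML documents."""
    docs = [{
        "schema": "example/NetworkSettings/v1",   # illustrative schema name
        "metadata": {"schema": "metadata/Document/v1",
                     "name": f"{site_name}-oam"},
        "data": {"cidr": oam_cidr, "nodes": node_ips},
    }]
    # The generated documents would be committed to Git so the repo tracks
    # intended state, while Deckhand records what was actually delivered.
    return yaml.safe_dump_all(docs, default_flow_style=False)

print(render_site_docs("site1", "10.0.0.0/24",
                       {"cp01": "10.0.0.11", "cp02": "10.0.0.12"}))
```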
We're going to work on Ironic integration. We've been using MaaS. MaaS was chosen, I think it's fair to say, because it did all the networking and storage configuration we needed out of the box, so we grabbed it and started moving with it. But it has proved to be quite awkward to operate in a containerized way. It's also a bit of a resource hog in our experience, but the main hurdle we've hit is that it's just not really built to work well in containers. So we would like to move to Ironic, because we feel pretty strongly that it will operate better in a container environment. Remember, this whole platform is self-hosted: inside the control plane we're running MaaS, and in the future Ironic. That's how we provision new nodes when you add them to the cluster, or reprovision nodes when you need to redeploy them, and so forth. So it all has to work right there. If you take down one of the control plane nodes, you want that MaaS or provisioning workload to move properly, and that doesn't always happen. That's the main motivation. We hope there are some other benefits, but that's the primary driver from our point of view.

I already talked about auto-discovery; it ties right into YAML generation. We do have a prototype tool, Tugboat, which was basically written because we, like many organizations I'm sure, have an Excel spreadsheet that defines what a site looks like. The idea was to take that and turn it into an Airship site definition. We want to build more generic tools in that vein. The next step is to formalize that proof of concept as a pluggable interface, so that different operators can pull their site definition from different sources. If you have an operator-specific system of record that defines what your sites look like and holds all the information for them, you could write a plugin for this to-be-created tool, which we're calling Spyglass, to pull your operator-specific information in and turn it into the site definition. At that point, the proof-of-concept Excel-based translator would be adapted into a plugin as well. There's actually a draft spec up for Spyglass. There's an airship-specs repo where we consolidate all of our specs, so that's the place to look for it. I don't know the reference offhand, but it's 605227 for the curious. It's gone through a couple of iterations, but more feedback is welcome.

Another big item we really want to get in is multi-OS support. We've had a little effort on this, on the image front in particular, but there's more work to be done. There are a few assumptions here and there about how things are done that need to be generalized. There are definitely some assumptions about Ubuntu in Promenade right now. Maybe it's unfair to say there are assumptions in the way we're using MaaS, but we certainly need to be more general there at least, and the move to Ironic will help facilitate that as well. SUSE support is on the roadmap too; they've offered to help work on that. So there's work in a few components here. Unfortunately it's not isolated to just installing the operating system; there are a few bits of work to do.
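To give a feel for the pluggable Spyglass idea mentioned above, here is a hypothetical sketch of what such a plugin interface could look like. The class and method names are invented for illustration and are not taken from the draft spec; the real proposal lives in airship-specs.

```python
# Hypothetical sketch of a Spyglass-style data-source plugin interface.
# Names are invented for illustration; see the airship-specs draft for the
# actual proposal.
from abc import ABC, abstractmethod


class SiteDataSource(ABC):
    """Pulls site information from an operator-specific system of record."""

    @abstractmethod
    def load(self, site_name):
        """Fetch raw site data (hosts, IPs, racks) for the named site."""

    @abstractmethod
    def to_site_definition(self):
        """Normalize the raw data into an Airship-style site definition."""


class ExcelDataSource(SiteDataSource):
    """The proof-of-concept spreadsheet translator, recast as a plugin."""

    def __init__(self, workbook_path):
        self.workbook_path = workbook_path
        self.rows = []

    def load(self, site_name):
        # A real plugin would parse the operator's spreadsheet, e.g. with
        # openpyxl; stubbed out here.
        self.rows = [{"host": "cp01", "ip": "10.0.0.11", "site": site_name}]

    def to_site_definition(self):
        return {"hosts": self.rows}
```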
Any questions about where we're going? Today, the only thing that's actually supported is Ubuntu 16.04, yeah. Yeah, so far. Absolutely, absolutely. But that's going to be relatively easier than also supporting SUSE and CentOS and RHEL, yeah. Sure. Sorry, the original question was to be specific about what we're supporting: that's Ubuntu 16.04, and then I didn't quite hear all of the rest. There are some relatively quick-and-dirty package installations that are literally hard-coded to apt-get install and the like, and those need to be generalized. But yes, absolutely, it's on the roadmap to support more than just Debian-based distributions. That's right. Thanks. Any other questions? Okay.

So far, we have a few integrations. I've mentioned OpenStack Helm a couple of times; we make heavy use of it and put a lot of support into it, because we're running OpenStack Helm on top of Airship, but we also leverage the Barbican chart, the Keystone chart, the Helm toolkit in some ways, and so on. We're affiliated with the Akraino project; I think there's only one currently merged blueprint for Akraino. For those of you who don't know, Akraino is a project under the Linux Foundation that is looking to provide a set of codified references for various types of edge deployments, and an Airship-based, OpenStack Helm-based deployment is the first blueprint in their kit. I expect they'll continue to refine these, and I know they're working closely with StarlingX as well as a major part of their toolkit. We use Barbican specifically via Deckhand; that's where we do our secret storage, obviously. We're making use of Oslo, specifically oslo.policy, but also a couple of other bits here and there. And of course we're using Keystone for auth everywhere today. We're actually working now to integrate Keystone auth via webhook into the Kubernetes API, so our operators can use it to control that access. Right now we either don't give them access, or higher-tier support basically gets the admin credential for Kubernetes, which isn't ideal. So that's the direction we're moving with Keystone (there's a rough sketch of that webhook exchange a little further down). Otherwise, that's a fairly simple integration for us, because it's just at the application level and at the Deckhand level. So those are our main integrations, is that fair? Yeah.

So, you're welcome to come get involved. We have a project onboarding session later today, just down the hall I guess. We're in #airshipit on Freenode, and we have weekly meetings, currently at two o'clock in #airshipit. There's also a call that one of our colleagues, Rodolfo, hosts; its time flips to accommodate collaborators in Asia and in Europe, since we're based in the US. Do we have a link to the wiki up there? There's a link to the wiki; that's probably the easiest place to grab the times for these different meetings, and we keep it up to date. Yes, that is correct. And then there's a link here to Airship Treasure Map, where you can browse a complicated, realistic configuration set that includes a full OpenStack deployment with Ceph. I think it's a six- or eight-node deployment, at least six: four control plane nodes and then two or four computes, I forget which, on pretty realistic modern hardware that we test nightly. So yeah, I'll leave this up for Q&A.
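Going back to the Keystone webhook integration mentioned above, here is a rough sketch of the Kubernetes TokenReview webhook exchange that sits behind that kind of setup. The Keystone validation call is stubbed out, and this is only my illustration of the protocol under stated assumptions, not the actual integration code (which in practice also needs TLS, caching, and role mapping).

```python
# Rough sketch of a Kubernetes TokenReview webhook backed by Keystone.
# Illustrative only: the Keystone lookup is stubbed, and a production
# integration handles TLS, caching, and role/group mapping.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def validate_with_keystone(token):
    """Placeholder: validate the token against Keystone (GET /v3/auth/tokens
    with X-Subject-Token) and return user info, or None if invalid."""
    return {"username": "demo", "uid": "demo-id", "groups": ["operators"]}


class TokenReviewHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        review = json.loads(body)
        token = review.get("spec", {}).get("token", "")
        user = validate_with_keystone(token)

        status = {"authenticated": False}
        if user:
            status = {"authenticated": True, "user": user}

        response = {
            "apiVersion": review.get("apiVersion",
                                     "authentication.k8s.io/v1beta1"),
            "kind": "TokenReview",
            "status": status,
        }
        payload = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # kube-apiserver would point at this endpoint via its
    # --authentication-token-webhook-config-file kubeconfig.
    HTTPServer(("0.0.0.0", 8443), TokenReviewHandler).serve_forever()
```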
So if anybody has any more questions; I know we had some throughout. Any questions?

So the current situation, or really the past situation, is this: you deploy a genesis node with minimal configuration and then run our bootstrapping script on it, called genesis.sh. That creates a fully functional, containerized, single-node control plane for Airship that you can then use to provision the rest of the system. The intent is that you then use that same system to redeploy the genesis node itself. MaaS is one of the major limitations there, or rather, the way we've been using MaaS in a containerized fashion has not facilitated reprovisioning that node the way we had hoped. I know that one of our cores, Scott Hussey, has worked a lot on making that work and has been working with Canonical to move to a different paradigm that should support it better. I think he's very close on that work, but that is absolutely a blocker to truly reprovisioning that node. Aside from that blocking issue, which has either been resolved in the last two weeks or will hopefully be resolved in the next month, somewhere in that timeline, there's probably a little bit of additional work that's not too scary. And there's already a documented plan to make sure we expose this host redeployment API all the way up through Shipyard; that work is very clear cut and not risky, it just needs to be finished. It hasn't been prioritized because it was blocked on this MaaS dependency. I'd say that's essential for the actual first release, because it's fundamental to the platform that you don't have this variance. If instead what you have is "the site is declared this way, except for this", that's not ideal, right? And eventually, in the medium to long term, we'd like to support a concept of a region controller that deploys that initial node via WAN booting or something and provisions it from the declaration, so that you're declaring that node in a meaningful way from the beginning, not after the fact. We don't have that piece yet, but we have a clear-cut path to it, I think. And basically, if we can't get MaaS to work for that, which we think we can, it just puts more pressure on the Ironic integration, because it's a critical piece of functionality. Great question, thank you.

The Ironic work is targeted for 1.0, yeah. There's a spec out for that already that would be great to get comments on. We also had a great discussion with the Ironic team in a forum session on Tuesday, where we talked through a lot of the details of what Ironic integration would entail. There might be either some changes needed to add functionality, or some assumptions to break, for example around how to reprovision nodes in Ironic, that we're going to work with them on as well. So if anyone would like to get involved with that, it is an area ripe for collaboration. In case anyone didn't hear that, or for the sake of the recording, the two things we're going to work with the Ironic team on are, one, the ability to target specific nodes for provisioning or reprovisioning, rather than just treating the infrastructure as a pool; and second, oh, brain fart, what was it? Administrative functionality.
The administrative functionality: changing BIOS settings and so on. And that's probably something we can integrate without it being fully there, because we can work around it by using Redfish or something else in the short and medium term. So that's not going to be a blocker to integrating; we'll pursue the integration before that's fully baked. Any other questions? I think we're out of time. Thank you all. Thank you. Thank you.