Hi, everyone. I'm Drew Thorstensen. Today we're going to be talking about contributing platform drivers to upstream OpenStack. Before we get started: I'm Drew Thorstensen, I work for IBM on Power Systems. I'm Adam Resnicek, I also work for IBM on Power Systems. And I'm Kyle Henderson.

So, what we were looking to do when we started our upstream driver work: since we work on Power Systems, we actually have three hypervisor or workload types that you can run on Power Systems. The first is containers. This is supported by the Docker driver, and it runs a variety of flavors of Linux, such as Ubuntu, RHEL, and SUSE. We also have KVM. This is just like KVM on x86 chips, and it's supported by the upstream libvirt driver. That runs the same Linux flavors — Ubuntu, RHEL, SUSE — directly on the platform. And we also have another hypervisor called PowerVM. This is supported by our new PowerVM driver, which we brought forward in the Liberty time frame. This is actually a bit of a different type of hypervisor: it runs in system firmware, so that's different than KVM. What it allows is that you can run Unix workloads like AIX, an operating system called IBM i, or, again, any of the Linux distros that you want. So what we were looking to do was bring the PowerVM driver upstream: create an OpenStack driver that would allow us to support PowerVM virtualization in an OpenStack environment.

The first question when you start with OpenStack is understanding why you're going to contribute upstream. I like to think of OpenStack as being much like an operating system — it is the cloud's operating system. And in an operating system you have device drivers. When you want to introduce a new adapter or a new card, you create a device driver to allow that operating system to interact with that device. OpenStack provides compute, networking, and storage, and it has APIs that someone can develop against to bring a device into OpenStack. What's nice about this is you have all of these different solutions. For networking, you've got a wide variety of options you can move forward with — lots of vendors there, and also open source solutions. For compute, you have KVM, Xen, Hyper-V, VMware, LXC for containers, and PowerVM, which is what we brought forward. And for storage you have, again, a wide variety of vendors and solutions to pick from. By implementing these APIs, you're able to interact with a wide variety of different storage and networking types — for us, because we're compute. If you wanted to bring forward a network device driver, you could bring your networking platform and be able to interoperate with all of these different workload types. So it's very attractive to bring your platform to OpenStack, because it opens up a wide variety of workloads that can run on your platform.

So now that you've decided, hey, maybe this is a good idea, we should bring our platform upstream and have drivers and support it there — or at least you're saying that, so Drew will go on to the next slide — the big question becomes: now what? How do you actually get from the point where you have your initial idea and want to bring it upstream, to actually getting a project created, functioning, and available out in upstream OpenStack that somebody could actually download and use in their data center? Which is the real priority.
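To make the device-driver analogy concrete for compute: a Nova driver is, at its core, a class that implements Nova's virt driver API. Here is a minimal, illustrative sketch — the class name and method bodies are hypothetical, and the exact spawn()/destroy() signatures vary by OpenStack release, so this is a shape, not the definitive interface:

```python
# Minimal sketch of a Nova compute driver, assuming the nova tree is
# available. Real drivers implement many more operations from the
# hypervisor support matrix.
from nova.virt import driver


class MyPlatformDriver(driver.ComputeDriver):
    """Hypothetical compute driver for a new hypervisor platform."""

    def init_host(self, host):
        # Establish the connection to the platform's management API.
        pass

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, *args, **kwargs):
        # Create and boot the VM on the hypervisor. (Signature
        # abbreviated; match the ComputeDriver base class for your
        # target release.)
        raise NotImplementedError()

    def destroy(self, context, instance, network_info, *args, **kwargs):
        # Tear down the VM and its resources.
        raise NotImplementedError()
```

A deployer would then point the compute service at the driver via the compute_driver option in nova.conf — which is exactly why the API matters: implement it once, and all the workloads and tooling above Nova can reach your platform.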
And so a lot of people go out to OpenStack, they go to the website, they might find some manuals and documentation, and it's a lot. There's a lot there, and it's kind of confusing. There can be a lot of different bits and pieces if you've never worked with OpenStack before, and finding where you need to go to get started contributing code upstream isn't always a super easy process. So our goal with this presentation is really to talk a little bit about what it takes to get from that first point in the maze, where you have that initial idea, all the way through project creation, setup, working upstream, getting your code contributed — all these different bits and pieces — so you can have that upstream driver. Sorry, got to find out which way is the way. Okay.

So in the presentation here, you'll see we've broken this up into five different stages of the project. You have your planning and proposing phase: this is where, after you've come up with the initial project idea, you go do research and decide things like what you want your project to be called, where it's going to fit upstream, all those kinds of things. Then you go upstream and work with the upstream infrastructure teams to create the project and all of that. Then you finally get boots on the ground. This is the point where most people would think the process starts — you can finally start writing the code, doing development out in the open — but it turns out there's a little bit before that. Then you go through and get your upstream testing in place. This is a really important piece that, as driver contributors, I think we didn't fully appreciate at first: the amount of effort it was going to take to get all of those upstream tools working with all of the processes to test our drivers. And finally, the long-term support and integration. Once you have it out there, how do you make sure you're following the OpenStack release cycle, making sure things are available — all the different bits and pieces you have to do to make sure somebody can actually use your driver, and it's not just a cool trinket that shows up on your GitHub profile.

All right, so now that you've decided that making an OpenStack driver is in your best interest, the first step, as Adam said, is your planning and proposing phase. As you start planning your OpenStack driver, you have to remember that you're entering a very, very large community, and it is very fast moving. So before you jump in and start writing code, a couple of things to keep in mind. First, research the existing code. Take time to bring in all of this information, dissect it, and make sure you understand it. This requires a lot of listening: you read the forums, you read the wiki, you go on IRC, and you need to understand why things are the way they are. It's very nice that we have Git, which allows us to go back and see the commit history and the reasoning behind why things are the way they are. But it's also just spending a lot of time to understand. When we were looking at the code the first time, a lot of it was "why is it done this way?" — and being able to dissect that history really allowed us to understand how things have evolved. OpenStack is very fast moving, with thousands of commits on these projects, so it's very important that you take the time to understand why.
You also need to figure out what exactly you're going to support. If you have a Nova driver like we did, there is a support matrix of what a Nova driver needs to implement. Each project is going to be a little bit different about this: there are some required aspects, there are some optional ones, and then there's new development — maybe there's new function you want to bring forward. So you need to understand what you're going to bring in. I'd recommend that you don't try to do it all at once; you can add function as you go.

You also need to spend some time figuring out how you're going to do your continuous integration. This is one of those requirements from the community, and we'll dive into it a bit more, but you need to be able to test your code in an automated fashion, and those test results need to be published with every single run. So you need to understand at this point what your CI requirements are going to be, and make sure you start to gather the hardware, or have a place where those continuous integration tests can run.

Communication is key. Before you dive in and start proposing things, make sure you talk to the appropriate individuals. Every project has a PTL, and there are also typically cores associated with each project. You can listen in on IRC, and you can talk to them — on IRC, at mid-cycles, or here at the summit. There are a lot of sessions where the PTLs speak about their respective areas.

And perhaps the most important thing — one we've seen a lot of people get hiccuped on — is that you need to remain committed to this. This is not something you write once and it works in OpenStack forever without any more code. This is something that's going to live for years, and you, as the driver owner, need to invest in it for years to come. It's important to know at this point in the phase that you're committed for years, because if you're only going to do it for six months, you're actually going to leave behind a big debt. So make sure you're invested, and that when you bring something forward, you're going to support it for the life that you expect it to have.

It's also important to remember that you're a small fish in this big community. Just because it's the most important thing in the world to you, you have to see it from other people's eyes. This is a community supporting many different hypervisors, many different storage drivers, network drivers, and then all of those layers on top. You're a small fish in a big community, so keep that in mind as you're talking to others: be able to express why you want to bring this forward and how you're going to do it. Otherwise, they're not going to have the same priority as you; it's not going to be as important to them. That's, again, why you have to be committed for a long time.

All right, so you're at the point now where you have officially decided all of those things Drew outlined on the last slide, and you're good to go. You know exactly what you want to bring upstream. So now you're at the point where you can actually go out and create your project — and really, how hard could it be, right? You just want a repo to put your code in at the end of the day. You choose your project name, there might be a few other steps in there, and bam, your project's created and you're ready to go and start getting this out to your customers.
Turns out that step two might need a little bit more definition. So actually, the first thing you want to do is turn upstream for your resources. This is something I'll reiterate is important here, and also throughout the entire process: be involved upstream — talk to people, use the documents, and if you run into problems with those documents, contribute back, so that future people creating projects can benefit and not have to go through all of the pain you had to deal with. There is actually a complete project-creator's guide that the infra team has put together upstream, out on docs.openstack.org, that I've linked here, which goes through step by step, for the most part, what you need to do to get a project created. That's a pretty useful guide.

As you're working your way through that guide, there are a couple of key variables, key components, that you need to think about — especially, like Drew talked about, knowing exactly what you're contributing upstream. Is this driver going to be part of an existing OpenStack project? Is this something that would be in Neutron, for example, as a sub-project, because it's a networking driver? Is this a Cinder storage volume driver? Or is this just a related project — maybe it's just a set of scripting to deploy your tooling, using something like Ansible, for example? You really need to think about where exactly this fits in the OpenStack ecosystem. There's been a lot of work to make the ecosystem very inclusive, but it's a good idea to know up front exactly how you expect this to fit in.

One thing to add to that is that each project is potentially different. Neutron has sub-projects, where you can take your driver and have a separate repository for your Neutron driver, whereas I believe Cinder right now has all of the drivers integrated into the main Cinder project. So you have to learn a bit of the culture of the project you're going into. You might not necessarily need to jump straight into project creation — you might be able to contribute directly upstream into, say, Cinder. That's why it's also important to do that listening up front. Nova, for instance, with us: we have a separate project, and as our usage grows, we work towards integrating that upstream. One of the things we've learned through working with the Nova cores is how the culture fits in that project. And it's very important to know they're not all the same. They're all differently cultured projects, and you have to understand that.

And they don't always all necessarily get along or do it the same way, either. We have our Nova driver; we also have our networking-powervm driver, which is a sub-project underneath Neutron, where we have all of the code required for our networking driver; and then we have another driver for Ceilometer, and that's a different type of project altogether. So if you have a platform that might span multiple projects, you have to be able to work with multiple sets of requirements, PTLs, and cores to figure out exactly how all of your bits and pieces are going to fit into all of those projects.
But once you've figured that out, you need to start thinking about what sort of upstream resources your project really needs. You obviously need a place to put your code, but beyond that: what are you doing for documentation? How are you doing translations? Do you need that support from the upstream community? All of these different pieces are things you might not think about if you're just looking for a place to put your project, but they're really important to your end users, because they're the ones who are going to have to go and install this and use it in their environment at the end of the day. Uh-oh. Yeah.

And finally, that last piece up there: how is this project actually going to get tested? Like Drew mentioned, there are requirements on CI, and there are multiple ways to do it. You can be integrated with the upstream gate; you can have your own independent third-party CI that runs against all your jobs; and you can have all kinds of pieces in between, some out of both buckets. Figuring out the right balance for each project, if you have more than one, is pretty important. Some, like Nova for example, are very explicit: you have to have third-party CI for your driver to be considered supported, and it has to run on every patch. Other projects, like Cinder or Neutron, aren't quite as stringent — they would like you to have it, but it might not be required at the end of the day.

So, a couple of tips here. Make sure you set yourself up to participate in the community. Even if you're your own separate project, you're not on your own. You have to be integrating with the teams upstream, attending their IRC meetings, making sure some change they're going to make in Neutron to the ML2 code doesn't break your ML2 driver downstream — all these bits and pieces. And then, frankly, be patient. Like Drew was saying, you're a small fish in a big sea here, and when you want to do a release, for example, working with the release team so they have time to do that among all the other projects they're also doing releases for is a pretty important part.

So, leading up to all of that is when you actually start your development. Things to look at as you start: look at a lot of the existing projects out there. Go ahead and do git clones, and start looking at some of the tools they use. There are a lot of great tools out there, and I've named a few of them here: tox, flake8, hacking, bashate — a lot of different tools. By looking at the projects you can see not only the tools they use, but how they're using them. Usually the documentation on a lot of those tools is pretty good, but the examples the projects provide are even better. It turned out, as we were developing our projects, that finding out how those tools were used specifically within, say, Nova or Neutron was even more beneficial than the smaller examples in the documentation. That's especially important because the documentation might not always be completely up to date, but by looking at these projects you can see what's in use at that time, in that release. The other thing you'll notice is that there are a lot of common libraries in Oslo that are used across the different projects.
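As a taste of what adopting those common libraries looks like, here is a minimal, illustrative sketch using oslo.config and oslo.log — the module, option, and group names are hypothetical, not from any real driver:

```python
# Hypothetical driver module using the Oslo common libraries for
# configuration and logging, the same way the big projects do.
from oslo_config import cfg
from oslo_log import log as logging

LOG = logging.getLogger(__name__)

# Driver-specific options, registered under an illustrative
# [my_platform] section of the service's config file.
opts = [
    cfg.StrOpt('mgmt_host',
               default='localhost',
               help='Management endpoint for the hypervisor.'),
    cfg.IntOpt('connect_retries',
               default=3,
               help='Times to retry the management connection.'),
]

CONF = cfg.CONF
CONF.register_opts(opts, group='my_platform')


def connect():
    # Reads like any other OpenStack service's code, which is the point:
    # operators configure and debug your driver the same way they do Nova.
    LOG.info('Connecting to %s', CONF.my_platform.mgmt_host)
```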
So spend some time going out to Oslo, look at all the different sub-projects that are out there, and understand what they're doing and what problems they're solving, because adopting those types of libraries really helps development, and it helps the adoption of your project, just because it uses these common solutions.

And what we say is: be a chameleon as much as you can. Obviously your project is going to be a little different — otherwise you wouldn't be developing it — but as you look at the other projects you can really see patterns: patterns in the way they do directory structures, patterns in the tools they use, patterns in common libraries, that type of thing. If you can't follow a pattern for some reason — obviously you're going to have things that are unique — understand those uniquenesses and be able to explain them. It's usually a lot easier if you can explain to somebody why you have something that's a little bit different, or why you can't use the common patterns or tools.

And maybe along that point: as you're developing, you have to have code reviews, and depending on the project you might have different core reviewers, and the core reviewers are ultimately the ones who will merge your patch. Being a chameleon really makes it easy for those cores to have a consistent experience as they're reviewing the code. If you follow the same patterns, it's going to be easier for them to understand what's going on and let that code through. If you do it completely differently, it's not going to get approved. And if you're developing in, say, Cinder, where every driver is in-tree, the cores really need to be able to understand exactly what you're doing. Ultimately they hold the keys to whether or not your code gets in, based on whether they understand and agree with everything in there. So those interactions, and being able to explain things, are really important.

One of the tools that I kind of liked was the hacking tool. It's a Python tool for finding unwanted code patterns in the actual code. A recent example was that log.warn was deprecated, and everybody switched over to log.warning — just a little difference there — but there wasn't originally a rule in hacking to find that. We found, as we were getting rid of that stuff, that people weren't quite changing their habits and were still typing log.warn. So putting a hacking rule out there that looks for log.warn, puts out a message saying you should be using log.warning, and actually flags it so it doesn't pass validation, was really good. That was kind of neat. You can also adopt a lot of those types of rules from the different projects. Nova had a ton of rules, so as we found hacking and started to implement it in our project, we adopted a lot of the ones that were in Nova so that we were very consistent — so that our Nova driver had a lot of the same rules that the drivers in the main Nova project did. Just being a chameleon there, right? And there is a good reference on how to actually write a hacking rule: if you've got something unique to your project and you want to make sure some pattern doesn't get into your code, there's a little webpage out there on how to write a hacking rule and have it validated.
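To give a feel for what such a rule looks like, here is a minimal sketch of a local hacking check in the style of the in-tree checks projects like Nova carry; the check number, module path, and regex are illustrative:

```python
# my_driver/hacking/checks.py (illustrative path). Hacking checks are
# flake8 extensions: a function that inspects each logical line and
# yields (offset, message) tuples for violations.
import re

log_warn_re = re.compile(r"LOG\.warn\(")


def no_log_warn(logical_line):
    """X301 - LOG.warn is deprecated, use LOG.warning.

    Okay:  LOG.warning('stale metadata')
    X301:  LOG.warn('stale metadata')
    """
    if log_warn_re.search(logical_line):
        yield (0, "X301: LOG.warn is deprecated, use LOG.warning")


def factory(register):
    # Older hacking releases registered local checks through a factory
    # like this (wired up in tox.ini); newer ones use flake8's
    # local-plugins mechanism instead.
    register(no_log_warn)
```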
I think Kyle made a really important point there on being a chameleon: if your driver belongs to a specific project — like Nova, for example; our driver was a compute driver that fit into the Nova project — you want your driver to be as close to the rules and culture of that project as possible. If you're writing a Neutron driver for your ML2 agent, you want to make sure you borrow hacking rules and style and all those pieces from Neutron, and the same thing for Cinder, or Ansible, or any of the other projects you'd be building a driver for. And with that hacking rule — I think, Kyle, you made that log.warn hacking rule, and it was seen as valuable, and somebody actually came into our projects and started porting it to the other projects as well. It was really nice that something we put in was then replicated across these other projects. That's one of the benefits of being part of this open community: it's not just us following them — if we have something that's seen as useful, we can bring it to the other projects, or they'll come and take it from us as well. Borrow it from us.

Okay, so upstream testing. Centering on testing here: good unit tests are invaluable. We found that day in, day out. When we started our projects, we tried to write a lot of tests, and as we started out, we looked at what was out there. It's pretty common knowledge now, but there are still a lot of tests out in Nova that use mox — and it's been pretty well publicized that everybody should be using mock rather than mox — so we did all of our unit tests using mock. But we also found that as we wrote unit tests, we did a lot of duplication, right? We mocked up a few resources, we did our tests, and then we found, oh, we need another test for this other scenario, so we did a lot of cutting and pasting. One developer would create two or three tests, and another developer would add some function to the driver and copy those tests down, and it turned out we had a lot of duplication.

Then we kind of stumbled upon fixtures, and fixtures are the answer to getting rid of a lot of that duplication. If you study what you're mocking out across all these different test cases — study the resources you need to mock out — you can set up common fixtures that do all of that setup work for you up front, and you won't have all this code duplication. It does take a little bit of stepping back from the tests themselves to look at the resources and how they can be commonly mocked out. In some cases you set up a fixture of common mocks, and everything's not quite done, and you just have to tweak a little bit more for an extra use case. So as you're developing tests, take the big-picture view: it's not just this one unit test I'm coding for — I really should be looking at the resources I could create a fixture for, because down the line I'll use it 10, 20, 30 more times.

The last point is that good unit tests have to run fast, because they get run a lot. You've got to make sure you're not doing things like sleeping. There are some cases in the code where you might want to do context switching, so you might have a sleep zero; there are other cases where you might have something that waits before a retry. But you've got to make sure those types of sleeps are not actually executed — that they're mocked out in the unit tests.
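Here is a minimal sketch of what that fixture approach can look like, using the fixtures library and mock — every class name and patch target is hypothetical, and it also stubs out time.sleep so retry loops don't slow the tests down:

```python
import fixtures
from unittest import mock


class FakePlatformSession(fixtures.Fixture):
    """Shared test fixture: one place to mock the hypervisor session."""

    def _setUp(self):
        # 'my_driver.session.Session' is an illustrative patch target.
        patcher = mock.patch('my_driver.session.Session')
        self.session = patcher.start()
        self.addCleanup(patcher.stop)
        self.session.return_value.host_uuid = 'fake-host-uuid'

        # Neutralize retry sleeps so the whole suite stays fast.
        sleeper = mock.patch('time.sleep')
        sleeper.start()
        self.addCleanup(sleeper.stop)


# In a test case (testtools.TestCase supports useFixture directly):
#
#     def setUp(self):
#         super().setUp()
#         self.sess_fx = self.useFixture(FakePlatformSession())
```

The payoff is that each test only states what's unique to it; the shared setup lives in one place instead of being copied and pasted across the suite.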
These tests will be run many, many times in CI — and even beyond CI, that's really important just for local development, or if you have other contributors to your project. Because if your tests aren't fast — if some guy has to sit there wanting to write a one-off patch for your project, someone who's never going to contribute anything again, and it turns out your unit tests take two hours to finish running — he's probably not going to be very excited about making contributions to your project anymore.

One of the things I found is that we actually spend a lot more time writing the unit tests than we do the code, which is a very different mindset than what we were perhaps used to. In the past it was: get the function out, deliver this. Here, you need to make sure it's robust, and you need to know when something breaks. Almost all of OpenStack is in Python, a non-statically-typed language, so a lot can go wrong; these unit tests really provide the ability to know when something's broken. We'll spend up to three times the amount of time on the unit tests as on the actual code itself — that's very common. You need to invest in these, because they make sure you don't keep paying that cost over and over again. OpenStack is constantly evolving; this is a very fast-moving community. So when APIs change, things like unit tests — and the CI, which we'll talk about in a minute — really help you understand when something's changed and you need to react.

Right. So we'll now talk about third-party CI. Most projects require that, if you're bringing a driver in, you have a continuous integration system. This is run with a project called Tempest. Tempest is an OpenStack project that runs about a thousand — I think now it's like 1,400, maybe more — functional tests against an OpenStack cloud. The way this third-party CI works is that for every patch proposed to your project, or to the project you're participating in, you need to run the entire Tempest suite. So you need to deploy an OpenStack cloud, configure it to use your driver, and then run over a thousand tests against it. This takes time.

The reason for this is really twofold. The first is so that the authors of patches are aware when they break something. This is important if, say, you're developing against libvirt and you introduce a change for Nova and the libvirt driver — maybe you find out you broke something along the chain and you need to go fix it. But for the other drivers it's also important, so that you understand when a big change is happening in the community and can react before that change is merged. By running against every patch proposed to the community, you can actually see: hey, this big refactoring is coming in, and I need to get a patch into my project ahead of time, so that when it gets merged into Nova, or the respective project, I've reacted as well. And again, part of that depends on whether you're a separate project or integrated into the main project — even if you're integrated, like a Cinder driver in Cinder, you still need to bring forward a third-party CI. Unit tests test units of function; this tests the global function across the cloud itself.

But this is a really big problem. This is a massive investment. The number of patches proposed per day is on the order of hundreds.
And if you look at all of the projects across OpenStack, it's on the order of thousands. So the first thing is to understand which projects you need to run your CI for. We have a Nova driver, we have a Neutron driver; we don't have a Cinder driver, but we do have Ceilometer. So understand which projects you need to be listening to for events.

Now, the average run time is about two hours, depending on the project and the driver — we find it to be about two hours, at least for our driver. And if you're doing this several hundred times per day, you're deploying several hundred OpenStack clouds per day and running a thousand tests against each of them. That's a lot of infrastructure. So you have to think about this a lot and do a lot of up-front planning. Can you share resources? KVM uses a technology called nested virtualization — putting OpenStack clouds in OpenStack VMs. There are other ways you can share this infrastructure, so spend some time and think through that. I know we had a big challenge here, because we are a flat hypervisor — we run in firmware — so there isn't that nesting capability, but we found ways to optimize that infrastructure.

Now, albeit a big problem and a big undertaking, the good news is you're not the first one to do it. In fact, you're probably the several-hundredth person to do this. Everyone's been here; everyone's worked through this. And the community CI infrastructure is all open source, so it's all out there for you to see and to use. There's this really good OpenStack project called Puppet OpenStack CI. What this provides is a way for third-party vendors to deploy a basic OpenStack CI infrastructure. It's set up initially for KVM, and I think it's optimized for Cinder drivers at the moment, but it really is a great foundation. You do need to spend time customizing it to get your componentry within it, to make sure it's testing your code and setting up VMs for that. So it's really good that you have these resources here.

It's also important that you set your success criteria. With a lot of the CI infrastructures in the community, there's the pets-and-cattle mentality, and many of the CI infrastructures are pets: they set it up once, they update it, they make sure it never goes down. And since these CI systems are voting on every patch set, if your CI system has a hiccup, you start putting a minus one — a disapprove — on every code review. It turns out the community is going to get really mad at you if your CI system is off in la-la land, minus-one-ing everyone's patches. So for us, what was really important was recognizing that stuff happens at times. One of our criteria was that we'd be able to tear down and redeploy our entire CI infrastructure with a single script. We did this in case something went wrong, and it has turned out to be a really great feature of our CI infrastructure: if you just need to update everything, we have a single point of execution to update everything. It's very important to know that you have all of these resources here. You're not the first one doing it, and the community has really spent a lot of time to make sure that you'll be successful.

Yeah, and I think I would say one important thing, even beyond the fact that the Puppet OpenStack CI project makes it easy to get started, is that it makes it easy to get help if you mess up.
If you go off and design your own CI system on your own, it could be the coolest, most complicated thing — and then you hit a problem, and you are suddenly the only one with that problem. Nobody can help you figure out how to solve it. But if you're deploying the common tools from upstream — if you're using Zuul and Nodepool and Tempest, pulling down all these projects straight from what the upstream infrastructure uses — when you have a problem, you're most likely not the only one with it. You can drop into the OpenStack infra channel, or into the third-party CI channel, and ask questions, and people will be collaborating around the same problem. And you can contribute your fixes back upstream. In the process of setting up our CI, that's something we've done: we found an issue in Nodepool, for example, resolved it locally, and pushed a patch back upstream to Nodepool itself, so other people building their own CIs down the road won't hit that same problem.

And by building on that open source foundation, when something goes wrong, everyone can reference the source code; everyone can see why it might have an issue. Whereas if you build your own and go ask for help, you have to understand that they can't even see your system. If you've built a proprietary CI system and something goes wrong — yeah, you're alone. That's why these tools out there are so important: it's still a very difficult problem, but they make it easier.

One thing to remember is that you are building a new driver, so when do you actually get started? You can't build the CI before you have a driver. What we did is take some time to get our driver actually developed: get baseline function in there, get the unit tests in place. Once that was in place, we started to run Tempest against it — and just running Tempest alone, we found several issues. After that, once we got Tempest running, we could start working on building our CI. It's a bit of a chicken-and-egg scenario: you do need to plan for your CI up front, but you also need to know when you have enough function ready to get started on it. Now, before your driver actually gets integrated, do expect the community to say that this needs to be running for several months before it gets integrated upstream. That's been one of the pieces of feedback we've gotten from Nova, specifically.

Another thing to remember: developing the driver is great, but you also have to be able to deploy this driver — get it into OpenStack clouds. Just putting, say, a Cinder driver in Cinder doesn't necessarily mean that anyone can use it; you've probably done hand setup to get it working. There are two points of view you need to have when thinking about deployment. The first is developers: you would have what's called a DevStack plugin, so that you can bring your driver into a development cloud — DevStack sets up a development cloud. But also, and perhaps more importantly, there are operators. How do operators take your driver and integrate it into their cloud? There are a lot of different tools operators use. For operators using distributions of OpenStack, you might ultimately have to find out how that distribution deploys it. And you need to prioritize, obviously, based on what's the best approach for your users.
There are three or so open source deployment tools that are prevalent in the community: there's Chef, there's Puppet, and there's Ansible. What I find really interesting about this is that DevStack is for one to two nodes, while these operator tools — Chef, Puppet, and Ansible — are really for deploying to thousands of nodes. You have to understand how an operator handles this from the thousand-node point of view.

So yeah, as Drew was saying, you have this perspective, and once you get up there and have to support this long term, you need different bits and pieces out there for your environment. For example, the DevStack plugins he mentioned. There are lots of ways to work with DevStack, and when we first started doing our development, there wasn't really this concept of plugins upstream. So we hacked this whole big wrapper around DevStack that was custom for our environment and did a lot of our development that way. It turns out that wasn't a super awesome idea, because every time something changed in DevStack upstream, our tools would tend to fall apart and break on us. Once you have things like the plugins, you can really start iterating more quickly on a lot of your development and make sure you're sticking closer to what's used in the upstream community.

And then finally, you have the broader deployment piece that Drew talked about: Ansible, Chef, Puppet. You will have to be involved in those communities to get your drivers supported out there. You can write code and get it used locally, but unless you're up there putting in specs and blueprints and working in those communities to get your drivers in, a lot of your users will have no way to actually use your project at the end of the day, no matter how good it is. The cores of the driver's home project will only care about getting the code integrated into that project, but you probably have the point of view of: let's actually get this thing used. So you do have to work in these other projects to have a holistic view of how people actually use the code you've produced.

Yep. I guess I'll take things to the last point. One thing I just want to wrap up with is: be willing to communicate with people. That's a really important part of this. Be out there, be willing to communicate, talk to people, watch the mailing list, be in the IRC meetings, go to conferences, mid-cycles, and summits, and talk about your driver with people — because otherwise they might not know it exists. If they're going to put something in that's going to break your driver, you'll have no idea; but if you're there and you're present, the cores and contributors to the project will tell you up front. For example, a problem we had: "hey, we're probably going to break live migration for your project with this patch that's going in" — and you get a little bit of a heads-up. So just be present.

Yep. And consistently be present for a long period of time, because this is something you're going to own for a long time, and the community needs you to own it. For your driver to be successful, you need that commitment. I think we're at time, but if there are any questions, do you want to come to the microphone, please?

So I've got one question: what do you think about the driver decomposition topic? Driver recomposition?
The decomposition topic. Decomposition, yep. It's interesting. So the question is: what do we think about driver decomposition? The background on that is that in some projects you have all of the drivers integrated in-tree; in other projects, those drivers can be taken out, and the authors or owners of that code can have their own separate project. There are benefits to both approaches. With a decomposed project, you solely own your future — you still have to fit within the community model, but you're able to turn code around faster and make changes more quickly. I don't know; they both have advantages. It's really up to the core team, who have real insight into that. It makes it easier to land code, but you also lose a bit in doing so.

Sometimes it feels like a bottleneck, having code upstream — if you've had that feeling? Yes. Again, you're integrating into a big community, so if you're in a fully converged project where all the drivers are in-tree, it can take several months to get patches through. Yeah, it can be really painful to get those reviews. But the flip side is, if you're not in there, if you're not upstream, it's a lot harder on your users at the end of the day. If you go the other direction, you're basically shifting the burden onto your users rather than onto yourself, and that's usually not something that's good for the life of a driver, to be honest. Thanks.

Okay, well, I think we're at time. Thanks, everyone. Thank you. Have a good day.