Great, it looks like it's time. First of all, I want to make sure everyone is aware of the intention of this session. I have two different sessions: the one on Thursday at 11 o'clock will be for people getting started. This session is for people who are interested in contributing, people who may be interested in porting their existing Puppet solutions to the common set, or who are working downstream and upstream, and it's going to be very focused on how we work better together: what things need to be fixed, and I hope we can start to say who can be responsible for certain parts of the code base. It's definitely intended to be a discussion. I do have slides, but they're really more of an agenda. So I wanted to start by just asking: who's actually using the Puppet modules that we've been working on? Several people. Who's using them in production? Several people; a few people, anyway. Great. So first I want to ask: who follows the puppet-openstack mailing list? Just so people know, there is a mailing list, and I'm trying to be more transparent and give people an opportunity to be involved. The discussions around development for Folsom, and also going towards Grizzly, will all happen through this mailing list. And honestly, if it doesn't make sense for it to have its own mailing list, if the reality is that it should just happen on the OpenStack mailing list, I'm open to that suggestion. Do people think it makes sense to just have all communications around the Puppet stuff on the OpenStack mailing list,
or do you think it makes sense to maintain its own mailing list?

Yeah, and I totally agree with that. Maybe if the modules do become part of upstream, which is something we could think about in six to nine months, then it makes more sense; but for now I think it makes sense to keep its own mailing list. And if people who are monitoring the regular mailing list see Puppet-related questions, it would be immensely helpful if you could direct those people to this mailing list.

So I'm going to start by talking about some of the Folsom support that I've been working on, which you can find in this repository. The main things I'm going to talk about are new features. I've done a redesign of the way everything is structured, which I want to talk about a little, and I also have a Vagrant-based development environment that I've been working on. I wanted to at least ask: what do people think about Vagrant as a way for us to code, with a shared repository, so we can all develop on something very similar?

Yeah, and that's exactly my... Yeah, you have a lot to say, so you can hold on to that. Great, thank you. Maybe I need a little refresher. Does someone actually want to keep notes? It'll probably get kind of boring, unless you guys enjoy watching me make typos, so if someone can get on that now. We have this Etherpad.
It's on etherpad.openstack.org: devops-puppet-upstream. Hey, you guys are almost a computer-free bunch in here.

I just want to start by giving a high-level overview, for people who have familiarity with the Essex code base, of the things that are new. Not surprisingly, they correspond directly to the things that are new in OpenStack: there are now Quantum and Cinder modules. The Cinder module is pretty well tested and working. For the Quantum module, honestly, I've verified everything except the L3 components; so the DHCP agent and the regular networking agent, for OVS. And I know that Cisco also has some work they've done on Quantum that I want to take a hard look at, because I know that's actually pretty well tested.

Actually, one question to ask: who has their own fork of the Puppet modules that they're maintaining? Are these all different forks? Do you mind if we just go around the room and have people say what their fork is, whether it's a fork or started from scratch, and who they're with?

Ours started from scratch; it was before you actually had yours up, so we've kind of kept going. Haven't had a chance to look at combining the two yet.

You're with NeCTAR? Yeah, NeCTAR. Okay, I think we've talked before. Yeah, basically the same situation: we wrote ours prior to yours being released, but we've actually moved a lot of yours into ours and kind of merged them.

You guys use the native types, right?
Yeah, that's right, but we haven't ported the actual manifests yet. No? Okay. I actually wrote a bunch of modules that we're trying to contribute back at this point.

Good. Well, one of the questions I have going forward concerns some of the design changes I've made; I'd be interested to know whether that helps motivate you, or whether it's going to make it easier for you to get on the same set of modules. There were a couple of other hands.

Yeah, so I'm Eugene from Mirantis. We have forks which add high availability to OpenStack. They're not publicly open yet, but I've been contributing parts of that back to the Puppet Labs modules. That's slowed down recently because we just had to deliver to customers, but it will continue.

Yeah, and we've actually been working closely to go through all those patches and start merging things in.

So, going through the modules: Folsom support is fairly rough. It's going to be fairly rough for Quantum, but Cinder is basically the same as nova-volume, so it's not that big a deal.

The main change I want to talk about going forward... oh, first, general-purpose modules. There's now an Open vSwitch module, which is actually pretty cool because it has native types and providers for creating ports and networks with the OVS command-line control tool.

This is probably the biggest change for people who are familiar with the old version of the modules: they had native types for managing individual lines of nova.conf, and I've actually expanded that concept.
So now there are native types for every single INI file that's being managed for OpenStack, and the main advantage of this over what had been done previously is that you can override any configuration from top scope.

There were really two main motivations for making the switch. One is that a lot of the files, especially the paste API INI files, are assumed to be tied to the version that the package deploys; so this lets you automatically assume the defaults from the packages and then use native types just to edit the individual lines you care about. It's individual resources managing individual lines. The other advantage: in the previous version of the OpenStack modules, for Essex, there were a lot of pull requests and patches saying, hey, can you add this extra parameter, can you customize it in this way? With the native types, and I'm happy to show an example, you don't have to do that. You can customize any configuration file you want from top scope. So all of the parameterized class interfaces are really just configuration interfaces with reasonable, sane defaults, but any customization you want to do, you can do in user space, preferably in a site manifest.

And I think for you guys in particular: how much easier does this make it to get on board?

Yeah, a lot of the modifications we made were basically parameterizing the holy mess out of everything. The original manifests were too restrictive for our purposes; we're doing a lot of non-standard things, all sorts of other things. So the more you can parameterize and make configurable, the more publicly consumable it is.

I just want to show really quickly an example of what this looks like. Is it following me? Good.
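As an aside, the core mechanic these per-line native types rely on can be sketched in a few lines of Ruby. This is purely illustrative, not the modules' actual provider code: set exactly one key in one section of an INI file, leaving every other line of the file untouched.

```ruby
# Illustrative sketch of what an INI-file native type's provider does:
# manage a single section/key line, preserving the rest of the file
# (e.g. the defaults that came with the distro package).
def set_ini_setting(text, section, key, value)
  lines      = text.split("\n")
  in_section = false
  header_idx = nil

  lines.each_with_index do |line, i|
    if line =~ /^\[(.+)\]\s*$/
      in_section = ($1 == section)
      header_idx = i if in_section
    elsif in_section && line =~ /^\s*#{Regexp.escape(key)}\s*=/
      lines[i] = "#{key} = #{value}" # edit just this one line
      return lines.join("\n")
    end
  end

  # Key not present: add it under the section header, creating the
  # section if needed.
  if header_idx
    lines.insert(header_idx + 1, "#{key} = #{value}")
  else
    lines.concat(["[#{section}]", "#{key} = #{value}"])
  end
  lines.join("\n")
end
```

The real providers do more (idempotency checks, purging, whitespace handling), but the point is the same: one resource, one line, everything else left alone.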
Just making sure I can spell. To show an example so people understand what I'm talking about: if we switch to my Folsom branch and look at one of the manifests, say for configuring the API service... right, the configuration interface pretty much remains as it is. You can see these are the parameters we can use to configure glance-api. Previously I was using concatenation to manage the whole files, but now we just have these individual resources: every line is actually a resource that manages that particular line of the configuration file. So just as this file here is specifying the glance-api config lines, you can do the same thing at top scope. Since it's not managing the whole file, it's not actually restricting what people using the modules can do, and I think that's going to make it a lot easier to maintain going forward, and also a lot easier for people to use and customize. I think one of the biggest complaints from people who had already rolled their own Puppet solutions was that it's just not flexible enough.

For some of the folks who had rolled their own solution: does this make it easier to consider this?

I mean, it seems way too complicated for what we do. We just need to push out a config file for Glance; we just have a file, it's a template. And I guess we're all Python people where I work, so we don't really do all the Ruby stuff, and we get a little bit intimidated by all these custom types and everything. We know what we need to do: we send a file. So from our point of view, that's why I'm a little bit standoffish.

So when you say intimidated... I guess once it works, it's less intimidating. Yeah.
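For reference, the top-scope override being demonstrated looks roughly like this (the resource name follows the modules' `<service>_config` convention; the particular settings shown here are just illustrations, not the ones on screen):

```puppet
# In a site manifest, outside any parameterized class: manage two
# lines of glance-api.conf directly. The class interface still
# supplies sane defaults for everything else in the file.
glance_api_config {
  'DEFAULT/workers': value => '4';
  'DEFAULT/debug':   value => 'True';
}
```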
Yeah, I guess it's a bit of a step, and we'd have to learn more about Puppet to go into it. I mean, an operator who doesn't know anything about Puppet can easily go in and say, this is the config file, I need to edit this one line, instead of having to know a bit more about Puppet. I don't know whether other people feel this way, but that's how we're approaching it at the moment. That said, I haven't looked in depth at how it works, so I need to do that before I can comment more. Cool.

On the other hand, I love it. I recently moved from an Essex deployment, using the available Puppet Labs OpenStack stuff, to Folsom, and being able to bring in the sane defaults from the new Folsom paste INI files would have saved me a lot of work. Okay, so you didn't use this? I didn't use this; I used the older stuff.

Just curious: how many people knew I was actually working on this refactor? Not me. Yeah, again, back to the mailing list. Okay, so a few people knew.

Going back to this: probably the main thing that's missing right now, and I'll assume this is a task for me, is making sure that all these types support purging, which will actually be fairly easy to do. For some people, you can assume sane defaults from packaging for upgrades and just override the stuff you care about; but I think a lot of people, especially people using this to actually launch products, would want the ability to control every single config file and purge entries that aren't explicitly configured. So that's my to-do: I still need to implement purging, which I haven't quite done yet.

The next thing I wanted to talk about is a shareable development environment, which is something that I put together.
It's how I'm going to be developing, at least initially. The main motivation is really rapid development: being able to develop and build out OpenStack environments on your laptop, but also ensuring that, as people collaborate, everyone is actually using the same development environment. That way we have the same manifests, the same virtual machines, the same NICs, and the same IPs, so at least when we're developing we can develop fast and assume everyone is developing in the same environment.

The development environment is composed of just a few files. It has a Puppetfile, and the Puppetfile is fairly new; this is librarian-puppet, which was released by Tim Sharpe. The Puppetfile gives you a list of all the modules that need to be downloaded. This is kind of what I was doing with the other-repos YAML file, for people who have familiarity with the Essex code, but now it's a separate product: you run librarian-puppet install in the directory, and it iterates through something that looks like this. So it looks very similar to the other-repos YAML, but it supports both the Puppet Forge, which is where stable release code is going to be, and also GitHub, even branches on GitHub; you can see some of this stuff is a Folsom branch out of my GitHub repo. Given this file, you can just run librarian-puppet install and it'll download all these modules, either from the Forge or from GitHub. Again, this goes back to recreatable environments. It's very similar to submodules, but I think it's going to be a lot easier to maintain than submodules.

So if there's a tagged version or whatever that you always want to reference, in other words a release... The ref, yeah. Okay, beautiful. Yeah, ref can be used in this case.
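A minimal Puppetfile looks something like the following sketch; the module names and repository URL here are hypothetical placeholders, not the file shown in the session.

```ruby
# Puppetfile -- read by `librarian-puppet install`
forge 'http://forge.puppetlabs.com'

# A released module, pulled from the Puppet Forge
mod 'puppetlabs/stdlib'

# A module pinned to a branch on GitHub; :ref accepts anything
# git understands as a revision (branch, tag, or SHA)
mod 'glance',
  :git => 'git://github.com/example/puppet-glance.git',
  :ref => 'folsom'
```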
I'm using it to reference branches, but it's really just revisions; anything Git knows about as a revision.

Who has experience with Vagrant? Is anyone opposed to Vagrant? Okay, so not everyone's used it. Vagrant is fantastic, except for one tiny little detail: it only works with VirtualBox, which is horrendous. But I will tell people who are just getting started: as terrible as VirtualBox is, Vagrant is so awesome that it makes it worth using. Can anyone else attest to that?

I heard Mitchell say something about that: for Vagrant, he's moving away from trying to keep it tied to VirtualBox. The back end is supposed to be broken out so you can tie it into other VM environments, and I've heard things about VMware Fusion, which is actually what I want.

Just to show an example of what this looks like for developing and deploying OpenStack environments with Puppet: this basically models all of the things that I deploy. For the case of a simple two-node installation, you really care about this openstack_controller and a compute1, and here it's just specifying the IP addresses and how much memory they'll get. The main things to note are that we're actually creating specific interfaces. Especially when you're using nova-network, you have to specify your private and your public interface, so with this I can ensure that I know what the addresses of those things are, and that all the virtual machines created have three interfaces. And of course the most important part is this: it's running Puppet three times, which I'm not extremely ecstatic about. It runs shell provisioning first, apt-get update, because there's this weird chicken-and-egg problem with installing software-properties.
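The Vagrantfile being described looks roughly like the sketch below. Hostnames, IPs, and memory sizes are illustrative, mirroring the two-node layout on screen rather than reproducing the actual file, and the syntax is the Vagrant 1.0-era API.

```ruby
# Vagrantfile (Vagrant 1.0-era syntax) -- two-node OpenStack dev layout
Vagrant::Config.run do |config|
  config.vm.box = 'precise64'

  { 'openstack_controller' => '172.16.0.2',
    'compute1'             => '172.16.0.3' }.each do |name, ip|
    config.vm.define name do |node|
      # Second and third NICs: the public and private interfaces that
      # nova-network expects (eth0 is Vagrant's own NAT interface)
      node.vm.network :hostonly, ip
      node.vm.network :hostonly, ip.sub('172.16.0.', '172.16.1.')
      node.vm.customize ['modifyvm', :id, '--memory', 1024]

      # Shell first (apt-get update, to work around the
      # software-properties chicken-and-egg problem), then Puppet
      node.vm.provision :shell, :inline => 'apt-get update'
      node.vm.provision :puppet do |puppet|
        puppet.manifest_file = 'site.pp'
      end
    end
  end
end
```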
You might have to update your repositories before you can install software-properties, which you have to do before you can use PPAs. I'd be happy if someone knows how to fix that, or has fixed it.

The other thing is that first we run Puppet, and Puppet sets up networking in /etc/hosts for all the VMs. One thing I warn people who want to use this development environment about: by default it assumes that, on the same network, at 172.16.0.1, you're running a Squid proxy on port 3128. That's something you definitely should do if you're going to use this kind of environment for testing and iterating on modules: run a proxy so your iterations are super fast. Download all the packages into your proxy once, and the rest of your runs will be super fast. Just be warned that this stuff assumes by default that you've done that, and you can see in the hosts manifest, which runs first, that it does configure apt to use a proxy at that 3128 address. This is another one of the advantages of using this dev environment: everyone can consolidate around the best ways to do fast iterations on the modules, and then codify those things so we can share them.

The question is: have you had any problems detecting proper Puppet runs coming out of Vagrant?
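As an aside, the proxy assumption described above amounts to a one-line apt configuration, which a manifest could manage roughly like this (the path and address reflect the defaults mentioned; a sketch, not the repo's actual hosts manifest):

```puppet
# Point apt at the local Squid cache so repeated provisioning runs
# don't re-download every package.
file { '/etc/apt/apt.conf.d/01proxy':
  ensure  => present,
  content => 'Acquire::http::Proxy "http://172.16.0.1:3128";',
}
```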
You're asking about determining whether the runs failed or passed, correct? From my understanding, right now Vagrant doesn't know the difference. I'm not actually sure; we had to do some wrapping stuff to get CI to even know that it was not a perfect run.

That's actually something I want to get to: CI. Right now, for me, this is a manual test, and I haven't quite integrated this with Tempest yet, which is something I want to get to and want to talk about. So the answer is no, I'm not quite there yet.

Okay, so the last thing, of course, is the site manifest, which specifies the hard-coded information matching the environment of virtual machines that Vagrant creates. It has instructions for creating, say, a MySQL-only, Keystone-only, or Glance-only node, but for the more general case, for creating an OpenStack controller and a compute node, that's here. The basic idea is that, through these node blocks in Puppet, the virtual machines wind up using particular manifests based on their certificate name.

So, really, just thoughts: does it make sense to try to share development environments? I guess there's still a step of getting more people actively contributing to the modules, but does this seem useful? It's definitely useful to me; does it seem like something that's useful to other people?

I think it would be great.
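The node-block mechanism described above looks roughly like the following sketch; class names and parameters are illustrative, not the actual site.pp.

```puppet
# site.pp -- each VM picks up a role by matching its certificate name
node /openstack_controller/ {
  class { 'openstack::controller':
    public_address => '172.16.0.2',
    # ... database, keystone, and glance settings ...
  }
}

node /compute/ {
  class { 'openstack::compute':
    internal_address => $ipaddress_eth1,
  }
}
```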
I think it's something we definitely need, even if we don't admit it, even if we all like our own little sandboxes. Even if it's just to have some place to go after you finish doing what you do in your sandbox, just to make sure it works somewhere else.

And I think as well, even for people who say, well, I actually want to deploy this in my data center, this is a reasonable starting place: start here, make sure it works here before you deploy in your data center, and then if it fails there, you can ask what's different between this environment and yours. That's what I see it as: before you even put it into your CI pipeline, before you even get it into a deployment-ready state, having that first check at the individual development level.

My concern would be being too sticky around a particular deployment model, particularly around networking and some of the physical configurations. In a lot of our stuff, while we have been doing a lot with the other guys, what tends to get in the way is that we make assumptions about what the deployment model is going to be, and then having one environment leads us to being, not stuck in that environment, but surprised when we leave it. So my recommendation is that, if you have those Vagrant definitions, you actually build a catalog of deployment models. For example, you define single interfaces for those things, right? But a lot of the time, the way we deploy OpenStack, we do it with bonded interfaces. So maybe a Vagrant model that shows how to validate that the bonded interfaces actually get passed through correctly, and that interface names show up in the right places in Nova, and all that kind of stuff.
Those kinds of issues are what we found in dealing with some of our stuff around parameterizing things: those assumptions get you in trouble.

And I could see one potential solution being to just keep expanding the site.pp manifest and the Vagrantfile to handle those cases, but I'm not sure. This is kind of leading towards... I'm just talking about a consistent deployment environment that you can use for iterative development, and I think the real question is automated testing, and in particular automated testing of, we'll call them, verified deployment scenarios. I'm not sure I'd like Vagrant to be both the iterative development tool and also the tool that specifies the deployment scenarios for testing.

But I'm not sure. It's the DevStack problem, right? Nova's great when built and tested with DevStack, but it does great when it looks like DevStack, and we start finding rough edges as we deviate from what DevStack does. So my concern is that by defining a development environment, without necessarily providing some catalog method, we end up validating one path very well that may not be applicable to how we're actually deploying. And I guess my suggestion is to at least build up enough information as we go along about deployment areas where things fail or get fuzzy. Maybe you don't choose to run them all the time, but at least have them available.

And I guess there are two ways I can think of immediately to do that.
And of course I'm really open to feedback. One of those, probably the reality right now, though not the best way, is to crowdsource it: people are going to start here, people are going to deploy it, I'm going to take patches, we all work together, and the more people that work on it, the more solid it is for more use cases. The real answer is probably codifying more use cases, which I would file, for now, under "...".

Yeah, and that's actually interesting, because in physical infrastructure I could get away with two interfaces, but because of the way networking works in Vagrant, or I guess more specifically VirtualBox, I have to have three.

And there's no question that if one of the things people decide is, hey, what we need is a Vagrantfile that imports other Vagrantfiles based on some piece of information, then that's reasonable. I'd actually be happy to say there can be a number of Vagrantfiles that support various people's use cases.

I just wanted to mention that I ran into the same thing earlier this year, where I set up a complete environment through Vagrant and VirtualBox, and when I went ahead and deployed it on the actual hardware I was targeting, I spent about two or three weeks just modifying and tweaking it because of those differences. But what I ended up finding, and I pushed back changes for this, was that after the changes I pushed back, the two environments are now basically deployable by the same manifests. So I think a deployment catalog is a bit too restrictive. I think the whole point of Puppet is to take into account the differences between environments, and even though it would take a lot of work,
and there might be a lot of other cases, I think just generalizing everything enough so that it's deployable across a large range of environments would be the better, or the more attainable, goal.

Yeah, sorry. Right, so in terms of networking being a problem: the networking really wouldn't be a problem with Puppet then, would it? Well, that's the point: something's got to do it, and if Vagrant has those IPs and interfaces hard-coded in it, you will end up needing a catalog of Vagrantfiles, not manifests. That's very different: our manifests work fine in Vagrant and fine on hardware; they work for both, the same manifests. But the Vagrantfile itself, which is just the single text file he showed you, the one that defines the interfaces and everything, very likely we'll need a series of those. Some people will want a different network configuration than others. That's outside of Puppet's scope, and it happens as part of the operating-system creation, so it has to happen in that Vagrantfile, and that's where a catalog would be useful.

One of the things related to that: we have kind of a play Vagrant environment that does everything using Razor, but adding bare-metal provisioning adds some extra orchestration. It makes things a little more complicated, especially around data-lookup orchestration, which I actually have branches and things that solve using PuppetDB. But just as an FYI on this stuff: I am
I am Hopefully getting kit soon, and I'll actually have Hardware so something similar will be running, you know on hardware Booting everything from scratch and it'll be tied to you know, whatever changes come in and that's that's something that I'm hoping to have done winter Sometime in winter Because I'll actually get I'm actually getting gear fairly soon So that'll be different kind of tests, you know this for development And then something that's actually doing everything from provisioning to setting up network I'm not sure I would like for those things to be as similar as possible, but it's possible that the vagrant file gets replaced by Something else and and maybe that's something is you know We have puppet puppet resources that model razor deployments or maybe puppet resources that model networking But then you get in the question of well, what networks does it work on what gear does it work on? And if people have have more questions about this we can go back But but I at least want to kind of start to go through What are the things that need to be done and I'll leave this pretty open in terms of you know, what do people think is missing I Want to start with some of my things and I know Yeah, honestly Folsom support which is also gonna be it's it's a refactor which is gonna be the same as grizzly support going forward I've tested and validated it on on just a bunch of precise I know that that Joe is also validated in his environment But honestly for people that want to get started for the non-essic stuff for the for the new design Folsom That's kind of it So I think really what I need is people that can bang against it I know also Derek has started looking at it for red hat, but you ran into Fedora 193 issues I don't know if you saw but I actually changed that code. There are certain puppet syntax Which is now incompatible with Ruby 193 in the latest version in the Folsom branch of Nova That's actually fixed. 
It's been changed to not use that syntax. But that's definitely what we need now: people to bang on it and see whether it meets their requirements. I'm pretty happy to change things fairly fast now, since it's basically a new code base, and I think now is a great time to get ideas in, get code in. But I definitely do need more validation on the newer stuff as well.

I'd also like to do some consolidation on HA and monitoring modules. I feel like I can point to various people who have their own HA modules built on top of this stuff, and also their own monitoring modules. I just wanted to talk to the people who have those things and see if anyone is interested in contributing or merging that stuff into core; I know I've worked with at least a few people on that.

For monitoring: what tools do people prefer? Is everyone using collectd and Nagios? I know you guys... Nagios, Nagios, whatever. Who has their own Puppet-based monitoring solutions for OpenStack? Okay, only the ones I know about.
Oh, and NeCTAR. If people are interested, I think that's a great way to contribute. First of all, maybe on the mailing list, let's figure out where those things go. I'm assuming a monitoring solution should be part of the actual puppetlabs-openstack module, and out of everything I've seen, it seems like Joe, yours is probably the most compatible, because I know you're testing it based on the same solutions.

For HA there are actually two things out there. One is a DRBD-based solution for active-passive failover of control nodes, and I know a few folks are looking at that and improving it. But I think that for HA there's also a kind of multi-host HA mode that, it looks like, required patches into OpenStack itself that will land in Grizzly, and from my perspective I would just as soon see that HA model go forward and become the standard.

Does anyone have ideas, questions, or concerns around HA and monitoring modules?
Oh, so this, I already said: this is me adding purge support to the INI-file types. I just have to sit down and do it. Right now you can create and manage things, but you can't say "only the things I manage should be there, and everything else should be removed from the file." That's just something I need to do in Ruby, and I'll do it for all the native types.

I'm going to talk about testing a little bit. We talked a lot about Vagrant for building out environments. I'm going to be looking into what makes sense in terms of continuous integration. None of the OpenStack testing people are actually here, but I'll be talking to Monty to see what they're doing. I'm also curious what the Crowbar team is doing for that, to see if anything there is reusable, especially in terms of defining deployment scenarios, or a catalog, as you called it. Who actually has their own continuous-integration environment built on Puppet-based deployments?

Interesting; you guys are using SmokeStack. I know Soren's been working on Puppet-based OpenStack deployments with Tempest running, and that's definitely what I'll be targeting in whatever we put together. Right now we have unit test results which are published, but hopefully in the next few months we'll also have full continuous-integration tests with Tempest published as well. I'd be happy if anyone wants to volunteer to help out with that process, or has a vision for all the pieces involved; I could definitely use help on it.

And the last thing I want to talk about, which is really the title of this session: how do we go upstream? Does it make sense for the modules to go upstream? What are the barriers to going upstream?
And I think I would definitely like to see the modules go upstream, but I've been talking to various people here, and there's some concern that right now the development process around OpenStack is not really based on packages, and everything I've done so far is based on packages. So maybe the first question is: do the Puppet manifests have to support source installations in order to be part of OpenStack core? Should they be installing from source? Should they even be part of OpenStack core? Or maybe some kind of SmokeStack-style post-gate testing?

Yeah, in general, as a company we just have a policy: nothing goes in from source. It's just a bad idea if you want to roll it to production. If it's not packaged, we'll make packages for it, so it's reproducible. Without that kind of packaging reproducibility, the guarantee that it's a hundred percent stagnant and reproducible, you're introducing who knows what.

And I think the other question... from my perspective, the main motivation for upstreaming would be if developers would actually be interested in using something like this for setting up development environments. Do we have OpenStack developers in the room? Maybe that answers the question. I would definitely like to live as close to the project as possible, and that's one of the things I'm doing while I'm here this week: better understanding what people's requirements are, what the barriers are, and at least what steps we can take to be as closely aligned with the process as possible.

One thing that came out of the Quantum and Tempest dev meeting upstairs in the last session was an open question: how do we test
Quantum, the networking service, in a distributed environment using existing toolsets? So, you know, I can't answer the very specific question of whether the core developers want this, but some of the stated goals for Tempest may require it. Interesting.

And I'm not a developer, but I work with developers all day on the software development lifecycle — I'm a project manager. Okay, you couldn't tell. And whether or not a developer necessarily wants it, I really think it's the way we need to go, just to take away some of the pain we have when you're working in a group, on a product, with distributed teams. You put your code out there and you've got no idea what it's going to do once it comes back down. I see this as making it easier for us to have that extra level of confidence. Interesting — I can continue to reach out and get a better understanding of what the requirements are.

And actually, I had a question for the Crowbar guys. I know that one of the sessions was about installing from source — is that tied into a motivation of being part of OpenStack?

So some of the things you might want to consider. One: yes, you do want to pull from source, because otherwise you're going to have this whole "which packages do I use, when are they going to be available" problem. If you want to get closer to the QA for Quantum and Tempest, then you need to be able to do that from source. There will not be packages.

I actually found, though, that Canonical, for example — their testing packages are pretty darn close. Are they available for Grizzly? Sorry, are they available for Grizzly?
That's my point. The other part you might want to consider is rethinking your approach to the Paste API and the conf files, because that's one of the motivations that the Crowbar folks, us included, have had for going towards pull-from-source: all of a sudden you have new configuration directives that didn't exist before. If you're relying on the package folks to create the basis for you to modify, that's not going to be available when you go to the next — what is it — G plus one.

I could see it being valid for both cases. If you wanted to go upstream, then you could create Puppet support that pretty much mirrors devstack, in which case you would be pulling from source. But then it's exactly what he said: there could be new configuration directives that aren't known, or are only known to the developers, and aren't added to the Puppet manifests yet. And I think the advantage, though, is that if you're deploying everything from source, then in theory the Puppet modules could be part of the gating process, so those changes could never even get in. Right, right. And they would be more apparent, because the builds would fail if you're pulling from source. But in a production environment, building from source is probably a huge, bad idea.

Yeah, I was just about to say the same thing — two very different use cases. For development, you don't want to be packaging; it's a nightmare, right? If you're looking at developing for Grizzly and testing stuff like that, pull-from-source is very, very useful. But again, when you go into production, it's a whole different story, right?
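One way around the Paste-config problem described above is to stop treating the packaged config as the base to edit, and instead ship the whole file from the module, so a new directive introduced on trunk only requires a template change rather than a new package. A hedged sketch — the file path, template name, and service name are illustrative assumptions, not taken from the actual modules:

```puppet
# Manage the entire paste pipeline file from the module rather than
# editing whatever the package installed. New upstream directives are
# then a template change, independent of package release timing.
file { '/etc/nova/api-paste.ini':
  ensure  => file,
  owner   => 'nova',
  group   => 'nova',
  mode    => '0640',
  content => template('nova/api-paste.ini.erb'),
  notify  => Service['nova-api'],
}
```

The trade-off is that the module now owns the full file: it works the same whether the bits came from a package or a source checkout, but the template has to be kept in sync with upstream.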
So do the Puppet manifests support both development and production? Well, probably they should. Yeah, and I think if they stably supported both Debian and Red Hat, then it would only be another step to support source from that — it's all in the back end, really.

Well, it's a little more than that, so a couple of different points. One of the things, in order to be part of a QA process that is gating trunk: you don't want to introduce new steps into the development cycle. So you might want to look at what we did, which is basically to go and figure out the dependencies out of the existing code, out of the existing resources within OpenStack, so it just requires whatever is there.

Around deploying to production from source: source doesn't mean the wild web. Actually, even if you have packages for OpenStack proper, once you start looking at repos and Python modules and all the other fun things out there, you need to snapshot a big chunk of the internet to actually get a reproducible deployment environment. So our approach is actually to go and create an ISO — create an archive — that includes all the dependencies. And guess what: the fact that the OpenStack components happen to be in there in the form of a snapshot of a Git repo, and that the nodes themselves deploy from source that we can later update based on production pushes — that's fine. So we're over