So, hi. I'm Monty Taylor. I work for Hewlett-Packard, sit on the OpenStack Technical Committee and also on the OpenStack Foundation Board. So basically I spend all of my time doing OpenStack, OpenStack, OpenStack, and saying the word cloud a lot, in a lot of contexts.

And I'm Mark McLoughlin, and I can pretty much substitute Red Hat for that introduction. I'm at Red Hat, I work in the CTO office there, I'm on the Foundation Board, and I'm on the OpenStack Technical Committee. So I work with Monty an awful lot, and both of us are here to share our wisdom.

We have so much wisdom. We have so much wisdom we're going to give you. When you walk out of this room, you're going to be full of wisdom. We're going to chat a little bit about a few things. We're going to talk about distros: what they are, and the theory behind them. We're going to talk a little bit about the theory of continuous delivery and DevOps, because I promise you I know everything about DevOps. Then we're going to spend a little time on the differences between the two, because it turns out they're dissimilar. And then ultimately we're going to talk about how both of those have a home in the OpenStack project, the things we do to deal with that, and the challenges it brings. That, I believe, leads to the wisdom we've been selling you on.

I think we'll go from talking about philosophy and world-view things toward some more concrete, OpenStack-specific stuff at the end, and hopefully that's where we'll get into some of the more concrete conclusions, I guess. Yeah. Well, this time we actually have conclusions; last time we did this, I don't think we had any. I think we just stopped talking. So this will be much more exciting, a new and improved version. We'll call it the Icehouse release.

Cool. Okay, well, I'm going to start by talking a little bit about distros. I've been working on Linux distros for over ten years now. But it's funny: with the advent of DevOps and continuous delivery, there's now a whole lot of literature about the theory and philosophy of those. It's interesting to take a step back and try to come up with a concrete description of the philosophy of distros, of how distros go about trying to create a software product.

The first goal of a distro is to take a disparate set of open source projects, all these communities building great software and all doing their work separately on different release cycles, and bring that all together into a coherent whole. We do that by applying a lot of process, a lot of rigor, a lot of repeatability, and it's from that kind of repeated process, doing this over and over and over again, that users gain a level of assurance about the quality of the distro. And one of the interesting things about distros that's really quite different is that a distro isn't one software product that does one specific thing in a fixed way. It's almost like a toolbox: you can take a distro and build a whole bunch of different things from it. There's pretty much an infinite set of use cases for a distro, and so the distro itself can't take on the task of testing every possible way it can be used.
So, distros rely heavily on their users to contribute their own testing. When a new version of the distro comes out, or an alpha version of it, users take that, test their particular use case and their particular configuration, and give that feedback back to the project. And really, that feedback is critical to making a distro work; it's what allows the product to become mature. Distros go through this process where they take the latest version of everything, bring it all together, it's all completely broken, and then gradually get closer and closer to something of good quality. At each phase of the distro's release process, different sets of users come in, try out their own use cases, and give that feedback back. So users get a choice of at what point in a distro's release cycle they want to get involved and give their feedback.

Continuous delivery, on the other hand: I get to talk about this because I'm currently running a production system, large portions of which are deployed continuously, and which some of you may interact with from time to time. Continuous delivery, as opposed to the distro stabilization cycle and all the work that goes into that, is a lot about tight feedback loops. You have an idea, you implement the idea, you get it out into production, you learn what works, you move on, and then you iterate quickly. You want to make sure the time from development to production is minimized. The longer it takes between the time you develop your change and the time it gets out, the longer it's going to take you to respond to errors or problems in the system, and the longer before you can apply the learning you get from rolling things out.

There's a heavy focus on automation, whereas there's a bunch of human selection work that goes on in the building of a distro. To be able to roll changes and new features and new bug fixes out into production on a rapid basis, you pretty much have to have automated systems. If everything were done manually, you would basically have a team of people doing nothing but running install commands over and over again, typing, typing, typing, and it would be very tedious and extremely error-prone. The automation allows people to roll things out with a much higher degree of confidence.

One of the things this leads to is that you get to consume risk in small batches. At the San Diego summit, for any of you who were there, Troy was talking in the keynote about how they roll out the public cloud at Rackspace. If you're going to get from one release to the next, at the time it would have been, what, Folsom to Grizzly, then you have a couple of choices. You can wait six months, and then when you roll out the next release, roll out six months' worth of changes all at once and take a really big hit in risk and in how much has changed, because you're going to have to roll out all the changes together. Or you can roll them out the entire time they're coming, which means that each one of them is potentially much more understandable.
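To make that small-batches point concrete, here is a minimal sketch in Python, using SQLite from the standard library, of schema changes applied one small step at a time. The migrations, table names, and `schema_version` bookkeeping are hypothetical illustrations rather than OpenStack's actual migration machinery; the point is only that a trunk-chaser runs each tiny step the day it lands, while a release-to-release consumer runs months of accumulated steps in one bigger, riskier batch.

```python
import sqlite3

# Hypothetical ordered migrations; real projects generate these from
# version-numbered files. Each step is deliberately tiny.
MIGRATIONS = [
    (1, "CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE instances ADD COLUMN host TEXT"),
    (3, "CREATE INDEX idx_instances_host ON instances (host)"),
]


def current_version(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0


def upgrade(conn, target):
    """Apply every pending migration up to `target`, one step at a time."""
    for version, statement in MIGRATIONS:
        if current_version(conn) < version <= target:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            conn.commit()  # each step commits on its own: one small batch of risk


conn = sqlite3.connect(":memory:")
upgrade(conn, target=1)  # a trunk-chaser runs each step as it lands...
upgrade(conn, target=3)  # ...a release consumer runs months of them at once
```

Either path ends at the same schema; what differs is how much risk any individual deploy carries.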
It's not just that you read the release notes and say, okay, these are the five new features in this release, or rather the 20,000 new features in an OpenStack release. This way you get them one at a time, down to the level of individual patches, so you can understand really specifically what each one's impact on your environment is and what you might have to do to mitigate or migrate. If you think about database migrations, as in that sketch, six months' worth of migrations potentially means a much longer and costlier downtime than if you ran each one as it came along.

The other thing you get in a continuous delivery model, or that you sort of have to have to make it work, is shared responsibility across dev and ops. This is, of course, our favorite word of the decade: DevOps. Specifically, if you're rolling these things out quickly and the developer doesn't share responsibility with the operator, isn't part of that tight feedback loop, then the developer's going to go off in the corner and write some crazy stuff, it's going to break the databases at midnight, and the ops guy's going to be up fixing them. And the next time the developer writes a change, the ops guy's going to be like, yep, I'm not deploying that. I don't believe you. I got paged at midnight the last time we did this, and I didn't really like that. So you have to co-locate and conjoin those operational duties, so that the developer who's writing things has insight into what the effects are, actually learns some of the pain, and gets to share some of the joy of a lovely deploy that has absolutely no problems in it. And the same with the operator: the person coming from the operational side, by working with the developer on the changes, also shares some responsibility, some culpability, for what they're going to do, and the two of them can develop an amount of trust between them.

And I guess the intent behind all this is to practice what some people call hypothesis-driven development. The business driver behind all of this is that you want to be able to have an idea, get it out to your users, and test whether that idea works well. All of this is important for that: the tight feedback loop allows you to get that feedback, consuming risk in small batches means you're trying out ideas in small chunks, and the shared responsibility is dev and ops together sharing the goal of trying an idea out and getting the feedback back.

Oh, looks like we're going to compete with somebody else for talking. That's exciting. So, in looking at how these two different world views compare to each other across a few things, testing is a good one. As Mark said earlier, in the distro world the testing is largely done at the user level. That's changing some, but the sheer number of combinations of things you could install from a distro means you can't possibly test all of them. So you get increasing waves of testing as it gets closer and closer to release, closer and closer to maturity, and more and more people use it.
That distro-side testing is the stereotypical open source story: many eyeballs make bugs shallow. On the continuous delivery side of the page, this is all done in a highly automated fashion, because you're trying to catch these things before they roll out. You also have a constrained problem set: you're deploying a cloud, for instance, not 700 theoretical clouds. Although if you are deploying 700 theoretical clouds, I'd like to hear about that, because that sounds like a lot of fun.

The approach to testing, on the distro side or the continuous delivery side, is driven by the constraints of what each is trying to achieve. If you've got an infinite number of use cases, you can't hope to automate your testing of them. And if you hope to do rapid releases, you can't rely on manual testing.

And I guess we've covered some of this already. In the continuous delivery viewpoint, in the DevOps viewpoint, you're typically building and delivering software for one very specific use case. If it's going to be a service, perhaps there's only going to be one instance of that service, or maybe there will be multiple instances, but they're all going to be configured essentially the same way. Whereas in the distro sense, you've got this almost infinite set of use cases, and it's hard to predict all the ways users are going to use it or which use cases are most important to them. There aren't a hundred Twitters; there's one, and they deploy it their way.

And I mentioned this on the continuous delivery side: in continuous delivery you want almost constant, predictable, low-risk deployments. You want to make sure you understand very specifically what the risk level is, and it's okay that there's some risk; every deploy has some. You want it to be predictable, because you're going to be doing this potentially multiple times a day, and in doing that you're able to mitigate the fear. If the risk is a known quantity, you can deal with it. Yeah. Whereas with the distro, the risk level is much higher. If I get a new version of the OS on my laptop, it's possible they've decided to remove the clock applet that shows my time, and that's a big risk I don't actually know about until I upgrade my laptop and discover that half of my features have gone away.

We've shown this as a graph on the next slide. Here we're trying to show what we mean by different risk levels. In the continuous delivery world there's a varying level of risk, but it's pretty much a straight line. With distros, you get these releases happening, and if you consume a new release you're consuming a massive delta along with it. But as the release matures, the risk level goes down, I guess.
And in the distro sense, in the release sense, users can choose at what point on that graph they want to rebase to the new release. They can choose their risk level: they can consume the alpha releases and know there's going to be a large amount of risk with that, or they can wait until maybe 18 months later, when there's a point release of RHEL or something like that. It's up to the user what risk level they want.

Yeah, and the idea of user choice actually feeds really nicely into the idea of feedback loops. In the continuous delivery model, the users or operators of the software, the deployers, have chosen to buy into being part of the feedback loop. It's very important for users not just to install a thing but actually to be part of the creation of that thing, and this is one of the ways to get the feedback loop. With a distro, on the other hand, I might very well passively consume, as an end user, the distro running on my laptop, and I may never actually submit bug reports back, even when my clock applet goes away. I might just complain to people at the bar, which doesn't get developers to fix anything. It's just fun to bitch about over beer.

And it goes to that choice again. With distros you have this choice of at what level you want to start doing your testing: at the alpha releases, or at the very stable releases. You're choosing how tight you want your feedback loop to be, based on what you're willing to do. Whereas with continuous delivery, you're almost explicitly part of this tight feedback loop. There's pretty much no point in doing continuous delivery if you're not willing to give very rapid feedback back to the developers.

Another form of feedback, over and beyond me telling Mark that my clock applet has gone, is actual contribution. It's essential in continuous delivery: you have to contribute to the project to get the feedback loop. In a distro, you can totally choose your level of engagement. You can go and develop on the distro, but by and large a larger majority of people are going to consume rather than participate. Yeah, and to be absolutely clear, we're not talking here only about contributing code. We're talking about contributing to docs, filing bugs, giving feedback in terms of sharing what your configuration is, all of that kind of stuff. In the continuous delivery world you really have to be in there, giving as much feedback and as much contribution as you can, whereas with distros you kind of pick whatever you want to contribute.

I'm going to let you take this one, because it involves name-dropping people and apparently you're extremely bad about that. Yeah, so Monty and I first gave this talk a few weeks ago at Red Hat's DevNation conference, and one of the talks there was about DevOps. In DevOps there's this notion of stopping the line; the analogy given is Toyota's manufacturing line, where they have this thing called an andon cord.
So the idea is that if anyone notices a defect on the manufacturing line, the andon cord gets pulled, everybody stops, and everybody takes shared responsibility for fixing whatever that issue is. That notion is very prevalent in DevOps: if at any point in your delivery pipeline, say you've made a change and you're getting it out to production, if at any point that pipeline breaks, everybody on the team has a shared responsibility for fixing it. In the distro world there's no sense of that. There's no ability for a user of a beta release of a distro to say, I'm pulling the andon cord, you've broken my Postfix configuration, everybody stops what they're doing until you fix my issue. That would be really awesome, actually: everyone involved in the distro, please stop what you're doing, my mail configuration is broken.

So this is an image of the traditional pipeline. You've got some changes happening; they go into a dev environment, then a preprod environment, and then eventually roll out into a production environment. At any point in this pipeline, and this is exactly what Mark was talking about, if you have issues at these transition points, you stop, you don't roll out to the next stage, and you fix it right then.

The interesting thing in the distribution world, especially as we start getting a little closer to talking about the combination of these two things, is that it actually looks a little bit more like this. You don't just have one consumer, one endpoint where your software is going to go. It's going out to many different people, and those people are going to be consuming it at many different points along its cycle. You might have an early adopter who's on a further-along release next to a person running a different one, so you've got a much more scattered picture. And this is a little bit of the ludicrousness of it: nobody on any one of those lines, outside of that circle, actually has the ability to stop anything running on any of the other lines; they're all distinct from each other. If it breaks somebody over on the right, the people on the left are also going to get broken, because it's already out in the wild, it's already been released, and they're just going to have to deal with it.

So, it's entirely possible we should talk about OpenStack. I mean, it's kind of a weird thing to talk about here. I guess this is the AWS summit, right? No?
Okay, sorry, bad jokes. So, as an intro to this, part of the premise of this whole thing is that OpenStack actually shares many features of both models, which is a particularly interesting thing. Yeah, I guess to be clear, that's what we're trying to get at here. There's this kind of tension, a positive tension almost, within the project, because we have two types of users. We have what we call trunk chasers, those who are trying to stay really close to trunk, trying to do continuous delivery. But we also have the distros and the vendors and the users of distros, the more traditional distro model. And we've got a release process, and we're trying to get to the root of how that tension is balanced, I guess.

So, on the off chance that some of you don't know this: we have a time-based release model here in OpenStack. We have many projects, which makes that little circle graph even more interesting if you think about it, and a growing number of them, and a growing number of people who come to this; the sheer number of people here in Atlanta is boggling my mind.

We do keep a stable branch, and we have not always had one in OpenStack. I believe the original conversation was in Boston, at the Essex summit, when Thierry told me I couldn't get out of it. The distro fellows came in and said, hi, we're releasing something to our users, they're getting it after it's been released, they're finding bugs and reporting them back to us, and we would like to put those bug fixes somewhere, please. It was the first time we had ever decided we were going to actually maintain a stable release branch, and it's been a disaster ever since.

Part of that notion that a project should have a stable branch is the distro mindset: we've got a bunch of projects here, we're doing this time-based release process, and when we do a release, that's not the end of it. When we do the Icehouse release, we want the ability, as a project, to do updates of it, so that release can continue to mature independently of the development branch. And that's really analogous to how distros do things too.

But from the continuous delivery perspective, from the trunk, master-branch development perspective, releases are sync points. They're not an end. I wish it were time to throw a party and then go home and just take a vacation, but that almost never happens. We need to sync, and then we need to just keep up the ongoing work. So around that time we tend to taper the feature work to take care of the distro folks a little bit better. This gives us time to sync up on things like documentation and internationalization; it's kind of tough to document things before they've actually landed, sometimes, so we've got to give the docs folks time to catch up with us. We need to do things about marketing; we need to let people know what's coming, what they're about to get. And part of the idea with tapering the feature work is the notion that, okay, we do a lot of automated testing, but users use our software in ways that we don't do automated testing of. So we need this period where everybody slows things down for a little while and gives users the opportunity to try out the use cases we don't have automated test coverage of, and to get that feedback in before we do a release. That's the idea with tapering feature work.
The last bullet point here I'm currently finding a little bit funny, because every six months, as you clearly know, we have a summit, and it's the "take a breather" part that I find particularly amusing: the last thing I'm doing this week is taking a breather. I feel like I want to die from being too tired. But in theory this is supposed to be a time to pause and reflect upon what we've done, and then plan for the next six-month cycle. Instead it seems to be the time where we get together and go to Piston parties.

But the reflection that does happen at these design summits really is important. If you go to any of the design summit sessions; we had one yesterday on a real meta issue, about how we all work together as humans, and how we have more empathy for each other when somebody is trying to contribute work that isn't appropriate, whatever it may be. That opportunity to have that kind of reflection is just really important to how this project works, and if we didn't have these synchronization points we wouldn't have that opportunity. We also wouldn't have the opportunity to see Josh McKenty dance in a gold lamé body suit, which I hope everybody saw. So there are some cultural aspects to what we share.

You're supposed to do this one. We got our sequencing backwards there; we were reviewing the slides before this and we were a bit confused about the point we were trying to make here. So the point I'm trying to get to is this: if you do a lot of reading about DevOps, one of the things you'll hear is John Willis talking about CAMS: culture, automation, measurement, and sharing. And he always says it's CAMS, not AMS. It's not just about all the technical stuff around automation, metrics, and sharing; the culture is really critical to being able to have the kind of high-performing organization you're aiming for with DevOps. And what we're trying to get at here is that we have a good amount of the culture that's required for that within the OpenStack project. We've got this emphasis on automated testing, and it's so valued within the developer community; that's actually, I think, highly unusual in open source projects, this love of automated testing and the feedback from it, and the insistence that we're going to stick to it even if the gate breaks and people can't merge patches for a couple of days. We're willing to take that pain, and I think that's a really positive aspect of our culture that's really important to being able to do some of this stuff.

"If it's not tested, it's broken." It's not a complicated sentence, but I think it was Russell that said it a little while ago, and I think it's extremely apt. You can see this reflected in a lot of the content on people's various t-shirts that have been made for the summit this time. "But it worked in DevStack" comes to mind. That's the type of thing that arises out of a culture of people who are clearly doing these sorts of activities, and these are the things we share with each other about our shared experience of dealing with them.
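To illustrate that gating culture, here is a minimal sketch of the merge-gating idea: a proposed change is tested as it would look after merging, and a red result blocks the merge outright. The function, the Gerrit-style change ref, and the `tox` invocation are illustrative assumptions; OpenStack's actual gate is Zuul, which does this speculatively across whole queues of changes, but the core rule it enforces is the same.

```python
import subprocess

# Hypothetical pre-merge gate. OpenStack's real gate (Zuul) is far more
# elaborate, but it enforces the same rule: a change that fails the tests
# when combined with current master does not merge, full stop.


def gate(change_ref, test_command=("tox", "-e", "py27")):
    """Test a proposed change as it would look merged; merge only on green."""
    subprocess.run(["git", "fetch", "origin", change_ref], check=True)
    # Build a throwaway merge of master plus the candidate change.
    subprocess.run(["git", "checkout", "-B", "gate-test", "master"], check=True)
    subprocess.run(["git", "merge", "--no-ff", "FETCH_HEAD"], check=True)
    if subprocess.run(list(test_command)).returncode != 0:
        print(f"GATE FAILED for {change_ref}: nothing merges until it's fixed")
        return False
    # Green: fast-forward master to the tested merge commit.
    subprocess.run(["git", "checkout", "master"], check=True)
    subprocess.run(["git", "merge", "--ff-only", "gate-test"], check=True)
    print(f"{change_ref} merged")
    return True


# A Gerrit-style change ref, purely illustrative:
# gate("refs/changes/42/4242/7")
```

The same reflex extends past the merge: a continuously delivered cloud can run a handful of automated smoke tests after every deploy, which is exactly the kind of coverage a distro with an unbounded set of use cases can't hope to automate.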
There's a point in all of that which is really important too, and maybe not generally appreciated. I'll use the example of Rackspace, and this has kind of been true from the very, very early days of the project. These guys are trying to run trunk, pretty much, and they've built this kind of relationship and trust with the developers, the upstream developers, not necessarily Rackspace developers, such that if they try to deploy trunk and find that it's broken, they can very quickly get that issue reverted. And that opportunity is really available to all operators: to build that kind of trust and that feedback loop. The developer team really has the culture to embrace that and respond rapidly to that kind of feedback.

Right. So, we have some challenges we have to deal with in the OpenStack community, or, what is it? Opportunities, I guess that's what you call them, not challenges. We have a very highly active developer base. This is a view of the last 120 hours; I think I made this graph two days ago, so the little spike on the right-hand side is actually Monday. You can see that even here we've got developer activity during the summit, even when people are here. Obviously we had a whole bunch of it last week, but we've basically got a constant, steady stream of activity happening. The green line at the bottom, which might look a little bit more like the continuous delivery risk line from earlier, is the new patch sets being uploaded into the system. The other lines are the code review comments, so we have a lot of that going on, but it's a fairly constant rate. And actually the line below it is the fairly constant rate of patches getting merged. So we're not waiting until the end of the week, we're not waiting until the end of the month to do a big merge window, although I know that's a fun game some people like to play.

We have to handle this at scale. The last time we measured it, we were landing about 10,000 changes every 42 days, which works out to roughly 240 a day, and which is quite astounding. That presents some challenges to the folks following a continuous delivery model, but it also means the work is broken up into pieces small enough that we can actually land those changes.

Absolutely. I think it kind of shows that, with this volume of change happening, you as a user or an operator have two choices, I guess. You can follow the distro route, choosing to consume maybe every six months and dealing with it that way, or you go the continuous delivery route. With such a volume of change there's kind of no halfway house: you've either got to really go in for the continuous delivery route, be willing to give that feedback upstream, and really build that relationship with upstream, or you go the more distro route.
We've also got all of those changes coming in, our developers producing 10,000 changes every 42 days, but it's not just our software. OpenStack is, I guess, aptly named, with the word "stack" in it: it is a stack of things. There's a bunch of Python code we wrote; there's a whole bunch of Python libraries, I believe somewhere over 100 or 150 of them, that are non-OpenStack; and there's also a whole bunch of other distro things, like your various C libraries, your databases, MySQL, RabbitMQ, the kernel, libvirt, whatever it is. Each of those things also has its own life cycle. So in a way, quite apart from the methodology we use to deliver OpenStack, OpenStack itself is a distro in many ways, because we're collecting in life cycles from other people. Even if we were one cloud, even if we were all operating one cloud in a perfectly DevOps manner, we're getting changes from people who aren't us. So, more frequently than I might like to admit, we do get ourselves messed up when an upstream Python library makes a breaking change without incrementing its major version number. This happens; this is a real thing. And just as there isn't a direct cord that can be pulled to stop things for those of us doing continuous delivery, there's this other set of things feeding into the stack that are also out of our feedback loop and out of our control, other than pinning things directly.

Cool. I guess the constant theme this week, and I think I was in a design summit session about this maybe an hour ago, is upgrades. Clearly, in terms of people consuming what we deliver: if you're going the continuous delivery route, you have to have an approach to upgrades that allows you to consume these changes regularly, whereas with distros you're trying to do upgrades from release to release. And I guess the point here is that we attempt to support and facilitate both types of upgrades. We do have multiple people running continuously delivered clouds, so we basically have to support commit-to-commit upgrades. We have RPC API versioning: every change to the interfaces between components, within some of the projects, is versioned, to allow mixtures of versions to run together. And we also have this recent effort, we're trying to perfect an approach to database migrations that allows us to do commit-to-commit, no-downtime DB migrations.

But getting this right for both use cases is the challenge we have here. Doing commit-to-commit is all well and good, but if you've got somebody consuming from a distro, that person may be going Grizzly to Havana, or Havana to Icehouse. So on every commit, we've got to make sure the deployment can get there from the last commit, but it's also got to be able to migrate and get there from the last stable release as well. We sort of have to do the double-duty thing, which is pretty exciting.
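To give a flavor of what that RPC API versioning enables, here is a minimal sketch, loosely modeled on the pattern projects like Nova use, of a client that caps outgoing messages at the newest version every running service understands. The class, the method, and the version numbers are hypothetical illustrations, not the real oslo.messaging API; the point is that during a rolling, commit-to-commit upgrade, new clients keep speaking the old dialect until the whole deployment has caught up.

```python
# Hypothetical sketch of version-capped RPC, loosely in the style of
# OpenStack's RPC API versioning. Not the real oslo.messaging API.


class RPCClient:
    """Sends a method call tagged with the API version it was written against."""

    def __init__(self, transport, version_cap):
        self.transport = transport      # stand-in for a message bus
        self.version_cap = version_cap  # newest version all peers understand

    def can_send(self, version):
        # Compare "major.minor" strings numerically, not lexically.
        want = tuple(int(p) for p in version.split("."))
        cap = tuple(int(p) for p in self.version_cap.split("."))
        return want <= cap

    def start_instance(self, instance_id, on_shared_storage=False):
        if self.can_send("1.1"):
            # Version 1.1 added the on_shared_storage argument.
            msg = {"method": "start_instance", "version": "1.1",
                   "args": {"instance_id": instance_id,
                            "on_shared_storage": on_shared_storage}}
        else:
            # Fall back to the 1.0 form an un-upgraded service can decode.
            msg = {"method": "start_instance", "version": "1.0",
                   "args": {"instance_id": instance_id}}
        self.transport.append(msg)


bus = []
# During a rolling upgrade the cap stays at 1.0 until every node is upgraded.
RPCClient(bus, version_cap="1.0").start_instance("inst-42")
RPCClient(bus, version_cap="1.1").start_instance("inst-42", on_shared_storage=True)
print(bus)
```

Once every node runs the new code, the operator raises the version cap and the richer message format takes over, with no flag day in between.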
Ultimately, though, a lot of what makes these things work, and a lot of what makes a continuous delivery model work, is the feedback. You mentioned that earlier: you want the feedback loops to be as tight as possible, you want to get the feedback from the people who are consuming this so that you can make your product better. And you need this in the distro world as well; the timing of it just works a little differently. So what really winds up happening is that, rather than there being a clear and open view of how the software gets delivered, there's this little thing in the center here that can be a bit opaque. Not everybody in the world knows everything we're doing inside our development process at OpenStack, even though it's all open and on the internet and you're all welcome to participate. If you're consuming the software, that doesn't mean you necessarily are participating. And if you're not participating in the feedback loop, if you're not contributing back or talking to us or working with us, then the feedback loop is a little bit messed up. So what you then kind of find is that, from the outside, this developer community can seem like a bit of an impenetrable mass, and this feedback is trying to get into that impenetrable mass. At least, that's the perception; that's the feedback we're getting.

Talking about feedback: I guess this is one of the big challenges we constantly have to embrace, and we're trying to work through it. How do we allow this kind of feedback to get into the project, and how does the project avoid being seen as so impenetrable and opaque? What we actually want is this: if we can get these feedback channels working, if we can get the people who are consuming this, whether on the distro model or the continuous delivery model, to get that feedback in here, into the middle where the development action is happening, then we actually have a better chance to react quickly. Occasionally this works well, like Mark's example of our public cloud operators being able to come in and say, hey, listen, no seriously, you just broke my cloud, we have to fix that. But this is available to everybody, and this is really the place we're wanting to get to.

And ultimately, in order to do that, it's about trust. You've got to trust us enough to give us feedback. We're giving you software, but there's feedback coming back. We were just talking recently about trying to get logs and configs from our operators, because we have some questions about what they're actually doing, and for them to start giving us their production logs and copies of their production configs takes trust. That's what gets the feedback loop happening.

I think this is one of the biggest learnings the project has to take from DevOps and DevOps practices. It's not a case of go read Jez Humble's Continuous Delivery book and say, okay, that's what OpenStack should do. It's more the meta question: in DevOps you're trying to build this sense of shared responsibility, this trust between the developer side and the operator side. And I think we're embracing that problem with OpenStack, and we're starting to see some positive changes. I really enjoyed the operators summit this week, seeing some of the operators now being willing to really talk openly about their deployment architecture, their network architecture, and the problems they've seen, to share information between them, but also to trust the developer community with that feedback, I guess.

Exactly right. And ultimately that in and of itself becomes a feedback loop. The more that sort of trust happens, the more we work together, the more we work with the folks in the community, and then ultimately we can all participate together in making sure that each of these releases, and each of these commits we're landing, is actually doing the things we all need it to do. And with that, amazingly enough, we are pretty much exactly at time. This is
our big hope for the future, this big hope of trust. We're seeing more progress on this, and we want to continue to see more progress. That's right. Anyway, thank you very much.