way the light looks like an aura. Of course it's gonna be a great aura. How was everyone's lunch? It was a really good lunch, I thought. Especially after Paris, the food in Paris. I love Paris, it was great, but the lunches were not okay. Well, see, I prefer tequila.

Hello, everybody. I am Rania Moser, and you are here to hear about service-oriented deploys at scale in an OpenStack public cloud. This is part of what has become a series of talks on Rackspace learning to scale OpenStack. My name is Rania Moser; I do believe I said that. Up until about four weeks ago, I was a software development manager at Rackspace in charge of the team that does the build, release, and deploy systems. Just last month, I transitioned into a product management role on the digital side. So now I get to build stuff on top of the infrastructure, which is just what I had wanted, and the whole reason I got involved in OpenStack in the first place about three years ago. I'm excited to be here and give you some insight into what we were up to, from a deploy point of view, during the Kilo cycle.

When I was trying to figure out how to start this talk, before someone gave me the tequila line, I thought about pulling up the space unicorn and the technical-difficulty unicorn that I used in Paris to explain where we are as an operator community. We want to get to this shiny rainbow world of bringing goodness and light to all the world with OpenStack, but there are still some technical difficulties to work through. Then I was watching Jonathan this morning and grabbed a few screenshots from the live stream of him talking about compute, network, and storage. Now that I'm up at the application layer, this has crystallized even more for me. Kilo is the eleventh release. And software: every company must build software to compete. I'm starting to understand how the application layer really works on the digital side, with web content management and e-commerce applications that need to be built on top of infrastructure such as OpenStack. These are non-technical people. They are brilliant marketing people, but they're not technical very often, so really abstracting this layer away from them is essential.

But first, our story thus far. My first summit was back in San Diego; I joined three years ago, during the Folsom cycle, and helped launch the public cloud. We did not have PhDs, but it would have been really helpful if we did. There's actually a great little Twitter thread going on with Mark Interante at HP and all of us that were there back in the day, three years ago, reminiscing. There was much rejoicing when it launched. Then by February of 2013, a short six and a half or seven months later, our ability to upgrade the control plane of the public cloud was crippled. At the scale we had gotten to, above 10,000 hypervisors and computes and control plane nodes, we couldn't keep up anymore. We were all very sad pandas, angry kittens, because we basically had to rebuild the whole thing from scratch again. And we would do this not once, not twice, but three times over the course of the next two to three years. By the time we got to the Grizzly release and the Portland summit, I gave the first learning-to-scale-OpenStack talk.
We had overcome some substantial challenges in the Grizzly cycle, from our own deploy mechanism as well as some particularly interesting database migrations that made it quite difficult to continue. And we were able to get up on a summit stage and share the triumph of it with a growing operator community, because an operator design summit was still a pipe dream at that point.

Fast forward to the Havana release, October of 2013, and things were going really well. Paul Vocio, who is now a VP of engineering for the public cloud infrastructure, gave a talk in Hong Kong about where we were, and we just kept going forward with our frequent, consistent deploys. We were doing them with less than 30 minutes of customer impact. We didn't have an easy button yet, but we knew the direction we wanted to go to get there. Then continuous integration comes into play; that was the theme after the Icehouse release, at the May 2014 summit. A colleague of mine, Brian Lamar, gave a talk that really asked, how do you do CI, continuous integration, from upstream OpenStack down into your own pipeline? And that was the first time, I think, that we as a community started to isolate and address the idea that upstream is a great CI gate, but we also have to have a downstream gate for those of us deploying it. We're almost there. Juno, last November in Paris: we had the realization that our release engineering systems and processes were actually mature enough to put on a slide and talk about for 30 minutes and explain to people. It was a great feeling. It was a really great feeling. And that's when I went and looked for another job. That was also the first time I actually mentioned microservices in the context of OpenStack. So that brings us here today, to the Kilo release and the Liberty Design Summit, to talk about what we've done with the implementation of service-oriented deploys in our public cloud. As a caveat, at this point we're really just at the project level, with Neutron and Glance being separated. But now that they're separated from Nova, further iterations will keep getting it smaller and smaller.

So, first, for those of you that aren't familiar with the Rackspace public cloud and have never seen one of these talks before: we have six data centers that are our production regions, across four countries and five time zones. We also have half a dozen or more lower-level environments for CI, test, and pre-production. All of this together is well over 20,000 hypervisors on the data plane, the computes controlling all the virtualization, and then over 2,000 control plane nodes doing all the OpenStack magic.

Before we go into the details of how the services work, I want to give you a high-level overview of how the Rackspace cloud functions, specifically in the context of what we're talking about when we deploy or upgrade. It's only taken me three years to be able to explain it. So we start with the control plane. This is actually that series of 2,000 virtual machines running on an internal OpenStack cloud that we at Rackspace call INOVA.
And these VMs have all the different Nova services, the Glance services, the Neutron services: the APIs, the databases, the registries, the scheduler, the cells. All of those are on an internal cloud. We marry that up with our data plane. At Rackspace, that's XenServer for the compute side and OVS for the network side. Is it still called OVS? I think it is. Okay. And then OpenStack magic happens. Now, there are great talks and articles you can read that go into the extreme level of complexity it actually takes to get through all of this. We're not going to go there right now, but there's great material walking you through all the different services and how they work.

So now you have a cloud: a control plane and a data plane. And a customer can make a request to that control plane, to the Nova API: I want to build a server. In this case, let's say a web server. So she makes the call to the control plane node through the API or a control panel, OpenStack magic happens, and out pops a VM. This is the customer VM. Now she can access that VM, her instance: SSH into it, install her applications, do whatever she needs to do with it.

This is where my system comes in, as the build, release, deploy system. Incidentally, the acronym comes out as "birds," which I really like, because it makes the build, release, deploy engineers "birdies," and they don't like that. But this is what the system is actually upgrading: just the OpenStack services on the control plane. We're not touching the data plane at all. There are systems and processes to patch XenServer, patch OVS, and do all of that, but that is not what we're talking about here when we talk about service-oriented deploys. So that means our customer's internet user, who's hitting her website to shop for jewelry or learn the latest celebrity gossip, is not going to experience any kind of downtime or disruption when we do this OpenStack service upgrade. Now, our customer will experience some degradation during the upgrade; the API may be completely unavailable for her to resize her server, create a new server, do things of that nature. But that customer instance, if it's already there, isn't going to be upset during all of this deployment work we're talking about.

Okay. So now you know where the cloud is, how it works at a high level, what the build, release, and deploy system does, and which part it interacts with. This is the build, release, deploy system. If you go back on the Foundation's YouTube channel, there is a whole talk basically dedicated to this diagram. I'm going to go through the eight different steps very quickly, just to give you an idea; it hasn't changed that much since November, so if you want to learn more, you can go back, find the Paris learning-to-scale talk, and get the details there.

We start by pulling upstream code. The upstream OpenStack gate is absolutely central to starting our build and release process; they do a lot of great work and a lot of great tests, and we couldn't do it without them. We do, however, have to carry some internal patches and some changes. Sometimes that's in-flight feature development that is taking a longer time to get through the open source development process. Sometimes it's code we need to make things work with an internal integration point, such as a billing system, that doesn't belong upstream.
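Just to make that pull-and-patch step concrete, here is a minimal sketch of what it could look like if you expressed it as an Ansible play. This is purely illustrative, not our actual pipeline; the host group, paths, branch, and patch file names are made up.

```yaml
# Illustrative sketch only: pull upstream, then layer on the internal patches
# we still have to carry. Host group, paths, refs, and patch names are hypothetical.
- hosts: build
  vars:
    neutron_ref: "stable/kilo"                # hypothetical upstream branch or tag
    patch_dir: /opt/build/patches/neutron     # hypothetical internal patch queue
  tasks:
    - name: Pull upstream Neutron at the ref our downstream gate has approved
      git:
        repo: https://git.openstack.org/openstack/neutron
        dest: /opt/build/src/neutron
        version: "{{ neutron_ref }}"

    - name: Apply the internal patches on top (billing hooks, in-flight features)
      command: git am {{ patch_dir }}/{{ item }}
      args:
        chdir: /opt/build/src/neutron
      with_items:
        - 0001-internal-billing-integration.patch   # hypothetical
        - 0002-feature-still-in-review.patch         # hypothetical
```

The point is simply that the upstream pull and the downstream patches stay two separate, reviewable steps, so you always know exactly what you're carrying downstream.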
So we're always going to have some patch management that we need to do. It is technical debt, and it's discouraged, but as long as you use it strategically, you can manage it and keep it healthy. We pull in our project configs, which are currently split between Ansible and Puppet; we'll go into quite a bit of detail about how we're doing config management, because the transition between those two platforms was a key part of service-oriented deployment success for us. We have a few different systems that we use for tracking issues, and we can cross-reference them through a very basic downstream change gate that lets us keep things moving forward. We push it all up to an enterprise GitHub that's hosted at Rackspace, then package it all up: virtual environments compressed into tars. And then we do deploy orchestration. Eight easy steps. The deploy orchestration is all Ansible playbooks, plus Jenkins for abstracting it away and giving people a button to push.

So there we go. That's the introduction: our cloud, how it's architected, where it is, and how we got it there. I don't want you to think that we even remotely started here. This is actually maybe about halfway through our journey of operating OpenStack at scale.

One of the last thoughts I had in Paris was this quote from Martin Fowler, who works for a company called ThoughtWorks and is one of the thought leaders right now around microservice architecture. It's really this idea of independently deployable services with common characteristics that are focused around a business capability. Then recently, as of December of last year, Sam Newman, who is another ThoughtWorks employee, published the O'Reilly book Building Microservices, and has given a great talk, linked at the bottom of this slide, on the principles of microservices. The book is great; I was able to get a proof copy. And as we were starting on this journey of splitting our monolithic packages into independent, per-project, service-oriented deploys, this is what we realized we were doing naturally, because we only found the book about halfway through. It's the kind of natural evolution you arrive at as you stumble through it. Sam is great, I've met him, he's a brilliant Australian, but this is still a lot of theory. It's easy when you just have a spider diagram to say, yeah, do these things and it'll work.

So really, the last half of this talk is going to be: what did we actually have to do to make the shift from a monolithic package, with all of the OpenStack projects, like here Nova, Glance, and Neutron, packaged up together in what we call the rackstack, and actually start to split those apart? So that now, instead, we have Nova plus Neutron plus Glance able to go on their own, validate with each other, but be far more independent than they were previously. And what we found, looking back retrospectively, is that to get to a service-oriented deploy we needed to tackle config management; that was absolutely the very first thing. We were using Puppet, and we had to move to Ansible so that we could use role-based definitions and then call those roles in the orchestration. We had to redo the packaging: everything was packaged in individual virtual environments, but then they were compressed into a single tar, and that way you couldn't deliver the virtual environments independently. The orchestration had to be revamped.
The current orchestration assumed that the cloud was operating as one big whole, with the Nova, Glance, and Neutron services together, so we had to break that out. And then, finally, there was a whole bunch of culture change. The culture change is arguably the most important piece, and the piece I would do differently if I could go back in time: I would have handled the change management, the organizational change and the process change, a little more deliberately and with intention than I did in letting it happen organically.

All right, so configuration management. What did we do? How did it look going from Puppet to Ansible? This is our Puppet-based config repo from about a year ago: we've got masterless Puppet up at the top, and the three services are all there, Nova, Neutron, and Glance. Within each project area was the manifest directory, and I just have it called out for Neutron, as that was the first one we did. You have your DB for host access, the plugin for NVP, which is now NSX, which is the OVS implementation on the data plane of the network layer at Rackspace. Then you have your actual Puppet manifest scripts: the DB, the init, and the server. This was all one giant configuration directory that was interwoven and connected, and a pain in the butt to maintain or make any changes to.

So about six months ago we tackled switching this, removing Neutron from the Puppet master directory and moving it over to Ansible. And now it looks like this. It's actually a lot more complex in terms of the directory structure, but the important thing to notice is that this is just Neutron. There's no Nova, no Glance, none of the other system-level implementation details. You've got your high-level YAML files and playbooks for Ansible for the Neutron roles as well as the Quark roles. Quark is a Neutron plugin that operates as an agent at the compute level to handle the routes and the ports and the connections through Neutron. I believe it is open source; if anybody wants to take a look at it, it's in the rackerlabs repository.

So now we have a really great configuration management structure. All of the different items are defined based on roles, and when it comes time to orchestrate later on, we can reference the role rather than having to wade through gobs of Puppet code. And when it came time to actually add the Quark agent to this setup, because it didn't start off as a compute-level agent, they were able to just drop it in with a couple of files, and suddenly it was working. Whereas previously, in the Puppet way we were handling things, it would have been a nightmare: you would have had to worry about whether you were impacting Nova, impacting Glance, or some of the other internal things we're using. This way, they were able to define their requirements and go for it.

So this allowed us to go from Puppet, on your left, to the Ansible definition, and it took about three months to make that conversion. Our lesson learned is to start with a one-to-one conversion first. Absolutely resist the urge to make changes in the functionality as you go. That's really hard, I know. You wanna tinker and say, oh, I could do this better, but it's really important to get your as-is baseline functioning and working, and then you can iterate and improve.
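To give you a feel for what that role-based structure buys you, here is a stripped-down sketch of a playbook and a role task file written in that style. The role names, group names, and file paths are illustrative; this is not our actual Neutron config repo.

```yaml
# site-neutron.yml (sketch): the roles keep the project self-contained
- hosts: neutron_api
  roles:
    - neutron-common    # shared bits: neutron.conf templating, service user
    - neutron-api       # the API service itself

- hosts: compute
  roles:
    - quark-agent       # the Quark agent handling ports and routes at the compute level

# roles/neutron-api/tasks/main.yml (sketch, shown here as a separate file)
- name: Lay down neutron.conf from the role's template
  template:
    src: neutron.conf.j2
    dest: /etc/neutron/neutron.conf
  notify: restart neutron-api   # handler would live in roles/neutron-api/handlers/main.yml

- name: Make sure the Neutron API service is running
  service:
    name: neutron-server
    state: started
```

Adding the Quark agent then really is just a new role directory plus one line in the compute play, instead of threading it through a shared manifest tree that also knows about Nova and Glance.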
One of the things we have learned over three years of operating OpenStack is that if you're trying to improve, just to get to the next level of maturity, at the same time you're trying to change, the two cancel each other out and you end up in a deadlock. So really: change your technology first, then improve your process, your workflow, whatever it is. Any other order, you will regret it. I have regretted it many times over the last three years.

All right, packaging. This was the next big area we had to go through. The process hasn't changed much in the last two years: pull the upstream code for Neutron, Nova, and Glance; merge the code with your internal patches; build a virtual environment for all three; and then bundle, compress all three of those virtual environments into a tar. I'm seeing people taking pictures; these will be up on SlideShare too, just so you know. The service-oriented package world looks like this. It still looks exactly the same, except we're just doing it for one project. We're still pulling, merging, building, compressing, but now instead of doing it for three projects, it's just the one, which allows us to get to something like this in the Jenkins world, where we have the jobs driving the package automation. On the left-hand side you can see there's just a packaging tab; all the jobs are muddled together, as they all run at the same time. On the right-hand side, we have it just for Neutron, and on the screenshot you can see the Glance tab also exists, because we have completed this process for Glance as well. So now they have a nice little world, their own little domain, where they can iterate independently of Nova and independently of Glance: pull their own code down, merge or not merge, build a new package or iterate on their current package a little longer, and then deploy to their own QE environment, their own pre-prod environment, all of those things.

The lessons learned on packaging look a lot like configuration management. First, remove the configuration dependencies before you start packaging. You can do some of the work in parallel, but having your configurations nice and tidy and separated will make a cleaner transition for your packaging. Again, keep the same baseline structure in place, resist the urge to change your functionality, and then iterate and improve. We've changed packaging, gosh, so many times, and I'm pretty sure they're looking to change it again, away from virtual environments that are becoming unstable because we've gotten bigger. And it will be a one-to-one transition, and then iterate and improve.

So, my orchestration slide is missing. I think it's hidden, sorry. Now we're moving over to orchestration. In the monolithic orchestration, we have a pretty much slam-and-done approach. Pre-stage first, which can take anywhere from 60 to 180 minutes depending on the size of the region and how many computes you have to get that package onto. Then, when you get into the deploy phase, we're stopping services across the board, because some services can't handle a rolling deploy or a rolling upgrade. Some of them can, but when they're all together it's very difficult to retain the good experience of a rolling upgrade when half of your services are dead in the water. Apply the configurations, then re-enable the alerts, and then go through the validation process with the quality assurance engineering group.
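If you squeeze that whole monolithic flow into one stripped-down Ansible play, it looks roughly like this. Again, it's a sketch: the host group, helper scripts, and tarball name are made up for illustration, not our real orchestration.

```yaml
# Illustrative sketch of the monolithic, slam-and-done deploy.
# Host group, helper scripts, and artifact name are hypothetical.
- hosts: control_plane
  tasks:
    - name: Silence monitoring alerts for the maintenance window
      command: /usr/local/bin/silence-alerts          # hypothetical helper script

    - name: Stop the OpenStack services across the board
      service:
        name: "{{ item }}"
        state: stopped
      with_items: [nova-api, neutron-server, glance-api]

    - name: Unpack the single tar carrying all three virtual environments
      unarchive:
        src: /opt/releases/rackstack-release.tar.gz   # hypothetical pre-staged artifact
        dest: /opt/venvs
        remote_src: yes     # the tar was already copied out during the pre-stage phase

    - name: Start all of the services back up
      service:
        name: "{{ item }}"
        state: started
      with_items: [nova-api, neutron-server, glance-api]

    - name: Re-enable the alerts before handing off to QE for validation
      command: /usr/local/bin/enable-alerts           # hypothetical helper script
```

The service-oriented version ends up being essentially the same shape scoped to a single project, which is what makes the rolling behavior I'm about to describe possible.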
When you get into a service-oriented orchestration mindset, your pre-stage time gets dramatically reduced because your package footprint is smaller. Even if you still have to go to all the computes, there's just not as much data that you have to put on there. You're still going to silence the alerts, but what having all of that separation in the config and the packaging allowed the Neutron team to do is rolling stops of the Neutron API through the F5. So now, assuming there's no update required to the Quark agent at the compute level and no database schema changes, they can actually do a middle-of-the-day deploy to Neutron with no customer impact, which we could not do three years ago. That's really a huge testament to what can happen. They're expecting that by the end of summer, that team will be able to handle disruptive deployments even with database schema changes, because they've actually started versioning their database. One of the Neutron guys is here, if you wanna talk to him afterwards. Yeah, he threatened to heckle me, so I'm reverse heckling him before he has a chance. With the Quark agent, the restart is just on the agent; it doesn't require a compute restart, so it's not touching any of the Nova services, which allows it to be completely transparent to the customer.

So all of this together, config, packaging, orchestration, gives us the deploy-neutron playbook, of which this is a small section. That playbook has tasks defined for the pre-stage and deploy actions, which call out to that Neutron config repo we talked about in the first section and reference the Ansible roles for the Neutron services. So you've got everything now in that one directory, with the dedicated package and its own deploy orchestration, and they can start iterating on how they do their work. So, our takeaways at the end of all of this: convert to that role-based configuration first, separate your packages, and then you've got your playbooks that can reference the roles in the configs. Again, resist the urge to change the functionality as you're changing the technology or the process; get that as-is baseline, and then iterate and improve, which the Neutron and Glance teams are continuing to do as they get closer and closer to true zero-downtime deployments in the public cloud.

All right, culture. This one's near and dear to my heart, because we were a bottleneck, and by we, I mean my build, release, and deploy team was a bottleneck for some of the innovation that was necessary. I had a very small team when we were doing a lot of this; I think at our largest we were seven or eight people, and I'm actually giving a talk tomorrow at 4:20 about that experience of building the system across three countries and seven time zones. That one's at 4:20 on the community track. So we had the three different dev teams coming into this funnel, into our team, the birds development team, and we were giving them tools. We were also doing the deployments at this time, the monolithic deployments, staying up really late, lots of sleepless nights, and also writing the tools and trying to improve upon what we ourselves were using. It was extremely stressful.
Now, in our service-oriented mindset from a culture point of view, we've got the birds tools, which the dev team is developing, going out to the different development teams, the Nova, Neutron, and Glance dev teams, so that they can iterate, improve, and adapt at their own pace. They're not waiting for us to do their deployments for them or to improve the tooling. They're also feeling the pain, the pain of late-night deploys, deploys that don't go well, services that aren't resilient to a restart, and that's encouraging innovation. Just so you know, back in October my team was able to turn over a button, quote unquote, which was a Jenkins job that abstracted away the Nova deployment and the Neutron deployment, and since then the individual dev teams at Rackspace have actually owned their deployments. And oh man, have they gotten better. It's amazing: when you're the bottleneck and people are just assuming you'll do it, you can tell them, hey, this is awful, we can't keep doing this, and they don't quite believe you until they have to actually do it themselves.

So, the takeaways on culture. I was actually thinking about this this morning, finalizing what I wanted to say on this topic while I have an audience: prevent your bottlenecks and your burnout proactively. Look at your process; it's not a four-letter word. You don't need to have process for the sake of process, but look at where you can actually engineer efficiency just by looking at how people are working and interacting and communicating, so that you're saving yourself headache, and rather than just hoping, actually go forth and tackle it. If you can at all handle the head count, separate the deploy system development from the release engineering process. And if you don't have enough people on staff, even just rotate who's actually doing that release engineering process: who's keeping track of what's coming from upstream into the downstream package, how things are clearing the validation process, and who's staying up at night if you need to do a disruptive after-hours deployment. That can go a long way toward keeping people from being burnt out. A lot of operations communities, and engineering communities in general, look at change and release management as a block, as something that's gonna stop you. But unless you're truly CI/CD, which for most of us really means continual impact and constant downtime, embrace that change and release management process. Look at that team if you have one, Rackspace has a great CRM staff, and look to them to help you schedule your releases and communicate to your customers, whether they're internal or external, and use that to be optimistic while working within the confines of reality. And I can say, after three years, if you don't, you will probably regret it.

All right, one shout-out before I open it up to questions: we're hiring. Even though I'm not the manager of this team anymore, there are reqs open for software developers, operations engineers, and senior Linux engineers working on this exact problem space, if that's interesting, and there are lots of other opportunities too. And I wanna leave you with this last thought: people don't want a quarter-inch drill, they want a quarter-inch hole. This is from Clayton Christensen, who is the author of The Innovator's Solution, and I've studied his work a lot, about how we disrupt, how we introduce innovations that disrupt the industry.
Jonathan talked some about this with Uber and the cloud, and how OpenStack is actually disrupting within the cloud industry as well. As operators, we're the ones that are supposed to figure out how to get the quarter-inch drill that'll make the quarter-inch hole. So I hope this has given you a little bit of inspiration, that the way forward to simpler OpenStack operations is nigh, it's coming, and we've come a long way. And I will open it up to your questions. Thank you so much.

If anyone has any questions, please use the microphone for the sake of the recording.

Okay, I'm like, did I not do that good of a job? Thank you. Yes, sir.

I was gonna ask about your stuff. I'd also love to know who you are and what you're doing in OpenStack right now. Oh, that sounded mean. I'm Mike Dorman, so we're primarily operators. I wanted to know if there was something fundamental you identified in Puppet that wasn't going to work for what you wanted to do going forward, versus just retooling all the manifests you already had.

So we had re-orchestrated and re-engineered Puppet, I think, three times already. We had started with centralized Puppet with a Puppet master; in February 2013, that's really what keeled over, and we switched to masterless Puppet at that point. Then later on, probably about a year later, we tried to introduce MCollective into the mix to get some of that larger-scale, agent-based, real-time capability going. It was honestly really difficult to get some of the engineers to work in Ruby; they just weren't as comfortable with it. And also, we could not figure out how to tune it correctly to get it to perform the way it needed to. And then, because all of our Puppet manifests were from the original launch in August of 2012, there was a ton of "we have no idea what we're doing" coded in there, and it needed to be refactored anyway. Jesse Keating was one of the engineers on this team, he's a really great Ansible guy, and he was willing to do the work. Sometimes when you put all those things together, that's really how you end up. And AnsibleWorks was extremely responsive to working with us to get the performance we needed at the scale we were at.

So would you characterize it more as solving the orchestration problem across multiple nodes, versus config management specifically? Yeah, because the Puppet config management was really helpful. I mean, it was great; we used it for over three years. But when you're looking at the same problem over and over again at that scale, just having one tool is so much better for your mind, for your mental health. Thanks.

Rushi, from Reliance in India. The question is, what are the problems you're facing with using virtual environments as packages, and what are the alternatives you're looking at? So the way we're using virtual environments, shipping them around and moving them between machines, is totally off the books: undocumented and unsupported. And what we're really starting to run into, I think, more than anything, is just the time it takes to get the package created each time. I know there were more reasons; I haven't been involved in the most recent conversations. It's also partly just a holy war between doing a Debian package or an RPM versus using a virtual environment. It keeps going back and forth.
We tried the Debian route and had problems with dependencies, making sure dependencies were installed, so we went to virtual environments. But now, getting down to small enough increments with a virtual environment for all the different services we want to break out seems like a lot of extra work. So it keeps cutting back and forth, and I'll be interested to see how it turns out, what they actually move to, or whether they stay with virtual environments for another year. Okay, yeah. Thank you. Thank you.

Glenn Neely, with HPE Education. As I teach, one of the things that's really exciting to hear about is the rolling upgrades. Now, if I'm at Folsom, can I just jump to Icehouse, or is there some sort of "you have to be here before you can go there"? So that whole upgrade process should be really well documented by the infrastructure team in upstream OpenStack. We've never done forklift upgrades from one release to the next; we've always stayed within two to three months of trunk, so I am not as familiar with those. I do believe there is a dependency map of how you have to go, and coming from Folsom, I'm almost certain you won't be able to go straight to Icehouse, but I'm not familiar with what the path is. All right, thank you. You're welcome.

Hi, I'm Ravi from Yahoo. How are you handling the DB migrations without any downtime? I just wanted to get some idea of the plan. Yes, so I'm going to encourage Jason Meredith, who is the team lead and was supposed to be here to answer these questions but couldn't make it, to write a blog post for Rackspace to share this. From what I understand, at a high level, they're versioning the database schema and then queuing up the changes, for the case where an API node has the old code and is talking to the new schema. Then, during the deploy process, they'll actually catch up the data changes: any of the requests that have been queued up from an API will get batch updated. I don't have much more technical detail. In fact, he was like, you shouldn't even mention it, because then they're going to ask you really technical questions that you're not going to be able to answer. Because that is like the Holy Grail; I mean, I can't wait for Nova to get to that level. And they are working on it really hard. Okay, good. Thank you. Thank you.

Hi, Matt from IBM. So I was wondering: you described your birds team, and I think you also have separate teams that focus more specifically on Nova or Neutron. How much of the code, the Ansible playbooks you were talking about, is actually developed by your birds team versus the Nova-specific team? Up until about September or October of last year, it was probably 95 to 100% developed by our team. And then, I don't know, Ben? No, because you guys were doing other steps. Probably just the deploy orchestration, actually. Now that we've moved to that service-oriented culture and mindset, on my birds team everybody's kind of moved on to other things, so they're rehiring and rebuilding it, and there's only one person right now. Any changes are now totally on the other dev teams, which is really a great thing; it's what we wanted. So the team moving forward is looking at how to take on even bigger challenges, now that we've got localized engagement.
I'm also curious whether you're publishing any of the Ansible playbooks you're using externally. So part of the problem is that we haven't moved over to a fully database-driven configuration for the secrets and the settings, the parameters. You wanna publish your password on GitHub? I believe they would; I think there's a desire to, it's just a matter of how we get there. You can also pick Jesse Keating's brain, JLK on Freenode. He's an extrovert and likes to talk to people, and he's probably gonna kill me. He's at Blue Box now, but he did a lot of the original engineering that led to the service-oriented deployment. Great, thank you.