Thanks for coming to the Postgres conference. Our first speaker today is going to talk about using Puppet to manage your Postgres deployments. This is very near to me, so I wish I'd known about this beforehand, but I'm looking forward to it. So Chris will talk about Puppet and Postgres. Thank you. Thank you. As Sai mentioned, my name's Chris Everest, and I'm here to talk about managing PostgreSQL with Puppet. I'm really excited to be here. We selfishly put this Puppet module together to help us with our jobs, it's kind of our one-click way of deploying Postgres, and we're fortunate that we were able to release it open source and give this tool to everyone else. So just to get started and give me a gauge of the audience, can I get a show of hands of who uses Puppet at all? Okay, so we've got a few people. Anyone else using Chef or Ansible or anything like that? Okay, all right, great. So everyone kind of knows what we're going to be talking about here as far as configuration management. As I said, well, I didn't say this yet, but I'm from CoverMyMeds. I'm a systems engineer, and I work with all kinds of OS-level stuff and application stuff. I'm more of a sysadmin than I am a DBA, but I have very smart DBAs who work with me and helped put the requirements together for this tool and this presentation. So, some resources here. We are hiring right now for a DBA; if you find yourself in the Columbus, Ohio area, we would love to have another PostgreSQL professional on our staff. Scrippscribe.org is a blog our company maintains for technical posts and things like that, so if you're interested in any of the stuff I have to say, come visit us. And you can find all of our GitHub resources under the CoverMyMeds organization.
So, down here at the bottom left of the screen, if you want to follow along, the presentation has some code snippets and things like that so you can browse around yourself. So, CoverMyMeds: we are a unique organization that, in short, makes it easier for patients to get prescriptions authorized. Most people don't know about this, but if you've ever had any sort of complicated health problem or a very expensive drug, sometimes it's really hard to get your prescription. Our company enables electronic submission of prior authorizations. My goal is not to give you the spiel about my company, but we have a really cool company and everything we do, we try to open source. So not only are we helping the open source community, we're also helping people with their health, and that makes us all very happy. Just wanted to make sure I mentioned that. So, an overview of what we're going to go through here: I'm going to give you an idea of why we needed this Puppet management for Postgres, give you an idea of what our environments look like, and then step through our Puppet module. It's funny, I wrote this presentation a little over a month ago and have been fine-tuning it since, but a month ago we considered ourselves a service-oriented architecture, and somehow in the past month we've started talking about a microservices architecture. That's a little scary, but if you don't know what that is, it's kind of a big buzzword now: it basically means lots of services that each do very specific things, are all separate, and work together to make one entire service or platform, if you will. What that means for us is tons of applications. I'll go through those counts and give you an idea of our environment a little later.
The other thing we're doing right now is migrating from SQL Server to Postgres. Are there any SQL Server users in here? One, all right, good. Hopefully next year there will be zero SQL Server users in here. We'll be off of it and you'll be off of it. Just one more point about SQL Server: we have one gigantic SQL Server, a legacy institution from when the company started maybe 10 years ago, and the way we have everything laid out, the service-oriented, or now microservices, architecture is taking us from this one big giant SQL Server database to tons and tons of little tiny Postgres databases. That's scary to us. We have two DBAs and four or five sysadmins, so we have way more databases than we have people to even figure out how to manage them. Then take all of that, lots of databases, lots of apps, and we also have lots and lots of environments. We have a development environment for every developer, test environments for test engineers, and integration environments that customers can get for themselves and use for integration testing with their own applications. And finally, all these things have to change at a moment's notice: when code changes, when schemas change, all kinds of stuff like that. In production we have about 50 applications. I wrote this a month ago; we probably have 55 applications this month. We're constantly deploying brand new applications. We have eight production databases, probably 10 by now. So we're moving quickly. In our integration environment we currently have three. Multiply that by the number of apps, by the number of databases, and you can see how things grow very, very quickly. And then testing and development just gets way out of control. Does anybody here use Vagrant?
Okay, so we use the same Puppet configuration in production, in all these other environments, and also in Vagrant. So we get to test this path to production through all of our apps, all of our databases and everything. These giant numbers here correlate to all the Vagrants: every developer gets their own Vagrant, and some of them have multiples depending on the feature branches they might be working on in their code. This is our battle cry on the sysadmin side. It's like the Oprah meme: everybody gets a database! I'm not much of a comedian, but I had to put that up there to honor the team. So, of the folks who are using Puppet, is anyone using the Puppet Labs PostgreSQL module? Anyone? No, okay. All right, well, that's cool. Then I can say whatever I want and no one will even know what I'm talking about. I can lie all I want. I'm not going to do that, I promise. So Puppet Labs, for those of you who don't know Puppet, is the vendor that maintains Puppet. They have the best Puppet modules on GitHub and they open source everything they do. Their PostgreSQL module is one they consume themselves, because Puppet itself uses Postgres as a data backend, so they have a really great Postgres module. You don't really have to be afraid of the module Puppet Labs releases, because it's as easy as this one line to create a Postgres server. So, barring all the complexity behind setting up Puppet itself, as Valentine said, there is almost a one-click way to deploy Postgres. This will install the official PGDG Postgres repos whether you have a Debian-based system or a Red Hat-based system, it sets up the data directory for the version of Postgres you're running, and it supports multiple versions of Postgres running on the same machine.
It starts up the database server with a postgres database, like you'd get if you installed it yourself, but you only need that one line. Another couple of lines and you can have a custom database with a user granted ALL on it. So what's wrong with that? Not every database you create do you want to give someone GRANT ALL on. I can't complain, though, because this is pretty awesome and super easy to do, and if you're developing, yeah, you probably do want GRANT ALL on your databases. You can go one step further, create specific Postgres roles, and apply very granular grants. Again, this example does GRANT ALL for the role app_user on the database covermymeds. And yeah, great, it's totally easy. But what happens when you have 50 more application roles and you don't want to give GRANT ALL to every user you create in your database? There's lots of work to do. And this is exactly where we started making this Puppet module. We spent probably a good month just figuring out what we needed to do. Then we spent probably two months not doing anything and just saying, yeah, I think we could do it this way, but that won't work. We noodled over it for a long time, and finally we spent another month sitting down and hacking through all this stuff and getting it working for us. It's definitely a success story; otherwise we wouldn't have come to talk to you. So the cool thing about the Puppet Labs module is that those 10 or 15 lines I showed you earlier are super awesome, but there's way more in the module that makes it very, very flexible. You have to break down all the things you need to do, like create app users, create databases, figure out what kind of grants you want to give your users, or roles in PostgreSQL speak, and have extreme patience and understand how the Puppet architecture works so that you can make PostgreSQL work for you.
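To make those "one line" and "another couple of lines" concrete, here is a sketch from memory of the puppetlabs-postgresql usage being described. The database, role, and password names are placeholders, and the exact function names can vary between module versions:

```puppet
# Optional globals: pull in the official PGDG repos and pin a version.
class { 'postgresql::globals':
  manage_package_repo => true,
  version             => '9.4',
}

# The "one line" that stands up a Postgres server.
class { 'postgresql::server': }

# A couple more lines: a database plus a user with GRANT ALL on it.
postgresql::server::db { 'covermymeds':
  user     => 'app_user',
  password => postgresql_password('app_user', 'changeme'),
}

# Going one step further: a separate role with a granular grant.
postgresql::server::role { 'report_user':
  password_hash => postgresql_password('report_user', 'changeme'),
}
postgresql::server::database_grant { 'report_user connect on covermymeds':
  privilege => 'CONNECT',
  db        => 'covermymeds',
  role      => 'report_user',
}
```

The point of the talk is that this works beautifully for a handful of databases, and the pain starts when you multiply it by dozens of apps and roles.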
And the secret to all of that: everyone knows the psql command. There is a postgresql_psql resource in Puppet that allows you to do anything the psql command can do on the command line. By the way, I'm going to gloss over a lot of Puppet-specific stuff, so if I've glossed over something that didn't make sense, feel free to raise your hand and I'll be more than happy to dive in deeper. You give the resource a unique name to identify it, give it the command you want to run, the database you want to run it on, and an unless statement, which is very important: it says, don't run this command if the condition is already satisfied. That's to make sure your Puppet resources don't act over and over again. Once you have a user, for example, you don't need to recreate that user five minutes later, ten minutes later, ten years later. And finally, you require the PostgreSQL server, which we built in the very first step. So Hiera data is another very, very important concept in what we're doing here, so I'm going to talk quickly about it. Hiera is a YAML-based configuration file format that allows hierarchical organization of your data. You can have a very baseline set of configuration data, in which case, and I'll show some examples as we go forward, your baseline config goes to everything. Or you can override at a per-environment level, so maybe you have specific production configs that only go on production servers, or host-specific configs that only go on certain hosts. Hiera lets you organize that hierarchy, and then Puppet will pick and choose which configs which environments, which servers, or all servers get. I shouldn't say we have a dependence on it, but it makes what we're doing with this Puppet module a lot easier. I'm going to talk through the Hiera data portion of it as we go through here.
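The hierarchy being described would be wired up in a hiera.yaml along these lines. This is a sketch in the Hiera 1 style that was current around this era; the paths and datadir are assumptions, not the talk's actual layout:

```yaml
# hiera.yaml: host-level data wins over environment-level data,
# which wins over the baseline (common) config.
:backends:
  - yaml
:hierarchy:
  - "hosts/%{::fqdn}"
  - "environments/%{::environment}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
```

With this in place, a key defined in `common.yaml` reaches every node, and the same key in a host-specific file overrides it for just that host.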
There are two important Hiera data structures that we use. One is a database-server-to-application mapping; in human speak, which applications are going to talk to which database servers, or which databases. The second layer of Hiera data we use is an application-to-DB-role mapping; in human speak, which Postgres role is the application going to use to connect to a database. The reason we split these up, and hopefully this will become a little clearer, is that we wanted to detach these two configuration aspects from one another: we wanted the database server to only know about the databases on it, and the application roles to only know about the applications and the databases. We'll talk a little more about that. I kind of went through this, but the first one is defined at the host level, so our list of apps talking to a database or database server is defined at the host level. In this case, using Hiera, we have a database cluster named cluster1, and at that host level of our config hierarchy we define all the applications that are going to talk to it. This is an example of that data structure. At the top here is the YAML format: cmm_pgsql is the name of our Puppet module, and db_list signifies that this is our list of databases and that they belong to cluster1. mydatabase is a database, and these are all the apps that are going to connect to it. At that point, all that host needs to know is: what apps do I need to worry about? The second stanza down here is the equivalent data structure as it's yielded inside Puppet; if you were debugging and looked at it, that's what it would look like. It's a hash, a hash of arrays. And then the second data structure I talked about is the list of Postgres roles, i.e.
users that are going to connect to your database. These live in the baseline config, so they apply across everything in your environment: in production, the user application_user, its password, and the database it connects to are available to any application, any database, anything at all. Here's an example of what that data structure looks like. This is probably chicken scratch to a lot of you, but we have a very detailed README on our GitHub repo that ties all this together, so if anybody wants to go reference that later, it's pretty good. So this is our second structure, the application-to-role mapping. This is an app-specific variable: the namespace is the application, we're saying this is the db_config portion of the app config, and then comes the application name, so we know which application it belongs to. Then it gets a list of databases, and in this case it gets a write handle, which gets a default write role, which we'll talk about later. That's not a Postgres role; it's an abstract data structure that says what kind of permissions role you get on the database. Then there's the host it's talking to, the database adapter, username, password; that stuff's pretty straightforward. And again, translated into Puppet, this data structure is a hash of hashes three levels deep. The advantage of both of these data structures is that you can expand them infinitely: you could have 100 databases on a cluster, you could have 100 users talking to a specific database. And then a really important part of this is the write handle. We put this third layer in so that we could define different types of application handles for all of our apps: we can point an application at a write database, in this case like a master, or at a read-only database where we want to do really quick read operations.
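Reconstructed as YAML, the two Hiera structures described above look roughly like this. The key names approximate the schema described in the talk (the repo's README documents the real one), and the hostnames and passwords are placeholders:

```yaml
# Structure 1, host-level: which databases live on this cluster,
# and which apps connect to each of them (a hash of arrays).
cmm_pgsql::db_list:
  cluster1:
    mydatabase:
      - app_one
      - app_two

# Structure 2, baseline/app-level: how an application connects,
# and with which kind of role (a hash of hashes, three levels deep).
application::db_config::app_one:
  mydatabase:
    write:                      # the "handle": write, read, owner...
      role: default_write       # abstract permissions role, not a Postgres role
      host: cluster1.example.com
      adapter: postgresql
      username: app_one_user
      password: changeme
```

Because the two structures are independent, a database server only has to know its db_list, and an app's connection details can change without touching the server's config.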
That's a whole other topic on the application side, but it's something we wanted to make sure we could support going forward. Now that I've given you the high level, we're going to go through the Puppet module as we built it. You can go to github.com, the covermymeds organization, and find cmm_pgsql. I noticed this morning that we probably should have named it puppet-cmm_pgsql to comply with Puppet conventions, but we'll do that some other time and break everyone's Git repos when they check it out. Now I'm going to get really deep into the Puppet of what's going on, so again, if you have any questions, please let me know. So, init.pp is the base manifest; manifests are what Puppet considers your source files. init.pp is like the index.html of a web page: it's the first thing Puppet looks for when it's going to operate on something. Our init.pp does a quick little setup, calls out a few other manifests, which we'll go through, and it also checks whether the database host is a master, a slave, or in the middle of a failover. If it's in the middle of a failover, it does absolutely nothing; it exits and says something bad is going on, I'm not screwing with the database. If it's a slave, it does nothing, because replication is going to handle everything on the slave. The advantage there is you can create one cluster, spin up your slaves, run a quick script to initialize replication, and scale out the number of hosts in your cluster very easily. Next, still at the baseline, init.pp calls setup.pp, and setup.pp does some very baseline things: it sets up pg_top and pg_repack, utilities we want on every database, and it installs some monitors for that database which get exported to our Shinken/Nagios system, so every database we spin up automatically gets monitored.
And like I said earlier, it does things for masters and doesn't do things for slaves. Whoops. Oh, that's not a whoops: we are going to talk about Hiera there again. In setup.pp, we leverage Hiera data a little more and manage all of our Postgres config, your postgresql.conf file. Again, we can apply our hierarchies at the host level, cluster level, environment level, or baseline level, or even to a developer machine running Postgres. So this is another Hiera YAML structure. Again, it's referencing cmm_pgsql, and the key is config, so we know how to get to it. Anything that is valid in a postgresql.conf file is valid in here. It's an array of key-values, an array of hashes, actually. So anything you can think of is a one-for-one: stick it in here and it lands in your Postgres config. Well, one of the pitfalls of this... yeah, go ahead. What would you put in besides value? You bring up a good point. I probably should have shown the other end of this that consumes the config, but value is a parameter of one of the Puppet Labs resources. There's a Puppet Labs resource, postgresql::server::config_entry: the name of the resource would be, say, hot_standby, the parameter is actually called value, and then you give it a value. So that's a built-in, but you're right, it would make more sense if I showed that part of it. Next, if we're a master, we set up some very important things, basically everything our cluster is going to do. We create an administrative user for our sysadmins to use so we don't operate on the database as the postgres user; we use that for taking backups and handling schema change deployments, things like that.
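One plausible way the Hiera config is wired into the module, sketched with Puppet 3-era idioms (the real manifest in cmm_pgsql may differ; `cmm_pgsql::config` is the key name described in the talk):

```puppet
# Sketch: feed Hiera entries into postgresql::server::config_entry.
# A Hiera entry like
#   cmm_pgsql::config:
#     hot_standby:
#       value: 'on'
# becomes the equivalent of writing:
#   postgresql::server::config_entry { 'hot_standby': value => 'on' }
$config = hiera_hash('cmm_pgsql::config', {})
create_resources('postgresql::server::config_entry', $config)
```

This is what makes "anything valid in postgresql.conf is valid in here" true: the resource name is the setting, and `value` is the only parameter you need.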
We create default permissions on the public schema and then revoke CREATE by default, so that any app Postgres roles we put in there don't get dangerous access to our database cluster. At this point, we look up the cmm_pgsql::db_list config I talked about earlier, our database-server-to-app-list mapping. Once we have that mapping, we invoke that entire data structure, which is a database and the list of apps that connect to it, and it creates a database for each one of those applications. At this point, all we're doing is creating a database; in the earlier step we did some safe things to the default public schema so bad things don't happen. This is actually a little hard to read now that I see it up here, but the app DB right here is basically what gets called: this is the name of our database, and these are all the apps that will be connecting to it. Next, we create a DB handle, taking our list of apps and mapping it to the handles we talked about in the second data structure. So we've basically created all of our databases, and then Puppet says, okay, I've got my database, what apps need to connect to it? We go through our list of apps and create all of our database handles. Let's see if I can get back over to that. Yeah, in this case, this is the structure that creates the database handle. And finally, the app user is created after the handle, which allows us to manage very granular grants based on templating. I think we have three or four templates right now, because for the most part an application gets pretty standard access: we have a default write template, a default read template, and a DB owner template.
But the cool thing about this is that we're going back and using the postgresql_psql resource I talked about earlier, which is like running psql on the command line, except you're not on the command line and you don't have to remember what command you ran. I want to stop and talk about this, because we've gone through all this data and I'm sure it's swimming in everyone's heads. In essence, what we're doing is running psql and generating a list of grants based on a template; we'll look at the template in a second. We're acting on a database that was in our list of databases running on the Postgres server. We're running as the postgres user and group at this point. The path is not a big deal, but it makes sure you don't run the command against the wrong instance of your Postgres server. And then this unless statement. The unless statement is the bread and butter, and we spent I don't even know how many hours discussing how we were going to manage it, because there are two ways of doing it. We chose the easy way, and hopefully one day we'll go back and do it the hard way, the right way. The unless statement says: if this user has a default ACL of read-only access on this database (here's the username and here's the database), don't do anything; we assume our template has already been applied. We'll talk about the pitfall of that a little later, but it works really well, because you run your grants once and only once and you never have to worry about them again. Yeah, what's that? It does a SELECT; if you go look at the postgresql_psql resource type in the Puppet Labs source code, they're running a SELECT 1 on that subquery. Yeah, yeah.
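Put together, the resource being described looks something like this. The long resource title, the template path, and the unless query are reconstructions rather than the module's exact code; the idea is that pg_default_acl is where ALTER DEFAULT PRIVILEGES entries live, so finding an 'r' (read) entry for the user means the template has already run:

```puppet
# Sketch: run a templated set of grants, once and only once.
postgresql_psql { 'cluster1_mydatabase_app_one_user_grants':
  db         => 'mydatabase',
  psql_user  => 'postgres',
  psql_group => 'postgres',
  psql_path  => '/usr/pgsql-9.4/bin/psql',
  command    => template('cmm_pgsql/default_write_grants.sql.erb'),
  # Skip the command if the user already has a default read ACL here.
  unless     => "SELECT 1 FROM pg_default_acl WHERE array_to_string(defaclacl, ',') LIKE '%app_one_user=r%'",
  require    => Class['postgresql::server'],
}
```

Puppet wraps the unless query in a SELECT 1 subquery; if it returns any row, the command is considered already applied and is skipped.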
Well, if it returns anything, it's kind of like returning true: it returns a record and says, okay, this happened. Yeah, yeah, exactly. You want to make sure that what you're checking is somewhat indicative of what you're trying to do, because if you don't... It's also where it falls down. Exactly, exactly, and I'll talk about that. So at the top here, you can see that in the command parameter there's a template, and here's how we design our templates. I put some line breaks in here; I don't think that would necessarily work in real life. But it's going to run all of these grants with that username on that database. This right here is our default write grant template. Pretty much every application that connects to one of our app databases gets at least one user with default write permissions, so it can do pretty much anything except create tables; it can't drop tables, it can't alter tables, because it isn't the owner of the database. In this case, the postgres user is the owner of the database. Then we have another, read-only template, so we can restrict the grants a little more, and like I said, we have a DB owner template which would create the user as an owner and give it owner access to the database. And like we were saying about the unless statement, the way we're doing it (we'll go back up here), we're only checking that the user got default ACL access of read, which is kind of dangerous, because a sysadmin, or anyone with administrative access, could go in there and change grants, and we don't really have a way to protect against that. But at the same time, we figured we're restricting all the other access to the database, and we have to trust our team to do the right thing.
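To make the templates concrete, a "default write" template along these lines is plausible. This is a reconstruction from the talk's description, not the module's actual ERB file, and the exact privilege list is an assumption (DML allowed, DDL withheld); the real template interpolates the role and database names:

```sql
-- Default write grants for an app user: full DML, no DDL/ownership.
GRANT CONNECT ON DATABASE mydatabase TO app_one_user;
GRANT USAGE ON SCHEMA public TO app_one_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_one_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_one_user;
-- Cover tables created later, too; this is what shows up in pg_default_acl,
-- which is what the unless statement checks.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_one_user;
```

The last statement is the link back to the unless check: the 'r' the query looks for in pg_default_acl is the SELECT bit of that default-privileges grant.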
And if, say, you're giving a grant to a specific role because you know it needs it, the proper way to do it would be to record that in a template and make sure that the next time the user gets created, it gets created the proper way. So that's the pitfall of the way we did it, and we went back and forth, because the other way would be to create an unless statement for every single grant you're allowing a role to have. We started down that path and thought: this is a really hard thing to do, and not that we don't want to do it, but if we try to solve it right now we're never going to get our databases built. What we would really like to do one day is generate a grant template and a corresponding unless template, so that you could verify all the grants in the template, and then you'd have super tight audit control of all the grants on all your roles. Maybe in version 0.1, I don't know, we'll see. This is the output of the unless statement we were using, and like you asked, as long as your WHERE clause returns something... in this case, I'm running this query as the postgres user, and you can see what it returns; it's not cut off too badly. It returned something because, in our list of permissions, we do have an 'r' in there. Then, to wrap up how this data flow works, this is how the Puppet module fans out. We start with our two data structures: the server to its list of databases, and the database to its list of application roles. We start here in master.pp; we're only looking at setting up masters because, like I said earlier, slaves get everything the master gets via replication, within reason, obviously. We get into master.pp and say, in this case, we had three app DBs.
For each of those app DBs, we create our list of DB handles, which fans out to three application users and all the template grants that go along with those app users. And here we would do the same thing for apps four, five, and six on DB two. Look, I kind of messed that up, but for DB three you'd have apps seven, eight, and nine, and those would also fan out. You can see this can go on pretty much forever. Obviously, at some point you don't want 100 databases running on a single cluster, but one of the reasons we really like the two data structures is the approach we take with new apps. We run a really fast development cycle, so developers will say, hey, I've got this new app, it needs to be in production tomorrow. Okay, well, we're going to put you on the catch-all database server, because we have no idea what this app is going to do, and it might never really make it in production. The flip side is when an app in production blows up, becomes super popular, and for whatever reason gets a ton of traffic: we can peel it off that cluster and move it onto a new cluster of its own. We can spin up a brand new database cluster with this database, spin up a slave in the cluster, promote it to master, rename the cluster, and bring up three more slaves that all replicate from it, and we don't really have to do much but build servers and assign the data structures to those servers. And then finally, dirty little secrets. We do these lunch-and-learn code reviews for developers, and one of the developers used this phrase, which I thought was pretty clever, because everyone who writes something knows all the bad stuff it does. One of the things that's really frustrating about Puppet is that you can't loop yet; there are versions coming out.
Well, within the last year, the latest open source version of Puppet gained a way to do mapping functions over hashes and arrays, but we're using Puppet Enterprise, so we don't have that yet. So we use a custom function called prefix_keys. Let me see if I can go back here... well, I can't find the slide, but basically prefix_keys lets you create a unique resource name for anything you're doing. If you get down and dirty into this Puppet run, you'll see that by the time a user's grants are generated, the resource has this really long name, which is, I'm spitballing, a concatenation of the cluster plus the database name plus the app that's connecting to it plus the user. It's really just a way for Puppet to know that each resource is unique and only doing one specific thing. Once we can use looping, we can do that a lot more easily without hacking in this prefix_keys function. And competed.com, I don't really know much about the company, but they have this really awesome function, so we stole it and put it in our stuff; I figured they should get credit for that. I also talked about the other pitfall: the SQL grant templates and permissions, and how we're really only checking that default ACL read access is granted. If you're not really maintaining your systems right, you could have people doing bad things behind the scenes that your Puppet isn't managing. And finally, this module is highly customized, but that doesn't mean we don't want other people to use it or to get other people's feedback. Like I said, when we started, we didn't really anticipate releasing this open source, but once we started using it and realized how powerful it was for us, we thought: we should go tell people about this. So we released it on GitHub earlier this week, under an MIT license.
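The core idea of a prefix_keys-style helper can be sketched in a few lines of Ruby (Puppet custom functions of this era were Ruby). The name matches the talk, but the exact signature and behavior of the real function in cmm_pgsql may differ; this just shows the trick of namespacing hash keys so that resources generated from the hash get globally unique titles:

```ruby
# Return a copy of `hash` whose keys carry `prefix`, so that a resource
# generated per key (e.g. via create_resources) has a unique title like
# "cluster1_mydb_app_user" instead of a bare, collision-prone "app_user".
def prefix_keys(prefix, hash)
  hash.each_with_object({}) do |(key, value), out|
    out["#{prefix}#{key}"] = value
  end
end

users = { 'app_user' => { 'handle' => 'write' } }
p prefix_keys('cluster1_mydb_', users)
```

With looping (Puppet 4's `each`/`map`), the same uniqueness can be achieved inline without a custom function.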
It probably won't work for you if you go use it today, but we respond on our GitHub account very quickly, so if anyone has any questions, open an issue, send us an email, submit pull requests, whatever you like. We have a few things we know we can pull out of there to get rid of the customizations, and we'll probably be doing that over the coming months. Obviously we're going to have to maintain it for ourselves, so you'll see pull requests on it from our team. And it does more than I've shown. We also use it to manage our backups; that's another customization we have in there. We use it to manage our SSL keys, because all of our client connections run over SSL. We use it to stand up our slaves, so we deploy our slave-creation scripts with this Puppet module. If you go check it out, you can see some of the other utilities we have in there. And finally, we're using it to trigger a schema deployment tool we're building in-house. I'm hoping we can come talk about that next year, probably not me, but one of us, because we're working on a way to deploy schemas and track version control of schemas within our Postgres databases. So it's our Swiss Army knife, and it's been working really well. In fact, I think it was Monday or Tuesday: we loaded an old backup onto a test server and completely broke replication on one of our slaves, and this Puppet module saved our butts, because we basically just blew the slave away, rebuilt it, and it came back up with everything it needed. It was great. So that is our little world of Postgres and Puppet. Does anybody have any questions? Yes. It's all virtualized. We're running everything in VMware right now, and we have one data center right now, with a second and a third being built within the next year.
So we'll have a Puppet master in each data center, but we'll have the same Puppet config in every data center, and servers will know which data center they're in. As for the database server itself, there's another project on our GitHub account called MakeVM that uses some Ruby APIs to create VMs from kickstarts. So basically we go into our Hiera data config and update the data structures to list what apps we want on a cluster, or say we just create a new cluster and move them in there. Basically, all we're writing is a YAML hash, and then we run our script to create a new VM, and it boots up, runs Puppet, and it gets Postgres.

No, we have one cluster per VM, and then we'll have, say, three VMs in a cluster. I guess we call them clusters; it's probably not the real sense of the word. Basically, we'll have a master and a couple of slaves, and for some databases that aren't important we don't even have slaves.

Yes. I have no idea; I've never used AWS. We don't use any cloud servers at all, mainly because we deal with personal health information, so we're really gun-shy about that. So if you'd like to talk a little bit more about it after this and give me a little more information about how Amazon RDS works, I might be able to help you. I'd be happy to talk more about that.

In Puppet, you mean? No, we don't use run stages, and for no other reason than I vaguely recall someone on the team saying that if you're using run stages, you might have something else wrong with your manifest. But that was probably a year ago, and we've just never revisited it. I've seen other things use them and need them for certain cases. For example, I know we had problems when we were writing this with setting SSL to true and then not having the SSL keys there in time, so I think there are some things that might have benefited from that.
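The Hiera edit described above might look something like the following. This is a blue-sky sketch; the key names and hostnames are invented for illustration, not taken from the actual module:

```yaml
# Hypothetical Hiera data fragment (illustrative key names, not the module's
# real schema): declare a cluster, its members, and the apps that live on it.
postgres::clusters:
  cluster_a:
    master: pg-cluster-a-01
    slaves:
      - pg-cluster-a-02
      - pg-cluster-a-03
    databases:
      billing_db:
        apps:
          - billing_api
```

Adding a new cluster is then just another hash entry like this, after which the VM-creation script and the first Puppet run do the rest.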
Is that what you're talking about, doing specific things in different stages? Oh, okay, no. I would say we use the Hiera data hierarchy instead of that.

Did you have a question? For SSL keys? We don't manage SSL key creation through Puppet, so it doesn't fit in there at all. We provide a few parameters, if I remember correctly, that let you use whatever key you want. So it's not really managing expiration, but that would be pretty cool. There's an OpenSSL Puppet module that someone just showed me the other week that looked really sweet, and I bet you could take the resources it generates and pass them in. Yes?

Sure. So we have the same exact problem, and we don't have a solution for it yet. Right now, as I mentioned, we manually edit our Hiera data file, and I don't know how far we can go with the concept of not doing anything manual, but we're trying. Instead of using Hiera data to manage the configuration, we're starting to refactor a lot of our Puppet code to use custom facts. So these data structures, and again, this is just a blue-sky example, might live in a custom fact, which is on the host and a little more dynamic. That way you don't have some script committing to Git; that kind of gets funky. But the short answer to your question is I don't really know. We're trying to crack that same problem, and this DB schema deployer is one of the things we've been trying to write as sort of a middleware that our applications can use to get the stuff that the apps need over to the database. So again, I'd love to talk more about that, because we're trying to crack the Docker nut as well. Everyone loves Docker, but we can't figure it out.

Anyone else have any questions? All right, well, thank you. If anyone has any questions offline, feel free to hit me up. I appreciate everyone's attention.
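A custom fact along the lines described in that last answer might be sketched like this. The fact name and the data are invented for illustration; `Facter.add` and `setcode` are the real Facter API for defining custom facts:

```ruby
# Hypothetical custom-fact sketch: expose the cluster-to-app layout as a
# structured fact on the host, instead of a hand-edited Hiera file. The fact
# name (pg_clusters) and the data below are invented for illustration.

def cluster_layout
  # A real fact would read a local inventory file or query an API rather
  # than return a hard-coded hash; this stand-in just shows the shape.
  {
    'cluster_a' => { 'apps' => ['billing_api', 'auth_api'] },
    'cluster_b' => { 'apps' => ['reporting'] },
  }
end

# Facter is only defined when this file is loaded by Puppet/Facter, so the
# guard lets the data-producing part be exercised on its own.
if defined?(Facter)
  Facter.add(:pg_clusters) do
    setcode { cluster_layout }
  end
end
```

Manifests could then read `$facts['pg_clusters']` instead of a Hiera lookup, which keeps the layout data on the host and out of version control.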
Have a good time.