So, logistics. We're going to do a talk on scaling open source Postgres on platforms like OpenShift, and we're going to go into details on the Postgres Operator, which people will probably find interesting. So who is CrunchyData? CrunchyData is a company that provides Postgres consulting services, and several Postgres committers work for Crunchy. We do open source Postgres; it's all open source. We also do containerization of Postgres, and we'll talk a lot about the containerization side here, but we're basically an enterprise Postgres support company. We also have a booth here, right behind the Red Hat booth over there; 603, I think, is the number. So if I don't cover something in detail, stop by the booth and I will spend a lot more time talking about containerization and the Postgres Operator. If you're trying to do Postgres as a service, some of the things you'd want to look at: containers in general provide a sandbox around something like Postgres, so they make it easier to install, implement, and administer large numbers of Postgres instances in a cloud environment. Some of our customers do this on-premise, some in public clouds, so we've seen all kinds of different combinations, hybrids between the two as well. Some people will run, say, a Postgres replica on-prem or off-prem, but maybe the primary runs in some other cloud. Lots of different combinations, but we try to provide tools that let you do your own Postgres as a service. Okay, so why would you do this? We want to lower the cost of provisioning Postgres; containerization helps a lot with that, as does some of the tooling around it, so the Postgres Operator, for instance, makes the cost of provisioning really cheap. We also want to provision Postgres in a way where you have very good control over compliance, over how your Postgres is set up from a security point of view. We also support different kinds of authentication, and it helps you control where and how your Postgres is being deployed.
It's all open source too. There are two of these projects: the Container Suite, which is open source, and the Postgres Operator, which is open source as well. Some of the things that make up this Postgres as a service: a set of containers called the Crunchy Container Suite. There are about 12 to 14 Postgres-related containers in that suite, from running the database to managing it, doing monitoring and statistics collection, and things like that. We support OpenShift as a primary deployment platform, so everything we're talking about runs on OpenShift; the Postgres Operator runs on OpenShift as of 3.7. So this container suite is a set of building blocks, and it runs the open source Postgres database. It lets you do backups in a variety of different ways; we support today three backup utilities for Postgres, including large-scale backups using pgBackRest, so our Postgres container includes pgBackRest for incremental backup. It includes things like pgAudit for government or DoD requirements for database auditing, and then we do things like metrics collection for Postgres, and we let you scrape those metrics with Prometheus. Again, there's a GitHub link there at the bottom if you want to find more details; there's documentation and a suite of examples you can just download and try out. Images are provided out on Docker Hub; those are CentOS-based images, so everything's free for you to try. Some of the containerization terms that come into play whenever you're deploying a database are clearly pods and services and deployments, but most important is the management of persistent volumes. A database container like ours has multiple volumes that you can attach: the data itself is its own volume, but things like configuration files, archive logs, and backups are all volumes that get mounted into the container, so managing large numbers of Postgres instances with large numbers of volumes is a management task.
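To make the Prometheus scraping concrete, here is a minimal sketch of what a scrape configuration could look like. This is an illustration only: the job name, the `crunchy-collect` pod label, and the discovery setup are assumptions for this example, not details from the talk.

```yaml
# Hypothetical Prometheus scrape job for Postgres metrics containers.
# The job name and the pod label used for filtering are illustrative.
scrape_configs:
  - job_name: postgres-metrics
    kubernetes_sd_configs:
      - role: pod            # discover pods via the Kubernetes API
    relabel_configs:
      # Keep only pods carrying a (hypothetical) crunchy-collect=true label
      - source_labels: [__meta_kubernetes_pod_label_crunchy_collect]
        action: keep
        regex: "true"
```

The idea is that each database pod carries a metrics sidecar, and Prometheus discovers and scrapes them automatically as clusters come and go.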
This graph simply represents that as you scale up the number of Postgres containers, or any container for that matter, you get to a point where managing all of those becomes a burden on the human side. The Postgres Operator tries to solve handling large-scale numbers of deployments and decreasing that burden; that's why we did the Postgres Operator in the first place. People started deploying hundreds of Postgres containers, and one customer in particular is deploying about 700. When you deploy 700 database containers, something like an operator is really useful to help manage that sheer number of things. So about 11 months ago we started writing this Postgres Operator, and its job initially was to enable easy provisioning of Postgres and also Postgres clusters, so it lets you scale up the number of Postgres replicas. It defines custom resource definitions that are centric to Postgres, so there are things like pgcluster, pgreplica, and backups; those are all custom resource definitions that the operator supports. It's based on the Kubernetes APIs, and client-go specifically. So why would you do this? On the right is a typical advanced, or complicated, Postgres deployment: one primary and a number of replicas, with PVCs and claims and volumes and services. Everything on the right is basically automated by the Postgres Operator from a deployment perspective, but the management side is really where you get a lot of value from an operator. Because it knows about the bill of materials on the right, it can let you interact with bill-of-materials-type concepts for a Postgres cluster, as opposed to you having to individually manage small pieces of resources on your own or keep track of those. The operator applies metadata across everything on the right, so from a logical point of view you view it all as just a Postgres cluster. We're going to show you a real quick demo. The operator has a client interface today, and that client is
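As a rough picture of what one of those Postgres-centric custom resources might look like: the resource kinds (pgcluster, pgreplica, backups) come from the talk, but every field name and the apiVersion below are illustrative assumptions, not the operator's actual schema.

```yaml
# Illustrative pgcluster custom resource; apiVersion and spec fields
# are invented for this sketch and are not the operator's real schema.
apiVersion: example.crunchydata.com/v1
kind: Pgcluster
metadata:
  name: rs1
  labels:
    environment: test        # user-defined metadata label
spec:
  clustername: rs1
  replicas: 2                 # number of Postgres replicas to maintain
  primarystorage:
    storageclass: fast-ssd    # primary can use a different storage class
  replicastorage:
    storageclass: standard    # than the replicas
```

The operator watches resources like this and reconciles the deployments, services, and PVCs on the right-hand side of the diagram to match them.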
nothing but a REST client that talks to the operator's REST API, so when you're interacting with it as a human, you're interacting with a REST API. The first command, pgo create cluster rs1, is how you start creating Postgres clusters; there are binaries for Windows, Linux, and Mac as well that we distribute. So we just created three clusters there, and now we're going to apply some metadata labels. If you're trying to manage hundreds of these containers, you want to be able to categorize them in certain ways, so here we're going to apply a metadata label of environment=test on just two of those, and you'll be able to search and query based on those metadata labels. Imagine hundreds of these deployed with several different categorization schemes; you're able to view the assets you've deployed, so using this pgo client means you can examine what you actually have deployed out there as a set of Postgres assets. That command there shows you the kind of flexibility we've built into the operator; it's really geared towards complex deployments of Postgres. That particular command lets you place replicas on completely different storage classes than the primary; it also lets you specify resource configurations for different pieces of your Postgres cluster, and it targets certain kinds of nodes for the primary. It uses Kubernetes node affinity to let you decide, if you want to, where your primary Postgres is going to be, and then there's logic baked into the operator that applies node affinity rules placing the replicas on nodes other than the one where your primary is running, so it gives you a form of HA. That's the configuration file on the server for the operator; it defines all those configurations, both for resources and for storage classes. You can have any number of storage classes you want, and you can do things like backup to specific storage classes as well, so if you
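A sketch of what that server-side operator configuration could look like, with named storage classes and named resource configurations that clusters can reference. The key names below are assumptions made for illustration, not the operator's actual configuration format.

```yaml
# Sketch of a server-side operator configuration file; key names are
# illustrative, not the operator's actual format.
Storage:
  fast:
    StorageClass: fast-ssd    # e.g. for primaries
    AccessMode: ReadWriteOnce
    Size: 10Gi
  slow:
    StorageClass: standard    # e.g. for replicas or backups
    AccessMode: ReadWriteOnce
    Size: 50Gi
ContainerResources:
  small:
    RequestsMemory: 512Mi
    RequestsCPU: "0.25"
  large:
    RequestsMemory: 4Gi
    RequestsCPU: "2.0"
```

The point of naming these profiles centrally is that a create-cluster command can then pick, say, the "fast" storage for the primary and "slow" storage for replicas or backups without repeating the details each time.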
wanted your backups to physically be placed on storage classes in another data center, you can do that. That last command just shows you how to view a cluster; it gives you useful information, and then a simple test shows the status of the cluster. pgo df is just showing you the data capacity utilization of your Postgres on a volume basis, so you know how much of your PV you're actually using with Postgres data. And this last command shows overall operator status. It's all command-line driven today; we're working on a web app to front this, which is one of the roadmap features we're working on. That command there just shows you, overall, how many databases you're running and what versions of databases you're running, so if you're a DBA and you have hundreds of these things deployed, it'll tell you what specific Postgres versions you're running and how many of them are running. Here's another demo that's going to show a slightly different characteristic. You see at the bottom there are those labels; those are just user-defined labels. You can attach any kind of metadata to these that you want, and then you can query on that metadata using any kind of Kubernetes selector filters. Another thing, while this demo is running: this is controlled with an RBAC mechanism, so you can define different kinds of roles for operator end users. You can define read-only roles or admin users, so you can precisely define which features of the operator specific users can use; it just uses basic auth and a simple RBAC mechanism. There you'll see one with multiple pods that we've scaled up; you just run pgo scale, and that causes it to create a new replica attached to that cluster. From a debugging point of view, whenever we print out information about a cluster, if you have access to kubectl or oc, it gives you the ability to do further diagnostics by printing out that information. So, roadmap: people seem like they really like this
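The label querying in that demo relies on Kubernetes-style equality selectors (like environment=test). A minimal, self-contained sketch of how that kind of selector matching works; the cluster data here is invented for illustration.

```python
# Minimal sketch of Kubernetes-style equality-based label selection,
# as used when querying clusters by labels like environment=test.
# The cluster list below is invented for illustration.

def parse_selector(selector: str) -> dict:
    """Turn 'environment=test,team=db' into {'environment': 'test', 'team': 'db'}."""
    pairs = (term.split("=", 1) for term in selector.split(","))
    return {k.strip(): v.strip() for k, v in pairs}

def match(labels: dict, selector: str) -> bool:
    """True if every key=value pair in the selector appears in the labels."""
    wanted = parse_selector(selector)
    return all(labels.get(k) == v for k, v in wanted.items())

clusters = [
    {"name": "rs1", "labels": {"environment": "test"}},
    {"name": "rs2", "labels": {"environment": "test"}},
    {"name": "rs3", "labels": {"environment": "prod"}},
]

selected = [c["name"] for c in clusters if match(c["labels"], "environment=test")]
print(selected)  # ['rs1', 'rs2']
```

With hundreds of clusters deployed, this is the mechanism that lets one query pull back just the test environment, or just one team's databases.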
project, and we have some really large organizations testing this out and trying it out. Some things we're working on in phase two: advanced backup management, so if I'm creating thousands of backups, what do I do in terms of scaling the management of those backups? That's a problem we're looking at solving. Also thin cloning of databases using different kinds of storage technologies. Rapid data ingest is another: if we can apply operator scaling towards rapid processing of thousands of input files, that's something we think is interesting to look at. A graphical user interface: people would love to see that, as opposed to this beautiful command-line tool. And then advanced security: we think we can do things from a security point of view in terms of applying SQL security policies across large numbers of Postgres clusters, and we think we can do that with the operator. An initial version of that is actually already built in: you can create and apply a SQL policy across any number of Postgres clusters with one command. So if you have any interest in this topic at all, check us out at booth 603; you get a hippo, you can talk to us, and you can ask us any kind of questions you want about Crunchy Postgres, the operator, and things you can do with these projects. They're both open source as well, so they're very accessible for you to just go download, try out, and play with. We sell professional services and support around these, and we also do training for customers on it, for enterprises needing support; that's our business model. But these are really exciting projects. We think the operator technology really is the way to go in terms of advanced automation, especially if you have hundreds and hundreds of containers to manage; we think that's where the sweet spot is. So for things like dev, test, and QA, if you've got lots of databases that you need to provision and manage, we think something like the Postgres Operator is where you can solve that problem,
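To illustrate the "one policy, many clusters" idea mentioned above, here is a toy sketch of fanning a single SQL policy out across a list of clusters. The `apply_to_cluster` function is a stand-in; a real implementation would connect to each cluster's primary and execute the SQL there.

```python
# Toy sketch of fanning one SQL policy out to many clusters.
# apply_to_cluster is a stand-in for connecting to each cluster's
# primary and running the SQL; the policy text is just an example.

POLICY_SQL = "REVOKE ALL ON SCHEMA public FROM PUBLIC;"  # example policy

def apply_to_cluster(cluster: str, sql: str) -> str:
    # Stand-in for executing sql against the named cluster's primary.
    return f"applied to {cluster}"

def apply_policy(clusters: list, sql: str) -> dict:
    """Apply one policy to every cluster, collecting per-cluster results."""
    return {c: apply_to_cluster(c, sql) for c in clusters}

results = apply_policy(["rs1", "rs2", "rs3"], POLICY_SQL)
print(results)
```

The value of doing this through an operator is that the set of target clusters can come from a label query rather than a hand-maintained list.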
and there's our contact information. Feel free to email us or ping us; we love talking about this stuff and working on it. We can also talk in more detail about the big roadmap for what we're going to do down the road with the operator. And thank you very much.