As you mentioned, my name is Gavin Roy. I'm the VP of Architecture at AWeber Communications. I've been using Postgres for nearly 20 years at this point, which is kind of crazy when you think about it. I'm going to share a story with you today. It's not a cautionary tale, but like any good story, it's got ups and downs. It starts with the cloud. If you've been in the internet industry for some time using Postgres, you've probably seen the meme: the cloud is just somebody else's computer. That's the cynical view of what the cloud is. On the corporate marketing side, the cloud is push-button easy, right? The cloud takes away all the things we have to care about as engineers or DBAs or technology leaders. We just push a button and everything is done for us. But like many things, there's a lot more to it than that. On one side we have the cynical view that we're just using somebody else's computer; on the other, it's magical and easy; and there's a lot of gray area in between. So when I started thinking about how to use resources in the cloud, a lot of it was appealing, as I'm sure many of you have found. There's the idea of hands-off management: you don't have to worry about your data center, infrastructure management, purchasing, provisioning, inventory control, or dealing with the deprecation of hardware. On the monetary side, you don't have CapEx to deal with, only OpEx. You don't have to deal with server failures or hardware failures; you don't have to call support when a DIMM goes bad or a motherboard goes bad; you don't have to deal with drive failures in your storage. From a database perspective, this is really appealing. Having managed large quantities of large databases in a 24/7 operations environment, the idea that I don't have to worry about taking a database offline to deal with issues, that it's all done for me automatically in the cloud, is pretty appealing. Then there's automation and deployment: I don't have to worry about configuration management and package management at the operating system layer. I don't have to worry about backups; those are done for me. I don't have to worry about upgrades; upgrades are done for me. It's all magical in the cloud. Failover: if there's a hardware problem in RDS and I've set up a multi-AZ failover scenario, it's handled automatically, and if I wrote my applications right, they just fail over and everything works. It's great. And then, of course, from a business perspective, for anybody who worries about the bottom line, about service availability, about problems, about staffing, having technical support available with a less-than-an-hour SLA for getting a response to a problem is pretty appealing. There's a lot that we do from a day-to-day perspective in infrastructure management, as DBAs, technical resources, system administrators, programmers, whatever, that goes into managing the infrastructure side of providing the services we provide in our jobs, whether it's figuring out why I'm having latency in my database, whether it's related to I/O, or whether I purchased the right kind of DAS or SAN or whatever the technology is. We spend a lot of time dealing with technology outside of Postgres itself. And so moving to the cloud and having all of these areas covered for us is pretty appealing.
So if it's so appealing, there's gotta be a catch, right? And that's where the story starts. In my current role, I wasn't hired as a DBA; I wasn't hired to manage the databases from a day-to-day perspective. I just happened to have been working with Postgres and in the Postgres community, and to have been pretty knowledgeable in the Postgres area, for some time. When I came in, there were DBAs on staff, people who had expertise in Postgres, who knew all the ins and outs just like I did, people I could work with and tell, here's the path I want to take us from an architectural perspective across the company, and we could go in that direction. So it wasn't my job to be on call 24/7. It wasn't my responsibility to deal with database problems, until the day I came into work and found they had all gotten better offers from somewhere else that lured them away at the same time. That really sucks. And being the person in the company who actually knew the most about Postgres and had operational experience with Postgres, all of a sudden I was on the hook for it. That was January of 2016. Before I go into what happened, let me tell you about what we use Postgres for in this particular area, the area I'm going to speak about. We're an email service provider. If you're not familiar with AWeber, you may be familiar with Constant Contact or MailChimp or others in the space. Basically, we provide a service that allows our customers to efficiently and effectively market their services, or whatever it is they're doing, via email. They're not spammers. In fact, we have email deliverability experts, one of whom co-authored the CAN-SPAM Act. So we feel very strongly about what email marketing is and how it should be done. As part of email marketing, we allow our customers to send out large quantities of email, and of course people who do that care about analytics. Just as somebody running a web property cares about their analytics and might use a tool like Google Analytics to keep track of visits and engagement with the site, we provide tools for our customers to keep track of the engagement that the people they're marketing to have with the content they're sending. And that's a lot of data; it accumulates pretty quickly, depending on customer size. So what had been developed over time was a sharded solution that ran on 10 or 20 physical servers running Postgres 9.0, at least at that time, sharded so that there was a customer per database. All told, it was about 15 terabytes of data. And it was deployed and managed using Chef. Configuration management tools are great, but when you're dealing with failover and managing your Postgres infrastructure, they may be a bit too much overhead: if you have to change roles within Chef and manage things within your configuration management to make hot standby failovers actually happen, and change IP addresses on the fly on nodes, it can be a bit unwieldy. Needless to say, that was a somewhat painful stack to deal with, and the servers were aging. They were out of warranty and starting to fail; drive failures were starting to accumulate. And here I am. Up until this point in this job, I had not been on call at all, which for me was awesome, because in my previous job I was on call pretty much 24/7 for a very long time.
Now I'm on call, I have hardware failing, and I can make a recommendation: we're gonna buy new hardware and fix this, or we're gonna go to the cloud, because as I talked about before, the cloud is magical. So I had a bit of a problem: how do I do this? I've got all this data sitting here in my data center on these servers, and I've got to get it over to AWS in a way that works. I could use logical replication; I could put something like Slony in place to copy all of the data across. That's kind of difficult because RDS, if I'm taking the RDS route, is fairly opinionated about what you can and can't do, so I'm somewhat limited in my options for getting data from my data center to AWS. Fortunately, we had built a system at that time for pipelining our data into our sharded databases that used RabbitMQ. In our normal operating environment, we have these different data sources, things that do click and open tracking, collect the metrics of what messages were sent to whom, those sorts of things. They went through distribution queues that fed the different databases within Postgres. So the migration was pretty easy. We basically flipped a flag for a customer and said, hey, you're inactive while we're doing this migration. All their data accumulated in the queues, and then we could move that customer, basically dump and reload their database into an RDS database. Once their data was in the new RDS database, we flipped a switch and all of their queued data that had yet to be written to Postgres got written to Postgres. That was pretty easy, actually; probably one of the easier aspects of taking all this data and moving it across. Now, it's not fast, right? There's a reason why Amazon has, I think it's called Snowball, where they send you a piece of hardware that you plug into your network, load all your data into, and ship back to them: it's actually faster to transfer large quantities of data physically than over the wire. So one month later, we're 100% migrated to RDS. All of our customers are up and running, the site's working, the clouds are parting, my stress level has gone from up here to down here. I don't have to worry, because it's all managed for me, right? It's pretty awesome: automated failover. While we were migrating we did run into issues, but it automatically failed over for us. They promised that, and that's what it did. It was really easy to upgrade. We started out on Postgres 9.4; when a new point release came out, I went to the little drop-down box, picked the next version, and hit upgrade. And what happened? It basically took a snapshot, copied all the data over, upgraded a node, failed over to that node, and brought everything up, with really minimal amounts of downtime. I had point-and-click automated failover; that was really awesome. The default settings just worked, right? I went and looked at the settings and said, this seems reasonable. I'm going from 9.0 to 9.4, they seem to have things tuned, and they're the ones presenting the service, so for me to go in and question why they have certain things set certain ways probably doesn't make a lot of sense. Backups are happening every day. Boxes are being monitored; CloudWatch is making sure that everything is up and running the way it's supposed to be. But then there's some not-so-awesome stuff.
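To make that flow concrete, here's a rough sketch of the per-customer dump-and-reload with the queueing flag around it. The helper names, hosts, and connection details are illustrative, not our actual tooling; the pg_dump/pg_restore invocation is the standard one.

```python
# Hypothetical sketch of the per-customer migration described above.
# mark_customer_inactive/active stand in for whatever flips the flag
# that lets events accumulate in RabbitMQ instead of being written.
import subprocess


def mark_customer_inactive(customer_id):
    ...  # placeholder: set the flag the queue consumers honor


def mark_customer_active(customer_id):
    ...  # placeholder: point consumers at the new database and resume


def migrate_customer(customer_id, dbname):
    # 1. Stop writes: events for this customer now pile up in the queue.
    mark_customer_inactive(customer_id)

    # 2. Dump from the old shard and reload into RDS. Custom format
    #    (-Fc) lets pg_restore reload with parallel jobs (-j).
    dump_file = f"/tmp/{dbname}.dump"
    subprocess.run(
        ["pg_dump", "-Fc", "-h", "old-shard.dc.internal",  # hypothetical host
         "-d", dbname, "-f", dump_file],
        check=True)
    subprocess.run(
        ["pg_restore", "-j", "4", "-h", "analytics.rds.example.com",  # hypothetical host
         "-d", dbname, dump_file],
        check=True)

    # 3. Flip the switch: queued events drain into the RDS database.
    mark_customer_active(customer_id)
```

The nice property of this design is that correctness doesn't depend on how long the copy takes; the queue absorbs the writes, which is why a slow per-customer transfer was acceptable for us.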
How many people here have actually used Postgres on RDS? Have you actually tried to get at the log data? There's the UI, right? The UI kind of sucks. Getting at and reading log data in their UI sucks, so you have to use their command line tool, and their command line tool is basically a multi-step process to get at your log segments. You have to find the logs first; it's not like you're dealing with a file system, you're dealing with an API. Yeah, getting at logs sucks. If you want to use something like pgBadger or pgFouine to manage your logs and deal with them in an automated fashion, it's not great. No real superuser access, right? The Postgres default superuser: yeah, you're not the superuser anymore. There's an rds_superuser role that supersedes your ability to do things, and as the postgres user you can do some things; you just can't do everything. Role management: in our organization, we used LDAP to manage all of our access. Yeah, you can't do that. pg_hba.conf: if you're used to using that to put in your ACLs for who can get to what and how they can auth, I want this user to be able to use trust auth because I know the host they're coming from is solid using this role and I've got everything set up securely so I don't have to worry, and I want this one to have password-based auth, all the things that we do in pg_hba.conf, yeah, that's gone. Nope, can't do that. Monitoring's not Postgres-specific. So yeah, monitoring's there, but I'm fairly opinionated about monitoring. I want to know about index bloat. I want to know about locks. I want to know about the performance of what's going on inside Postgres, and at least at that time, the monitoring solution didn't care about those things. It was machine-level monitoring: what are my IOPS, what does my CPU look like, how's my memory utilization? That was kind of a bummer. So it was like, yeah, I can have CloudWatch over here for some of the stuff I care about, and now I have to have another solution for monitoring the other things I care about inside the database. And there are a few of you here who actually know me and know that giving up low-level control of what's going on is really difficult for me. I have trust issues or something like that. So we made it through this period, and all the things that were being promised were happening, and we knew that there were trade-offs. We basically said, you know what? Okay, we don't have anybody here but me. I don't know if anybody here has ever tried to hire somebody in the Postgres space, but it's kind of hard. It takes a lot of time, and getting people to relocate, especially if you're not in a major tech area, is tough. So moving into the cloud makes a lot of sense. Then 28 days later happened. I was actually in a good place. I'm like, okay, all the stuff has moved over. I don't have to stress out about this. Now I can look at our other Postgres databases, which are a travesty in how they're set up, the schemas and the operational management, and start addressing those issues. I can actually do my other job, my real job, the thing I got hired for, and work with the various teams to start planning our next big product and how we're gonna implement it. And I got paged. The page was that our queues for writing data to Postgres had started to back up, and customer data for a particular server was stale. I got bit by transaction ID wraparound, the maximum frozen XID.
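For a sense of what that multi-step log retrieval looks like through the API rather than a filesystem, here's a minimal sketch using boto3. The describe/download calls are the actual RDS API; the instance identifier is made up. You page through each file with a marker and stitch the portions together, for example to feed pgBadger afterwards:

```python
# List and download RDS Postgres log segments via the API.
import boto3

rds = boto3.client("rds")
instance = "analytics-shard-01"  # hypothetical instance identifier

# Step 1: find the log files for the instance.
files = rds.describe_db_log_files(DBInstanceIdentifier=instance)

# Step 2: download each one in portions, following the pagination marker.
for f in files["DescribeDBLogFiles"]:
    marker, more = "0", True
    local_name = f["LogFileName"].replace("/", "_")
    with open(local_name, "w") as out:
        while more:
            chunk = rds.download_db_log_file_portion(
                DBInstanceIdentifier=instance,
                LogFileName=f["LogFileName"],
                Marker=marker)
            out.write(chunk.get("LogFileData", ""))
            marker = chunk["Marker"]
            more = chunk["AdditionalDataPending"]
```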
Basically, Postgres went into a read-only mode because autovacuum couldn't keep up with the velocity of data we were writing to the database, and I had used the default settings. Now I thought, man, I really screwed up here, because maybe I got instances that were too small. The default autovacuum stuff should work. I had compared the settings against our autovacuum usage before even going into this; I knew that this thing existed. What did I do wrong? Well, let me fix it. I knew that the way you fix this is you take Postgres, put it into single-user mode, and force a vacuum. You get the vacuum done and everything comes back online. But I'm in RDS and I can't fix it. I don't have access to Postgres at that level. I can't SSH into the box. I can't do anything. Okay, kind of screwed. So it's time to open a support ticket. I've got business-level support; it's 24/7, one-hour response. Okay, let's wait. I get a level-one support guy. I tell him what my problem is, and he tells me, yeah, that sucks, I'm gonna have to escalate that. So I gotta wait. I'm still waiting. Customers are not getting their data. My boss is asking, when's this going to be fixed? His boss is coming to me saying, did we screw up? We've had customers without analytics data for six hours at this point. What did you do? So now I gotta pivot. I'm still waiting. The RDS support team comes back to me and they say, yeah, we can fix this for you. What we need to do is take your database offline and vacuum it. And I said, thanks, that's what I said I had to do in the original ticket, the thing I can't do because I don't have access, because it's a black box and you guys deal with it for me, except when it comes to actually detecting problems like this before they occur. As you can imagine, I was kind of angry. And I said, well, that's great. Okay, so you guys are gonna fix it for me. How long is it gonna take? Don't know. You've got a lot of data in there; we can't forecast exactly how long that's gonna take. Okay. And I'm getting a lot of pressure, and by the way, we've now moved from day one into day two. So do I wait for support to fix it? No, I'm gonna restore from a snapshot, vacuum, and we're back in business. Awesome. Except restoring from a backup takes a long time. And when I say it takes a long time: I started the process, and three hours later I said, you know what, I can't do this. I went into it thinking ZFS-style, low-level snapshots: they've got the EBS volume, stuff's just gonna copy over, maybe it'll take 30 minutes, but I'll be up and running. In fact, I left it running just to see how long it would actually take to restore the snapshot: two days. So somewhere along the line, as you can imagine, with the pressure to get this back online mounting, I decided the right thing to do, since I hadn't let the RDS folks take my database away, was to stand up a new instance, go back to the process I used before, and migrate databases off of it. At least it's read-only, right? I can still get at the data that's there. So I marked all those customers as offline, stood up a new instance, and started copying data in. But before that, I wanted to make sure that I wasn't gonna run into this situation again.
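The check that would have shown this coming is a one-liner against pg_catalog. Here's a hedged sketch with psycopg2; the connection string is made up, but age(datfrozenxid) and autovacuum_freeze_max_age are the real knobs:

```python
# How far is each database from the wraparound horizon? When
# age(datfrozenxid) approaches autovacuum_freeze_max_age, autovacuum
# kicks off aggressive freezing; near ~2 billion, Postgres refuses
# writes, which is what happened to us.
import psycopg2

SQL = """
SELECT datname,
       age(datfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max
  FROM pg_database
 ORDER BY 2 DESC
"""

conn = psycopg2.connect("host=my-rds-endpoint dbname=postgres user=master")  # hypothetical
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for datname, xid_age, freeze_max in cur.fetchall():
        pct = 100.0 * xid_age / freeze_max
        print(f"{datname}: {xid_age:,} ({pct:.0f}% of autovacuum_freeze_max_age)")
```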
Now, I have to admit I'm partially to blame for this, because I went into it thinking, you know what, they're monitoring the box for me. I've got a lot on my plate. I'll get around to putting in checks to make sure all the stuff inside Postgres is okay as soon as I get through the other things on my plate. So we ran for about a month without internal Postgres monitoring, and that was my fault; I have to take the blame for that. Had I been monitoring, I would have seen that that value just kept ticking up and up and up and wasn't being taken care of. So obviously we learned a few lessons from this. Waiting sucks. And that loss of control, where I'm forced to wait, doesn't sit well with me. That was probably the most difficult thing about all of this. From a DBA perspective, giving up control and having to wait on other people is hard when my personality type is that I like to gain trust in the people who are solving problems for me by having them prove they're trustworthy. Trust is earned, right? And I'm waiting on Amazon and their Postgres experts, whom I don't know personally, to fix a problem for me, potentially, and I have no idea if they're going to do it. To be honest, the guy I was talking to at the first level didn't give me a lot of faith that they were actually going to be able to fix the problem, since I had to explain to him what the problem was, why it existed, and how to fix it. The second lesson is that I need to be monitoring things. I can't rely on Amazon to make sure that everything is good. RDS Postgres is not magical; it's Postgres. I'm giving up control in certain areas, but I still need to be a good DBA. I still need to be a good systems administrator. I still need to make sure that everything is working the way it's supposed to work. So what this slide shows is that we're now collecting this data and shoving it into a few different places, one of them being Graphite, which is what this is showing you, tracking that value over time. When I went to make this slide, I actually wanted to show you the event itself, but I couldn't: A, because I wasn't recording it at the time, and B, our retention policy had dropped the metrics by then; the data didn't go back that far. I mentioned before about accepting the default settings they had. As somebody who's run Postgres for a very long time, I'm used to going into postgresql.conf, and when I set up a server, I go in and do certain things myself. I know how to allocate memory and how to tune the configuration settings. And again, this is on me. I thought, I'm gonna pick a sized instance; they've got parameter groups; they've gone through and done the work to figure out whether Postgres is gonna run well in this environment with the default tuning they've set up. And that's not the case. Maybe it works from a naive approach for a small application, but when you're dealing with the transaction velocity we were dealing with, and databases of the size we're dealing with, I actually needed to go in and deal with those settings, and it sucks. You go into this page and you've got a nice web UI for managing what would be postgresql.conf, except you can't touch everything in there. You can touch the things they allow you to touch. And you hit save changes, and once you save the changes, you've got to restart the database instances.
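This is roughly the shape of the collector we ended up with: a small Python loop that polls the value and pushes it over Graphite's plaintext protocol. A minimal sketch; the Graphite host, metric path, and connection details here are made up:

```python
# Minimal XID-age collector feeding Graphite's plaintext protocol
# ("metric value timestamp\n" over TCP 2003). Hosts, credentials, and
# metric paths are illustrative.
import socket
import time

import psycopg2

GRAPHITE = ("graphite.internal", 2003)  # hypothetical Graphite host
INTERVAL = 60                           # seconds between samples


def send_metric(path, value, timestamp):
    line = f"{path} {value} {int(timestamp)}\n"
    with socket.create_connection(GRAPHITE, timeout=5) as sock:
        sock.sendall(line.encode())


conn = psycopg2.connect("host=my-rds-endpoint dbname=postgres user=master")
conn.autocommit = True  # read-only polling, no transaction needed

while True:
    now = time.time()
    with conn.cursor() as cur:
        cur.execute("SELECT datname, age(datfrozenxid) FROM pg_database")
        for datname, xid_age in cur.fetchall():
            send_metric(f"postgres.rds.{datname}.xid_age", xid_age, now)
    time.sleep(INTERVAL)
```

With the values in Graphite you can alert on a threshold well below autovacuum_freeze_max_age, which is the early warning I didn't have.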
It's not a reload, it's a restart, and the restarts work by starting off with a snapshot and then failing over; they go through the same process they use for upgrades or their other DR failovers. And that's pretty painful when you're dealing with a lot of databases. Now, that being said, if I were better at DevOps, I could write some applications to do that stuff for me, but I was being lazy and just decided to do it one at a time and wait. So I waited it out. And the last lesson was that I needed to read the docs. I went into this thinking, you know what, I know Postgres really well. Heck, I got the job I had before this one based on my reputation within the Postgres community and knowing Postgres well. So what do I need to read AWS's docs about running Postgres for? Turns out they have this little appendix, and when I say little appendix, I mean little; it's not very large. It probably would have taken me 30 minutes of upfront time to read. But with my personality type and the way I deal with stuff, I looked at it and my eyes glazed over, and I said, yeah, I know how to create roles, yeah, I know how to do that, okay, I'll figure it all out. The "Common DBA Tasks" appendix for Postgres on RDS is required reading, because it covers some of the stuff I've talked about; it covers what the pitfalls really are in all of this. So I would say start there. The other documentation is somewhat boilerplate and rote, but that's where the good stuff is. If you know Postgres, that's where the good stuff is. So, one year later, we've been running in production with few hiccups. There have been times where I've watched the maximum XID age grow and started to panic, because I've got everything tuned, but I try to be patient because I've seen this pattern enough times now: oh look, it's going to catch up. One of the problems with the architecture of how we currently have these databases set up is the logic that goes into picking where customers go: we don't know deterministically up front whether a new customer is going to be a 90th-percentile customer who doesn't have a lot of data, or a large customer. We can pick a database and put somebody there, but if all of a sudden they're sending millions of emails per day, they skew one of the databases. This is more an architecture issue on our side than an RDS issue. But I've learned, watching the XID age climb, that it'll eventually get to the point where it gets cleared, and we watch that pretty proactively. It's worked solid for us. One of the nice things we've seen with RDS is that their approach has not been to just put the service out there and let it sit; there's nice incremental improvement. We first went out on Postgres 9.4, and as security releases were made around that time, point releases, they were pretty quick to fail over to them or give us the option to move to them. In fact, in that area, while I'm thinking of it, they've been so kind as to force security upgrades, which is kind of cool. You specify your maintenance window, and when they have a release they're gonna force you to, they force you to it. And again, it's been pretty painless for us. Soon after we went out on 9.4, they put 9.5 support in, and I think within the last three months they've added support for 9.6, so that's nice.
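The automation I was too lazy to write might look something like this: a hedged boto3 sketch that applies a parameter change and then rolls the required restarts across instances one at a time. The parameter group, parameter value, and instance names are made up; modify_db_parameter_group, reboot_db_instance, and the availability waiter are real API calls:

```python
# Apply a static parameter change, then reboot instances serially so
# only one shard is failing over at any moment. Values illustrative.
import boto3

rds = boto3.client("rds")

# Static parameters only take effect on reboot ("pending-reboot").
rds.modify_db_parameter_group(
    DBParameterGroupName="analytics-pg94",        # hypothetical group
    Parameters=[{
        "ParameterName": "maintenance_work_mem",
        "ParameterValue": "1048576",              # kB, illustrative
        "ApplyMethod": "pending-reboot",
    }])

for instance in ["analytics-shard-01", "analytics-shard-02"]:  # hypothetical
    rds.reboot_db_instance(DBInstanceIdentifier=instance)
    # Wait until the instance reports available before the next one.
    waiter = rds.get_waiter("db_instance_available")
    waiter.wait(DBInstanceIdentifier=instance)
```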
They're keeping up generally with Postgres releases. Obviously there's lag time between when Postgres releases come out, at least major releases, and when they add support. But by making sure that they're current, we're able to make sure that we're current, and moving to a new version is really easy. I don't have to deal with pg_upgrade. I don't have to dump and restore. I pick 9.5-whatever from the list, I hit the button, it does its stuff and comes up running 9.5, and 9.6 was the same way. It's pretty cool. It's been largely headache-free. We've been running large amounts of data and it's been reliable. I haven't had instances go away. I haven't had wide jumps in latency as far as application performance; it's been predictable, which was one of my original concerns with moving to the cloud. If I'm on shared infrastructure with other customers, if I'm being time-sliced at the CPU level or I'm on network infrastructure that's being overloaded by other customers, I'll see variation in latencies for the same queries, or I'll see degraded performance over time. So one of my concerns, as a DBA, as somebody who has cared about performance all the way down to the hardware, the RAID cards and the DAS I'm using, was: how do I get predictable performance out of the cloud? And surprisingly, or maybe not, because they have engineered a good solution, that hasn't been an issue. So I've been pretty happy about that. Unfortunately, the pain points are the same. While they are supporting it and making incremental improvements, the pain points I have with RDS are the same. That being said, they released a new feature within the last year called Enhanced Monitoring. And it's kind of weird. It doesn't solve the problem I care about, internal Postgres monitoring, but what Enhanced Monitoring does is take snapshots of the running processes and more detailed OS-level information than was there before. It's a bit of a bridge between not being able to SSH into the box and being able to get operating-system-level information. So I can go in and use pg_stat_activity to find a troublesome query, or a query that's been running for a long time, and use Enhanced Monitoring to go look: oh, look at that, that one thing is pegging the CPU and it's problematic. So that's a nice improvement, but it's still not Postgres-specific. It doesn't give me information about how long exclusive locks have been held, or how many access share locks are being taken, that sort of thing. So it's not a single monitoring solution for me, which I consider a pain point. But again, it's stable. One of the other interesting things is that you can put a lot less thought into how you deal with scaling, because I'm not dealing with the CapEx aspect. Do I need more cores? Do I need more RAM for this box? I just go in, pick the larger size, hit save, it does its failover thing, and now I'm running on newer hardware. That's really nice compared to trying to prove out that HP's latest hardware platform is gonna give me the performance I need, or whether this SAN is gonna provide adequate IOPS in my infrastructure. So, is it the right kind of solution for everybody? I don't think so. I mean, you use the right tool for the right job.
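The pg_stat_activity half of that workflow is just a query. A minimal sketch; the connection details are made up, but the view and its columns are stock Postgres, and the PIDs it returns are what you cross-reference against the per-process view in Enhanced Monitoring:

```python
# Find long-running, non-idle queries in pg_stat_activity.
import psycopg2

SQL = """
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 60)     AS query_head
  FROM pg_stat_activity
 WHERE state <> 'idle'
   AND query_start < now() - interval '5 minutes'
 ORDER BY runtime DESC
"""

conn = psycopg2.connect("host=my-rds-endpoint dbname=postgres user=master")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for pid, runtime, state, query_head in cur.fetchall():
        print(f"{pid}  {runtime}  {state}  {query_head}")
```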
You give up a lot. You give up the ability to install extensions that you might otherwise use. You give up the ability to use lower-level replication solutions that you would otherwise use. You're in a very prescriptive environment for how you use Postgres, and if that prescriptive environment works for you, it can be a compelling answer to the operational aspect of managing databases. So we've been pretty happy with that. And with that, do we have any questions? Yes? Sorry. Go ahead. Regarding moving the Postgres database to the cloud: you have that RabbitMQ sitting there. Does it also move to the cloud, or is it on your premises? We actually use Rabbit in multiple environments. We have it in our data center and we have it in the cloud, and we use it to federate data between the two. So we actually collect some of our data in the data center and some of it in AWS, and we use Rabbit as the centralized topology for how we move data around everywhere. So it's sitting in the same data center or sitting in the cloud? Yep. Suppose you have a private MQ running and you have a local data center... Sure. But that is... Right, so AWS Direct Connect has, for us, been a good thing in that regard: instead of going across the internet, we're going across a private connection that we pay extra for. Tying that with their VPN for failover, in case the line goes down, has been a pretty solid solution from that perspective, and it dropped the latency down dramatically for us. But yeah, latency is a big issue. Like I was saying, if you're in a larger-database scenario, using their physical appliance that you plug in would be a pretty compelling thing. For us, the major issue was: how do I get this data over without impacting a large majority of our customers at any given time? It was okay for us to have a slower transfer speed on a per-customer basis and have it take a longer period of time, as long as we were able to provide an uptime experience for the majority of our customers while that was happening. Obviously, that doesn't work for everybody. In addition, part of my job is forward-facing architecture, but also a bit of archaeology: trying to figure out why the application is the way it is, and what design choices were made where. And this is a pattern I've seen in multiple places: we have a big Postgres instance. It is The Big Postgres Instance. Usually it's called something like db1 or postgres1 or something like that. A lot of our core application was built on that over the last 18 years of the company's existence. That is not a database I can move to RDS. I just can't. The way it's architected, the extensions that are used, the requirements for latency, how it behaves, the hard dependencies: it's not a good choice. Whereas these analytics databases are almost service-oriented in their architecture; it's very customer-specific. Connection latencies are an issue, but by managing and intelligently caching our connections, we're able to get around that. So again, it's really about finding the right tool. If we were 100% in AWS, it might not be as much of an issue, but again, the extensions we use in Postgres that we can't use in RDS would keep us from doing that too. It's not completely downtime-free, but the downtime is seconds, less than a minute. It's a failover.
So basically what they do, as I understand it, and they don't give you lots of detail, though I imagine there's probably a talk somebody from the RDS team has given that explains how they manage what they manage: from a user perspective, it appears that they snapshot, they do work in the background, and they fail you over to the new node with the new data. So it's basically a failover window that you see, and that's pretty short. It is writable, yes. Right, yeah, absolutely. I mean, there's a, I'm totally blanking on the name. What is it, Peter, that Greg wrote? It's Perl? check_postgres? Yes, check_postgres.pl is a great utility knife for gathering that data. pg_catalog has a huge amount of data in it: pg_stat_activity, pg_locks, and so on. You have to be careful about what data you pull out of there, because storing it all can itself require a pretty big database, depending on how much you actually want to keep. One of the trade-offs I've seen there: people like to keep table-specific activity, but if you have too many relations, too many indexes, that can be problematic. But you can basically pull everything, seq scans and heap hits, tuple hits, all that kind of fun stuff, right out of pg_catalog, and there are a number of tools that do that. That's what we basically use: check_postgres; Telegraf, which is a newer collector written by InfluxData; Diamond; and something I wrote in 2007 called Stapler that I'm embarrassed still exists in production. I didn't put it in there; somebody actually downloaded it and installed it, and I'm like, whoa, what's this? For RDS specifically, I ended up writing some Python apps that run queries for the things I specifically care about and pump that into Graphite. First, very sorry to hear about the problem. It has led us to put in some internal monitoring of all customers' transaction ID levels. Oh, no kidding. Well, not just yours; we've had multiple customers in the last year who have run out of transaction IDs and had to do the same thing, so let me say a little bit on that. We now track that internally, and we get notified internally when customers' transaction IDs are increasing too fast, and we reach out to the customers and help them tune their autovacuum. It's not automatic, because it's a little hard to automate that. So, that's one thing. Another thing is, Grant McAlister in his session yesterday on RDS talked about a new extension we'll be adding support for in the next week or two, called log_fdw, which will give you access to the logs inside of your instance. So that eases your access to the logs. Okay. Which you mentioned earlier on. Sure. How about shipping them to S3 automatically? That would be a lot better. Okay. So, we're right now previewing a feature called Performance Insights, previewing it first with the Oracle engine, which gives you an Oracle Enterprise Manager-style view into what's going on inside of your instance: the specific SQL statements, what kind of load they're causing. It comes with APIs, so it's not just a pretty picture, a pretty UI; you can also pull it out yourself. We will extend Performance Insights to all of the RDS engines.
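For the per-table numbers those collectors pull, a query like this works; as noted, be careful on databases with very many relations, since the result set itself gets large. A sketch with psycopg2, connection details made up, views and columns stock Postgres:

```python
# Sequential vs. index scans from pg_stat_user_tables, plus heap
# hit/read counts from pg_statio_user_tables, joined on relid.
import psycopg2

SQL = """
SELECT s.relname,
       s.seq_scan,
       s.idx_scan,
       io.heap_blks_hit,
       io.heap_blks_read
  FROM pg_stat_user_tables   s
  JOIN pg_statio_user_tables io USING (relid)
 ORDER BY s.seq_scan DESC
 LIMIT 20
"""

conn = psycopg2.connect("host=my-rds-endpoint dbname=customer_db user=master")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for relname, seq, idx, hit, read in cur.fetchall():
        print(f"{relname}: seq={seq} idx={idx} heap_hit={hit} heap_read={read}")
```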
By the end of the year. Including, of course, Postgres, but also the others, you know, MySQL, MariaDB, etc. So that's something to look forward to as well. So, like I said, incremental improvement. Yeah. You guys have been pretty good at that. Yeah. So, those are some of the things that have happened. Awesome. So, like I said, we have also started sending onboarding emails if your transaction ID age is about one billion or more, as part of our efforts in supporting you. And I'm sorry if you had the... You know, it's nothing really to apologize for, because you're going into an environment, right? And for me, the key takeaway that I'm hoping everybody goes into this with is that there are trade-offs. Right? I am used to being the guy who goes into the box, who runs iostat -x and attaches strace to a running process to look at it. And I'm giving that up. And yes, there was extended downtime and that sort of thing. But part of the deal is that I don't have to care about how you guys are installing Postgres. I don't have to care about how I'm doing point-release upgrades or even major-release upgrades, because I go to the UI, click a button, and hit save. My problem was going in thinking, hey, I've got a black box, it's magical, and I don't have to worry about it. And the real lesson is: no, as a responsible DBA, I should take responsibility for what's going on inside Postgres and not rely on Amazon or Heroku or whoever the provider is to take that responsibility away from me. Sure, right, right. Sure. So that's pretty awesome. That's great. Any other questions? Yes. It involves spending about $150,000 on hardware, and I was also trying to marry that with the fact that, as a company direction, we've been moving entirely to AWS, and in the new architecture we've been implementing over the last two and a half years, we're architecting away from our current application completely. So part of the trade-off was: do I really want to spend this OpEx and CapEx on additional cabinets and that sort of thing for something I know we're getting rid of? So yes, the escape plan was there, but it was like a plan D. You know, I have the same concerns about EC2. It could have been an option, absolutely. But again, I care about how predictable my IOPS are and that sort of thing, and actually I feel better about RDS than I do about EC2. We run a large quantity of stuff in EC2, and we have nodes that just go away. We architected for that; it does exactly what we want it to do. But we're architecting towards an environment where we deal with cattle and not pets, where that database isn't really important in the overall architecture, and I feel like putting Postgres in EC2 is just moving where I keep my pet: I still have to care about it like it's physical hardware, whereas RDS treats it more like cattle for us. We don't have to care as much. All right, well, thank you very much.