So our next speaker, we're going to turn it over to Mark Porter. He's the GM of RDS, right? So AWS has been a fantastic diamond sponsor for us. When you come back during lunchtime, their booth area is going to be set up over here right in the middle. Find out about that convenience of getting Postgres. It is a very simple way of being able to get it and go quickly. With that, I'm going to turn it over to Mark. Hey, everybody. I didn't pay him to say any of that stuff. I think the message, though, was really, really on target, which is convenience. Everyone today wants things fast. Everyone wants things to just be something you can play with. And at Amazon, we think about that as the speed of experimentation. Now, convenience is one dimension of that. But the speed of being able to stand up something, and then have it fail, and say that was a bad idea, is part of the core tenets of AWS. But that's not at all what I'm going to talk about today. What I'm going to talk about today is some of the business environments that Postgres is living in. And for some of you who are new to the ecosystem and are coming with your companies, I'm hoping this is going to be enlightening. There are a lot of technical talks over the next couple of days. This is not one of those. So AWS is incredibly excited about Postgres. Today, I'm going to talk about what's happening with relational databases, with Postgres, and how AWS is involved, both from a product point of view and even from an open source point of view, which is a pretty hot topic for a lot of people here with AWS. Then I'm going to turn over the stage to Simone Pharr, who is the Senior Vice President and CTO of FINRA, the Financial Industry Regulatory Authority. That's quite a mouthful. And he's going to talk about FINRA's successes and challenges with AWS, RDS, and the cloud. And my slide is, at some point, going to change up there. So what's happening with relational databases in general?
They continue to run the most mission-critical applications of all enterprises. I talk to customers all the time. And believe it or not, contrary to what the NoSQL people would have you believe, they're growing their footprints. The market is dominated by four commercial vendors and three open source vendors. But things are changing. The commercial vendors are running scared. They're being disrupted by great open source technologies, cue Postgres right here. And that disruption is being accelerated, as you can see in the marketplace, by their own business-hostile, customer-hostile behavior. So they're defensive, as people tend to be when they're profitable. Enough so that one of them recently doubled their prices to run their product in the AWS and Azure clouds. So think a little bit about what makes you do that. We believe they did that because they are trying to stop adoption of these clouds while they move on. So, are these vendors right to be concerned? The answer is absolutely yes. In fact, a friend of mine who worked at one of these big companies in a very influential position came to AWS, and he told me that in the six months after joining AWS, he had a bunch of meetings with the same customers he'd met with in the six months before joining. When he sat around the table with those customers as a vendor, they told him how happy they were with the products and how they were going to renew. When he met them again at AWS, he found out that one of their largest strategic initiatives was to get off of those products. Why is that? Why the disconnect? Well, the disconnect is because of licensing fear, of auditing fear. And frankly, for this audience, because there isn't a technically viable solution for a lot of those customers to move their infrastructure to. So they had no choice but to lie. So what's happening with Postgres?
First off, it's getting ready for the enterprise. And when I say the enterprise, I mean the mission-critical workloads of the Fortune 500. 9.6 helped with important features in scaling. 10.0 has an amazing feature plan. As we talk to customers, the things we're really excited about in 10.0 are plan stability, more efficient materialized views, better PL/SQL compatibility, removal of the vacuum penalty, and great performance at even higher core counts. What's even more fun about Postgres is that C-level execs often tell me they've decided to move their database infrastructure to Postgres. I meet with them all the time. I dig in with them and ask what they know about Postgres, and they know almost nothing. It's like, wait, why are you doing that? It's because Postgres has managed to move, in the last three years, from what I would call a developer brand to a business brand. These executives are actually sitting there going, we will pivot our entire infrastructure around getting off things we don't wanna be on and moving to Postgres. And there's another thing. I couldn't have queued up that earlier slide better: 2,221 lines of code modified every day. The pace of innovation in the Postgres community, on that hackers list, is greater than the pace of innovation inside the Oracle database group. I was in that group for a long time. I was at Oracle for 14 years. And it's not because those people aren't smart. They're wonderful, smart people. It's because when you're in a big commercial company like that, you just can't move fast. So what this means is that we're gonna start seeing Postgres innovate faster and faster and faster. And we're seeing that in adoption as well. Adoption has gone superlinear with Postgres. In fact, the analysts we talk to tell us that they're getting more requests about Postgres than they are about either SQL Server or MySQL. Oracle's still pretty high up there. Now here's the bad part.
And here's the part where I want the community to think hard about it. This all sounds great, but it's not perfect. Enterprises need more than just an awesome piece of code. They need vendors who provide security and compliance. They need TCO, total cost of ownership analyses that prove how much cheaper Postgres is. They need ISV support. That's probably the biggest thing that's missing: all of the world's most serious software packages run on Oracle and SQL Server. Very, very few of them, with the exception of a couple of big leading vendors like Infor, run well on Postgres. So we're not gonna get Oracle E-Business Suite running on Postgres. I don't think that's gonna happen anytime soon. But we need to focus on every other ISV in the marketplace in order to make this community succeed. Not only that, despite all of the awesome people in this room, there is a true shortage of professional services skills. We meet with enterprises all the time who say, if you could put 50 people on the ground to help us convert to Postgres, we'd love it. And we're like, we don't have the 50 people. No one out here has the 50 people. Let's make that happen. And if some of you are in that business, realize that your skills are in very high demand. The other thing is customers need 24/7 support. They need it to be run like an enterprise. And they love to use the concept of one throat to choke. I kind of think that's negative, but that's what we hear from customers. And then finally, the community needs to accept that in order to become the most popular relational database in the world, large companies have their place in this sandbox too. This includes the big five system integrators. It includes companies like Amazon and AWS. We can provide services and confidence that augment the awesomeness of Postgres. At the same time, you have to imagine the Fortune 50, who have an average relational database footprint per company of over 100,000 cores.
That's pretty shocking when you think about walking in with this community product. We have to build an ecosystem that works. So what's happening? Capitalism is filling the gaps. Feeling the lack of an ecosystem, one is being naturally and healthily created. For example, companies like AWS, EnterpriseDB, OpenSCG, Data Agrite, Citus: we're all providing those services, and it's happening. In addition, custom forks of the code are being built for people who need custom work. Amazon Aurora is one of them. Postgres-XL, Redshift, et cetera, et cetera, are all forks of the code. And that's happening; that's a good thing. Some of these are gonna make their way back into the community open source tree; some aren't. But they all raise the brand of Postgres in the marketplace. And lastly, everyone we talk to is in the migration business. It seems like there's so much migration work to do, people just need to go out there and build those businesses. So now let's focus a little bit on what's happening specifically at AWS. We start with our customers. They want Postgres everywhere. And they want it on EC2. I'm not here to sell you RDS or Aurora; I'm here to talk about Postgres. They want it on EC2; there's a huge footprint on EC2, probably larger than that on RDS. They're gonna want it on Amazon Aurora, we hear. If I can just get the product GA'd out there sometime. But customers have to get there, and that's hard. So we developed the AWS Database Migration Service and the Schema Conversion Tool. With the Schema Conversion Tool, customers can run a simple assessment of their databases and know how hard it's gonna be to convert SQL Server, Oracle, or other databases to Postgres. And because Postgres compares so favorably with the other targets they could choose, like MySQL or MariaDB, every time we take SCT into an account, we find that their adoption of Postgres goes up, which is kind of fun.
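To make the Database Migration Service side of this concrete, here is a small sketch of the request for a full-load-plus-change-data-capture replication task, which is the mode that keeps the source live until cutover. The ARNs, schema name, and task identifier are placeholders invented for illustration; only the commented-out boto3 call would actually touch AWS.

```python
import json

def build_migration_task(source_arn, target_arn, instance_arn):
    """Build the parameters for a DMS create_replication_task() call."""
    # Table mappings select which source objects DMS migrates.
    # Schema name "APP" is a placeholder for this sketch.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-app-tables",
            "object-locator": {"schema-name": "APP", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": "oracle-to-postgres-task",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc copies existing rows, then streams ongoing
        # changes, so the source keeps serving traffic until cutover.
        "MigrationType": "full-load-and-cdc",
        "TableMappings": json.dumps(table_mappings),
    }

params = build_migration_task(
    "arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    "arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    "arn:aws:dms:us-east-1:123456789012:rep:INST",
)
# With boto3 installed and credentials configured, the task would be
# created with: boto3.client("dms").create_replication_task(**params)
print(params["MigrationType"])
```

The choice of `full-load-and-cdc` over a plain `full-load` is what the "minimal downtime" claim rests on: the full load seeds the target while CDC replays changes that arrive during the copy.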
In addition, we see that once you've done the Schema Conversion Tool, you can use the Database Migration Service to move over with minimal downtime. So we're helping people do that. Now here's a key topic, which is compatibility and lock-in. Customers are just as concerned about lock-in from AWS as they are from anybody else. Lock-in is just so last century, I like to say. Those vendors who promote lock-in are not gonna succeed in the new world. Because of that, both of our Postgres solutions, RDS and Amazon Aurora, are 100% application-compatible with Postgres. Sure, you spin them up a little bit differently, but the applications run the same. Just like EnterpriseDB, Citus, BDR, and all these other forks, we forked the code for Amazon Aurora. But the difference with a lot of the stuff we've done is that our fork is 100% compatible at the application layer, and we intend it to stay that way. So you can read the screen while I'm talking. The Schema Conversion Tool helps you move your objects, like tables, indexes, constraints, and procedures, from Oracle to Postgres in this example. And what this did is actually unblock a customer. They found that they could move databases they had no clue they could move. Now, to be really clear, they also found they had some databases which were really hard to move, and they had to focus different effort on that. AWS charges nothing for the Schema Conversion Tool. You can all download it and use it today. You can take it into customers running Oracle. You can run it on your own databases. You can figure out what's going on and get the assessment report. Now I'm not gonna go over the screen in detail, but basically this is how easy it is. You have an Oracle database on the left (up there is the Oracle label), you have a Postgres database on the right, and in the middle it goes through and gives you a list of every single place where your database is not compatible moving from Oracle to Postgres. Every single one.
It then has an editor if you'd like to edit, and you can edit the text right there and create new procedures. If you have a couple hundred procedures, you'd use this tool. If you have 10 or 20,000, you'd probably do your editing somewhere else. So one thing I wanted to just mention here is that Postgres has an incredibly healthy fork history, and most people are surprised when they see all these different forks. Of the 42 forks that Postgres has on record, 23 of them are registered as still being available, which is kind of amazing. So now let's talk a little bit about managed Postgres. I'd actually pulled this slide from my presentation, but when that survey was done earlier, I threw it back in. Since 2009, we've been building RDS to help customers manage their relational databases. We launched RDS Postgres in 2013, and as was mentioned earlier, RDS Postgres, Redshift, and Amazon Aurora are the three fastest-growing services at AWS ever. Why? Because they're convenient, because they do the job, and that's due to a lot of the work of people in this room. So what else does RDS do on top of Postgres? It'll let you scale your storage, it'll run your backups and your patches, it'll do point-in-time recovery to any second in the last 35 days if someone maybe forgot a WHERE clause in their application. I know that's never happened to any of you. RDS watches all of your instances and automatically rebuilds them if something goes wrong with the hardware or the network, and it has capabilities like texting and all that to let you know what's going on. Moving forward, we're gonna be offering even more high-performance, cloud-native options for those of you who choose to move to Amazon Aurora. So, a lot of you have never seen RDS. It's pretty simple. It's just a pane of glass with your databases in the cloud, from all seven database engines we support, in any of our regions. A single pane of glass.
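The point-in-time recovery story above can be sketched through the API. This is a minimal illustration, assuming a modern boto3; the instance identifier and timestamp are invented, and only the commented-out call would perform a real restore.

```python
from datetime import datetime, timezone

def build_pitr_request(source_id, restore_time):
    """Parameters for rds.restore_db_instance_to_point_in_time()."""
    return {
        "SourceDBInstanceIdentifier": source_id,
        # A restore creates a NEW instance; the original keeps running,
        # so you can diff the two and copy back only what was lost.
        "TargetDBInstanceIdentifier": source_id + "-recovered",
        # Any second within the backup retention window is a valid target.
        "RestoreTime": restore_time,
    }

# Say someone forgot a WHERE clause at 14:02:11 UTC; restore to 14:02:10.
req = build_pitr_request(
    "orders-db",  # hypothetical instance name
    datetime(2017, 3, 29, 14, 2, 10, tzinfo=timezone.utc),
)
# With boto3 and credentials configured:
#   boto3.client("rds").restore_db_instance_to_point_in_time(**req)
```

The key design point is that recovery is side-by-side rather than in-place, which is why a fat-fingered DELETE doesn't have to mean an outage on the primary.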
You can do updates, backups, patches, see the status of everything. You can monitor what's going on on your databases in real time. And here's a key thing: this all goes into a data store, a really large one given the number of instances we have, and then you can pull it via API, or we partner with companies like Datadog, who are awesome, and you can integrate it into your infrastructure. Here's a sneak peek at a new feature we're working on. Performance Insights will launch in Q2 (again, I need to get back to work and get it launched), and it'll let you look at every SQL statement run against your database, with 35 days of free history: every SQL statement run, how it performed, and what was going on on your database at the time. So, for those of you who don't know why your database is slow, you can look back in the past and notice that on Wednesday at 2 p.m. something changed, and you go look at your pipelines and you go, wow, that's what happened: it was those application guys who deployed a new version of the app. Performance Insights is gonna first roll out on Aurora for Postgres, and then we're gonna be rolling it out across all the other databases in the RDS fleet over 2017. So let's get to an important topic, which is: how does AWS contribute to Postgres? AWS wants to help the Postgres community. Sometimes we do it in the traditional ways, and sometimes we don't. First, we focus on building the Postgres brand. With two of the fastest-growing services based on Postgres, we talk about it as an amazing piece of software all the time. We lead with it in executive discussions, because one of the rules that we use is: if you haven't moved your databases to the cloud, you haven't moved to the cloud. If you have moved your databases to the cloud, you have. That goes all the way up to Andy Jassy. We directly contribute by sponsoring conferences like this one, all over the world.
Grant contributes his wisdom by giving talks in pretty much every time zone he can, all the time. We contribute bug fixes. We've even found a couple of security problems that we've contributed back to the community. And as you can see from the footnote (well, maybe you can see it), more open source contributions are coming, and we're working hard on that together. We also contribute credits, and right now we're doing a really, really small thing there: we are actually contributing to the build farm. This just stood up about a month ago. There's a build farm animal running on AWS. And we're working with the community to figure out how we can contribute additional resources. We'd be happy to run all the testing for the community. We'd be happy to run scale testing. We'd be happy to do whatever. Come talk to us afterwards. So thanks for your time today. I hope you've seen that AWS is very passionate. Now it's time for Simone to come up, and he's from our partner FINRA. What is FINRA, by the way? It's an independent regulatory organization authorized to protect American investors by making sure the broker-dealer industry operates fairly and honestly. Wow, that's quite the mouthful. Simone is the Senior Vice President of Technology. He has a bachelor's in engineering science and a master's in engineering and computer science. And in his degree he was focused, not very surprisingly, on real-time data processing, which I think is pretty cool given what he's doing now. He led the Open Text Corporation to an IPO. He worked with over 100 venture capital firms. And at FINRA right now, he leads all of the development to protect the financial markets, which you guys benefit from every day. Let's welcome Simone to the stage. Let's try it with the lav mic. Can you guys hear me with the lav mic? Yes? Yeah, barely. Is this better?
Okay, so maybe to start with a little bit of an aside, because the first speaker was talking about the ice trucks and so on. I was reminded there was a multi-millionaire that made his money essentially following the ice truck. And actually, oddly, I was thinking about it a couple of days ago, sort of dozing on the train. These trucks would go around and they would deliver the ice to people's houses. And then electrical refrigerators came out. So what this guy figured out was that if he followed the ice truck, he could make sales calls. Because wherever the ice truck went, they didn't have a refrigerator. Now, the capital purchase of a refrigerator was a lot, especially in those days. So what he did is he figured out a leasing plan. He'd go to the house after the ice truck left and say, I'll put a refrigerator in here. Here's the lease cost on it. If you don't like it, I'll take it away. And the lease payments work out to less than what you're paying the iceman every week. So as I was thinking about that, there's sort of an obvious analogy to software as a service and where we've gone, if you think about license fees versus pay-as-you-go. I mean, Richard Stallman figured that out years ago and spawned the open source movement. You guys have figured that out in Postgres, and AWS has made that real and alive for all of us. So it was kind of an interesting metaphor from the first speaker's talk. So let me maybe first start a little bit by talking about what we do. And as Mark said earlier, there are gonna be lots of technical talks. I'll talk about our use case, what we did, why we chose Postgres at a broad level, and what the organizational dynamics were around that. That might be of interest also to this audience.
So FINRA, the Financial Industry Regulatory Authority: we're the largest US regulator of securities markets. Briefly, on the parts that might be interesting to you guys: we get feeds from all of these various exchanges and dark pools and so on and so forth, all the trading venues. All these feeds come in, and then we do data integration and analytics on this, looking for fraud, somebody cheating somebody, or market manipulation. So depending on the volumes in the markets, we get up to 75 billion records a day coming in. This is a larger volume than you would see in IoT or telco or anything like that, because in most of those applications you keep roll-ups or outlier events. We keep everything, and we keep it for seven years, and run the analytics on it. And then the other part of our use case, the major part of our use case, is that we have a force of examiners that goes around to various broker-dealers to make sure that the public is being treated fairly, and they gather data. So the net of it is, it's a very data-intensive data integration and analytics environment that we have. So what we did is, we just finished a program last July to re-platform the bulk of our systems, the critical mass, 90% of our data footprint and the applications on it, onto AWS. That began, call it, four years ago, by the time we did the POCs and the business case, and then a two-and-a-half-year program to get there. And it's been very successful; it really exceeded all our expectations. And if you Google, you'll find we've been very public about it, in terms of presenting at conferences and so on and so forth. So you can find information about that out there. Now, a couple of points, because I understand there would be a lot of people here that are sort of traditional-database, in-house, dipping a toe in the water on Postgres. Should I go to Postgres? What does that mean? And then what does it mean?
Cloud, no cloud, internal, all of these kinds of questions. There's hand-wringing, I think, in some organizations; I see some heads nodding. I'll tell you what our calculus was on this. We wanted to go to the cloud. Why the cloud? And specifically the public cloud, not private cloud, in-house quasi-clouds, or anything like that, because essentially those don't make sense. We can go through why, and if you catch me offline afterwards, I could talk about that. But the main drivers for going to the cloud, and we looked at them all and went with AWS, were, at a high level, that we wanted to redirect our spend from things that don't matter to our business to the parts that are closer to where our business value is. In our case, that's data integration and analytics, versus data centers and cabling and the commodity parts of the infrastructure. So that is a redirection of our spend, and a redirection of energy and focus to where it matters more. That was one of the big motivations. The second big motivation, beyond the clear ones you know, like resiliency being much better in the cloud, was the ability to be more automated and more responsive, which are very difficult, even impossible, things to do in a traditional data center. Having a software-defined data center gives you a lot more responsiveness to the business needs, and to market volumes, in our case, as they go up and come down. So, to give you an example, the elasticity of our environment in AWS: on a Sunday, we might have 5,000 nodes up. This is across Hadoop and all the various things, not just Postgres; it's everything. And then on the Monday, we might go to 50,000 nodes and then come back down. So it's a highly elastic environment, with all of the resiliency and automation that that requires. So the cloud was the way to go, for broadly those types of reasons. And as we looked at this, we took our big data systems and so on; we were in proprietary appliances in-house.
Those went to open source solutions, or the open source family, in which I'll broadly include Redshift. So we have Hadoop, we have Spark, we have Presto, HBase, all of these types of things, running in AWS. That was a major part of it. Again, the idea being to go to commodity where possible. And the other one is that there is a world of innovation going on, across multiple industries, in data analytics and databases. We didn't want to be locked into a particular vendor's integration spend or innovation spend. The amount of innovation that happens across an open source community that has traction is way more. And we talked about the acceleration of innovation in Postgres and so on. We've seen that. So four years ago, we would have a point vendor pitching us: oh, we've got these features, and it beats this thing along these particular performance dimensions. I was like, no, no, wait, let's just wait. And sure enough (for example, it was Hive back then; now we have other technologies), the next version of Hive comes out and it beats that. So that is a big deal for us, along with the commodity aspect and avoiding the lock-in and so on. So open source is big for us, and we went with those. And then there's the relational database part, which is obviously of extreme interest here. We were a very Oracle-intensive shop, a big user. Our relational database use case, leaving out the big data things and so on, is a heavy data integration hub. We have hundreds of various sources. We have replication-based data movement and data integration that happens to present data up for analytics. We also do transactions, for assorted reasons, against that integration hub. So it's a very complex environment that has to be resilient and up all the time and all of that. So in looking to move this and re-platform it in the cloud, the first rule was: no proprietary databases.
Now, wait, why? Well, we'll get into why. It's important because that comes with its own baggage. One is that the numbers get all out of whack if you're paying these license fees for proprietary databases. Other aspects are important too; let me skip for a second to RDS. The rule that we had (everybody has different environments, I understand that) is that we were not going to run it ourselves. We chose Postgres, and we looked at them all, because of the complex SQL support; we do a lot of CTEs and analytic functions and so on. We looked at them, and that was the way to go for us. And we wanted it on RDS. That's where we decided to be. Why? Because we didn't want to take our old in-house processes to the cloud. So we didn't want to be dealing with running a relational database farm, whichever database, on EC2. Because if we're doing that, we're taking all of the sys DBAs (I know there are some of you here; I'll tell you what happens to the sys DBAs in a second) and all of those processes, all of that stuff, to AWS, and replicating our internal environment on what is essentially a colo facility. That didn't make any sense. We wouldn't be taking advantage of the automation, we wouldn't be taking advantage of the multi-AZ that RDS gives you, and essentially the hassle-free, it-just-works, it-scales-up quality where we don't have to worry about that. That all goes away, and we'd still have to worry about all those things, in our decision calculus, if we were running that farm ourselves. So now you can start to see the multiple dimensions that lead to why no proprietary databases. We don't want to take those processes, costs, and anchors with us that hamper our innovation going forward. So those were the decisions: Postgres, RDS, and Aurora, same calculus, same reasons.
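The complex SQL support mentioned here, CTEs plus analytic (window) functions, is the query shape that drove the Postgres choice. The sketch below is illustrative: the table and column names are invented, and it runs the query through Python's bundled sqlite3 (which also supports CTEs and window functions in SQLite 3.25+) purely so the example is self-contained; the real target is Postgres.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (firm TEXT, symbol TEXT, qty INTEGER);
    INSERT INTO trades VALUES
        ('A', 'XYZ', 100), ('A', 'XYZ', 300),
        ('A', 'QQQ', 50),  ('B', 'XYZ', 700);
""")

# The CTE aggregates per firm/symbol; the window function then ranks
# symbols within each firm by total quantity, a typical surveillance
# roll-up where grouping alone isn't enough.
rows = conn.execute("""
    WITH per_symbol AS (
        SELECT firm, symbol, SUM(qty) AS total_qty
        FROM trades
        GROUP BY firm, symbol
    )
    SELECT firm, symbol, total_qty,
           RANK() OVER (PARTITION BY firm ORDER BY total_qty DESC) AS rnk
    FROM per_symbol
    ORDER BY firm, rnk
""").fetchall()

for row in rows:
    print(row)   # e.g. ('A', 'XYZ', 400, 1) ranks first for firm A
```

Doing the ranking in the database, rather than pulling rows out and post-processing, is exactly the kind of workload where optimizer quality matters, which is why the SQL dialect weighed so heavily in the evaluation.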
So what happened then in our organization, and maybe I'll touch on that for this audience, is that the need for operational DBAs and sys DBAs goes way down, because a lot of that is handled by RDS. So what did we do? Three things. And some people didn't want this. They said, look, I'm an expert in this proprietary database, and I'm gonna be the last person standing in the world on that, and I don't wanna do this RDS Postgres stuff. Okay, fine, no problem. But for the rest, one path was to become app DBAs, which a lot of sys DBAs wanted. Look, it's a commoditizing thing being at that layer in the stack, so move up the stack to the app DBA role; that was an obvious one. A major place that people went was the DevOps group. Everything that we do with Postgres and RDS is heavily automated, and typical DevOps people don't know how to do that; they don't know enough about databases. So to automate all of those ticket-based processes so that they're running in the DevOps pipeline, a significant number went there, and they were quite happy. You can't find DevOps people anyway; you gotta hire somebody and train them. We had people in-house who were up to speed, dynamic, and could learn it. Well, why not train our own people? And they got a career lift: now they're DevOps experts out there in this area, and it was a modernization, an update of their skill set. And then the third area that people went to: people who knew how the optimizer worked and what was going on in the database went to working on our big data systems, where that's fantastic. If you know how an optimizer works, you can quickly understand how Spark works, and how query optimization and structuring work in any of the big data systems, because that's a special black art to know. So that's what happened in our workforce as we went there.
And generally everybody got an uplift, in terms of modernizing to that new world. I'm mentioning that because I know that can be a particular challenge in organizations: what do we do with our people? And that can be an attractor for keeping the older technologies, because people are not sure: well, if we cross that ocean, where am I gonna be on the other side? We're not going there unless I can see where I'm gonna be. Well, those are the three places where we re-landed, and that was very positive. So let me wrap up; I'm getting the signals here. The hook is coming out. So that's our story at a broad level, and that's the environment that we have. We've got a very dynamic environment running with Postgres. Very happy we made the move. I didn't talk about security: the enhanced security that we get running on AWS is more than we would have had otherwise, with all of those technologies, but I think you guys probably know about those. I'll leave it at that. Thank you very much.