Alright, thank you everyone for coming on a Saturday morning, nice and early, at this end of Singapore. I live on the other side, so it's a bit of a journey coming over here. My name is Blair Layton, I work at Amazon, at AWS, and I've been there now for about three years and ten months, which in Amazon years is a very, very long time. If you go to Amazon and ask people how long they've been there, a lot of them say less than a year, so I'm practically part of the furniture.

What I do is focus on our database services from a business development perspective. That means that yes, I do selling, but I also help architect solutions, and I work back with headquarters on what we've seen in the APAC market across Korea, ASEAN, India, Australia and New Zealand, saying: these are the roadmap priorities we're hearing from our customers, we need these features in the services, or new services we don't have. But what I want to talk about today is obviously Postgres related, because of the Postgres event we're attending. We're going to be talking about Amazon Aurora and the new edition we're introducing that is now in preview. I'm going to go into quite a bit of technical detail about how it works and how it helps Postgres run very well.

So why would we go about building this anyway? If you look at traditional relational databases, you've got this wonderful stack over here with SQL at the top, transactions, a caching tier, some logging, and then of course storage at the bottom, and it's a monolithic stack. Now, if you were trying to scale this, traditionally most people just get a bigger box, right? There are a few other approaches people have come up with to get around that limit, because you've only got so much money, or you simply can't buy a bigger box than what's on the market.

The first is the sharding approach. This is typically what you see in a startup environment, where traditionally Facebook and others would take a database, create multiple shards, and split the data across them. Then you need all of the application logic to know where the data lives across those different databases. The next one is shared-nothing, where you take the data and spread it across different nodes of the database, but it's the SQL layer, rather than the application, that knows where the data lives and which node to execute a query against. So in the first one the routing logic sits in the application, in the next one it sits in the SQL layer, and then here you've got what's called shared storage. Does anyone know of a vendor that does this approach? There are two, really. One is Oracle RAC and the other is DB2 on the mainframe. Yeah, so DB2 and Oracle are basically the two that take this approach, and I think it traditionally works best on the mainframe; that's where its history comes from. If you look over here, there are many different vendors doing shared-nothing, and sharding is really up to you; you can do that with any database.

Now, if you were going to build a relational database today, would you take those same approaches? The answer is clearly not really. What we would do instead is look at a service-oriented architecture for a database, and essentially that's what we are building with the Aurora platform.
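To make the sharding approach above concrete, here is a minimal sketch of application-level shard routing, assuming hypothetical DSNs and a users table keyed by id; the point is that the application, not the database, decides where each row lives.

```python
import psycopg2

# Hypothetical shard map: each shard is an independent Postgres instance.
SHARD_DSNS = [
    "host=shard0.example.com dbname=app",
    "host=shard1.example.com dbname=app",
    "host=shard2.example.com dbname=app",
]

def shard_for(user_id: int) -> str:
    # The application, not the database, decides where the row lives.
    return SHARD_DSNS[hash(user_id) % len(SHARD_DSNS)]

def get_user(user_id: int):
    with psycopg2.connect(shard_for(user_id)) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
```

The hard parts of sharding, rebalancing, cross-shard queries and multi-shard transactions, all live in the application under this approach, which is exactly the burden the other architectures try to remove.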
So if you look at the logging and storage here, this is the main focus of what we're going to be talking about today: we've actually created a storage service for databases, and that's a very different approach to how databases work today. Then we're backing that up into Amazon S3, the Simple Storage Service. And over here we're leveraging a number of Amazon services to build a control plane which manages it all. We'll go into more detail about some of these pieces.

As some of you might know, in 2014 we launched Amazon Aurora MySQL-Compatible Edition, and what we're doing now is adding Postgres compatibility to this engine. So now you can choose: run Postgres on-prem, use our EC2 service and install Postgres yourself, use RDS and run exactly the normal Postgres code you would expect, or, here, use a different platform with Postgres compatibility that gives you superior performance and reliability.

So what does this Postgres compatibility mean? Well, we're starting with the 9.6 code base, and then we're adding Amazon Aurora cloud-optimized storage. So it's Postgres working with a fundamentally different storage engine. Now, the performance we're seeing is up to two times better on the same hardware, which is quite compelling. Failover time of less than 30 seconds, and we hope to get that down even lower. And then durability: six copies across three availability zones, so that's six copies of your data in case something goes wrong. We also have read replicas operating off the same storage tier, so very, very low latency in terms of propagation of data to those read replicas, often single-digit milliseconds.

Then cloud-native security, because security for Amazon is obviously the first job; that's the thing we concentrate on, otherwise we're going to have issues with customers not wanting to come back to us if they have security problems. We also want to make sure it's easy to manage: the RDS platform capabilities you use today are built into Aurora as well. So it's easy to launch, easy to modify, backups are taken care of, and patching and those types of things are all done for you. Easy to load and unload, too. Remember, this is Postgres compatible: if you're not happy with Aurora, you simply unload your data and put it back into Postgres, whether on RDS, on EC2, or even on-prem. We're not stopping you, once you move onto this platform, from taking your data out and putting it back wherever you like. You could even run multiple environments, some on Aurora, some on RDS or EC2, and exchange data across them; it's all up to you. Now, this is the most important one: fully compatible with Postgres, for now and the foreseeable future. We have taken the code base, as I said, and it's literally just the storage engine underneath that we concentrated the effort on.

So, durability and availability. This is the storage architecture: the storage sits across three availability zones here, and we're replicating the data across those three availability zones. We also have a primary database node at the top here, and a read replica or secondary node in another availability zone as well. So what happens if something goes wrong over here? We're going to be able to fail over to one of these read replicas, which, remember, is up to date within milliseconds. And the storage is also consistent.
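As a hedged sketch of what "easy to launch" and that failover story look like in practice, here is roughly how you would stand up a cluster, add a low-priority reader, and rehearse a failover with boto3; every identifier and the instance class are hypothetical, and the password is a placeholder.

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")

# The cluster owns the shared storage volume and the endpoints.
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-pg",          # hypothetical name
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    MasterUserPassword="change-me-please",         # placeholder only
)

# Writer instance; readers attach to the same storage, no copy of their own.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-pg-writer",
    DBClusterIdentifier="demo-aurora-pg",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r4.2xlarge",
)

# Analytics reader: PromotionTier 15 is the lowest failover priority,
# so its analytics-shaped buffer cache won't be promoted first.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-pg-analytics",
    DBClusterIdentifier="demo-aurora-pg",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r4.2xlarge",
    PromotionTier=15,
)

# Later, rehearse a failover to a specific replica.
rds.failover_db_cluster(
    DBClusterIdentifier="demo-aurora-pg",
    TargetDBInstanceIdentifier="demo-aurora-pg-analytics",
)
```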
So you're going to be able to get very quick recovery times here. The other thing to notice is that there's no idle standby here doing nothing. Traditionally in RDS there would be a standby just sitting there waiting for a failover. So this is a different approach as well, and we'll go into a bit more detail about that. And a key point I should mention: it's log-structured storage. We'll go into exactly what that means and how it works architecturally later on.

In terms of the storage engine overview, as I mentioned, it's six copies across three availability zones, backed up to S3. There's continuous monitoring of those nodes, making sure, for example, that there are no hotspots and that we've got a good, even workload across them. If something goes wrong, we're able to repair it; we're constantly monitoring that storage tier. It works in 10 gig segments, so you no longer have to worry about provisioning database storage. It can grow from your starting point up to 64 terabytes at the moment, and you don't have to think about it: if the database needs more space, we just add 10 gig segments and keep going. It also uses quorums for reads and writes, and we'll go into what that means with the failure modes: writes aren't acknowledged unless enough storage nodes are available to meet the quorum.

So here are the failure modes. If we get segment failures on the disks, we've got multiple copies of that segment across multiple availability zones. If we get a node failure on a particular storage node, we've got other copies, even within the same availability zone, but also across the others. And then there's an AZ failure, because of the network or whatever has happened in that availability zone. By the way, for everyone who's not familiar, you can think of an availability zone logically as a data center; sometimes they are groupings of data centers, but logically it's a data center. If we come across here, you can see it's four out of six for a write quorum, so we could actually lose two copies and still write to the database. It's very, very durable. And it's three out of six for reads, so you can still do reads when there's a massive problem with the storage service or a large outage, which of course we hope will never happen.

Read replicas you can think of as dual purpose. They are read replicas in the sense that you know and love them now, where you can read off them, but they are also failover targets, and you can actually prioritize which failover target you want. So if you've got a series of them, as we do here, you can say: one of those is my analytics workload, where the buffer cache is going to hold completely different data to what the application would typically use, and I don't want to fail over to that, so I put it at the lowest priority. You can determine what will happen in the event of a disaster.

Now, in terms of continuous backup, the storage engine itself has these different segments, and backups of those segments are constantly happening and being shipped out to S3. This happens in the background, with no performance impact on the database at all.
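Because those segment backups stream to S3 continuously, a restore is expressed as a point in time rather than as "the last nightly dump". A hedged boto3 sketch, with hypothetical identifiers and timestamp, of what that looks like:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="ap-southeast-1")

# Materialize a new cluster from the continuous backup, restored to a
# specific second (or pass UseLatestRestorableTime=True instead).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="demo-aurora-pg-restored",     # hypothetical
    SourceDBClusterIdentifier="demo-aurora-pg",
    RestoreToTime=datetime(2017, 3, 31, 2, 15, 0, tzinfo=timezone.utc),
)

# The restored cluster still needs an instance before you can connect.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-pg-restored-1",
    DBClusterIdentifier="demo-aurora-pg-restored",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r4.2xlarge",
)
```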
Now, because of the way we're storing these segments as log records, it also allows us to accelerate recovery from these backups, which we'll talk about right now. Here you can see that on a traditional database you're looking at checkpointed data plus your log, the WAL in Postgres's case, and to recover you have to read and replay that log in sequence. On Aurora, we know the point in time when the system had a failure, and we can recover to it in parallel, asynchronously, and very, very quickly. So this changes the whole recovery process for the databases we're running here, MySQL and Postgres.

Now, these are the failure timelines you would typically have. The top one here is RDS with standard Postgres. You get a database failure, and then there's a failure detection period, right? We don't want a false positive, so there's some period of time where we have to work out what's going on. Then you've got DNS propagation, making sure the DNS entry is updated to point to the new server that's now the master. Then database recovery happens, and only then is the application available. Now, sequential database recovery can be very quick if not much was happening on the database, but if you were running a massive batch job that wasn't committed and there's been no checkpoint, there's going to be a lot of data to recover. Over here with Aurora, you've still got failure detection and DNS propagation, but recovery is very, very quick, and if you've got an application that's aware of replicas, you'll be able to recover in ideally three to ten seconds; traditionally, you can say 30 seconds. So we've basically halved this, down to a maximum of 30 seconds, and that's what we're going with for GA of the Postgres edition. This is pretty compelling. When we get down to our target of ten seconds, we're getting into Oracle RAC territory, right? Then we'll be able to go and talk to customers and say: hey, this Postgres database is pretty cool, and you can also get the very short recovery times you're traditionally looking for.

So, performance. On the left-hand side here we've got a Postgres database with traditional storage running on AWS: EBS volumes with 45,000 IOPS and an m4.16xlarge database instance, and up the top a c4.8xlarge client. If you're wondering what that means, the m4.16xlarge is a beast: I think it's got at least 256 gigs of RAM, if not more, and plenty of cores, and the c4.8xlarge is one of the biggest boxes in its family. Then, across here, we're running exactly the same clients and database instance, but with the Aurora storage and platform.

Now, here are some of the results for the performance improvements. Keep in mind these benchmarks were done on the preview release, I think late last year, so hopefully we'll get even better as we move along in terms of performance and scale. The blue line is Postgres, and if you can't see the little numbers here, this is 512 connections, then 768, 1024, 1280, 1536, with throughput on the left-hand side. So you're looking at a very, very good improvement of at least two times over standard Postgres.
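That connection sweep is the kind of thing you can reproduce against the preview with stock pgbench; a rough sketch, assuming pgbench is installed, PGPASSWORD is exported, and the endpoint name is hypothetical (the scale factor and durations are illustrative, not the ones AWS used):

```python
import subprocess

# Hypothetical cluster endpoint.
HOST = "demo-aurora-pg.cluster-xxxx.ap-southeast-1.rds.amazonaws.com"

# Initialize a scale-2000 dataset (roughly 30 GB), then sweep the client
# counts from the slide and record throughput at each step.
subprocess.run(
    ["pgbench", "-i", "-s", "2000", "-h", HOST, "-U", "postgres", "postgres"],
    check=True,
)

for clients in (512, 768, 1024, 1280, 1536):
    result = subprocess.run(
        ["pgbench", "-c", str(clients), "-j", "64", "-T", "300",
         "-h", HOST, "-U", "postgres", "postgres"],
        check=True, capture_output=True, text=True,
    )
    # pgbench prints a "tps = ..." line in its end-of-run summary.
    for line in result.stdout.splitlines():
        if line.startswith("tps"):
            print(clients, "clients:", line)
```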
Now, if we come across here and look at this particular benchmark, where we're looking at write-only throughput, you can see that Postgres peaks at this point here, and Aurora peaks further along, at a higher number of connections. You can see Postgres starting to trail off as we push more connections through. So we're getting three times at the highest throughput, and two times over the Postgres peak at the same point. Sustained throughput is about 120,000 writes per second through that database.

And then this one is really impressive: the start-up time, where we're copying the data in and then doing the vacuum and index build on Postgres, versus the same vacuum and index build on Aurora. We're getting huge improvements on this massive IO operation; the Aurora storage engine is really accelerating this setup activity before we even kick off the benchmark.

This one shows you consistency as well. We're looking at response time under heavy load, and you can see that Aurora's response time is essentially flat, but with Postgres we've got this repeating pattern. Can anyone think of what that pattern is? Yeah, down the back. Exactly. What's going on is that Postgres has to do a massive amount of IO for a checkpoint, which stalls a lot of activity; then it starts to catch up again, and then it does another checkpoint. So you get this regular checkpointing pattern on Postgres. That's not happening with Aurora, essentially because we're doing storage differently: with traditional Postgres you have to take the blocks and write them out to disk, whereas here the storage engine just takes the database changes and builds the blocks at the storage tier itself. The database no longer has to worry about that; it can just keep going. Also, when you're looking at consistent throughput, you can see Aurora at the top there, with higher and much more consistent throughput versus the range we're getting on Postgres itself. This is also good information for the community on where to focus and improve things. (An audience member points out that a lot of work has been done in 9.5 and 9.6, and Simon is here just now, on controlling when checkpointed pages are flushed to disk, and that a lot of that has already landed in 9.6.) There's a slide on that too, okay, so we'll get to that. But even setting aside the raw performance difference, the consistency is very important for enterprise applications, right? So that's something we need to take into account.

Now, looking at what happens as the database grows: this one is a 10 gig test and this one is the 100 gig test, and you're seeing Aurora's scalability here; as the database gets larger, we go from a 1.5 times to a 3 times performance gain.

Now, in terms of recovery, this relates to the checkpoint sizes, which are configurable here. Aurora doesn't have checkpoints at all, so that makes things much, much quicker on the Aurora side. The first test here allows 2.1 gigs of data, the next 8.3, and then 12.5 gigs of data before a checkpoint kicks in. You can see that the writes dramatically increase when the checkpoints run, as you would expect, and the recovery time also gets a lot longer, because there's more data to replay since the last checkpoint.
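On stock Postgres you can watch this behavior yourself: the latency spikes line up with checkpoint activity recorded in pg_stat_bgwriter, and the 9.6 flush-control work mentioned above is about spreading those writes out. A small sketch, with a hypothetical DSN:

```python
import psycopg2

# Snapshot checkpoint activity; sample this during a benchmark run and the
# jumps in buffers_checkpoint will line up with the latency spikes.
with psycopg2.connect("host=localhost dbname=pgbench") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute(
            "SELECT checkpoints_timed, checkpoints_req, "
            "       buffers_checkpoint, buffers_backend "
            "FROM pg_stat_bgwriter"
        )
        timed, requested, at_checkpoint, by_backends = cur.fetchone()
        print(f"checkpoints: {timed} timed, {requested} requested")
        print(f"buffers written at checkpoint: {at_checkpoint}")
        print(f"buffers written by backends:   {by_backends}")
```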
So really, in terms of recovery, you're going to be able to get up and running much quicker, and you're going to have more consistent throughput. A quick summary by the numbers: pgbench, about 2 times faster; sysbench, up to 3 times; data loading, up to 3 times; response time, about 2 times faster; throughput jitter, 3 times more consistent; throughput at scale, up to 3 times faster; and recovery speed, up to 85 times faster.

So, looking at the performance architecture: how do we do this, right? It's quite a major improvement to take an existing database and accelerate it by a factor of 2 to 3 times. Primarily, by doing less work. Anyone who likes to be the lazy mathematician, and I certainly was one at school, knows that if you do less work, you get through things quicker while producing the same results. Do fewer IOs, minimize network packets, and offload the database engine: those are the core principles of what we're doing. Also, do things asynchronously instead of relying on doing them synchronously, reduce the latency path, use lock-free data structures, and batch operations together. And IO is fundamentally what databases, especially relational databases, are all about: you want consistency on disk. So that's where we focused all of these efforts.

This slide describes what happens in a normal RDS database running Postgres with what we call multi-AZ: a primary in one data center, or AZ, and a standby or secondary in the other. What's happening here is that EBS, our block storage, replicates internally, and then we are also replicating from the primary to the secondary or standby, which performs another set of those operations. So there's a lot of IO going on when we commit data to disk in a standard RDS environment. When you change this to Aurora, it's very, very different. Here you've got the primary database node sending only the changes the database has actually made: not the physical blocks, just the small log records. It sends those to the storage engine, which applies them across all of those availability zones and builds the blocks as required for when the database needs to read them back. Those read replicas are operating off that same storage, so they don't need to receive the data and then do the writes again. That's a huge improvement over having standbys repeat all the writes. It also means your read replicas don't have to apply any writes from the WAL; they are essentially 100 percent there for reads, except for some minor data, which is basically cache consistency information about what's going on. So you're really getting massive improvements.

Now, how does this work at the lowest level? For people who are not so technical, this is about as technical as we're going to get today. On the left-hand side you've got the primary database at the top. In the first step, it sends a log record of changes, which goes into an update queue. As soon as that record is persisted in the update queue, the database gets an acknowledgement that the write is done. So we've got a very quick write and acknowledge back to the database.
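As a toy model of that commit path (purely illustrative, not how Aurora is actually implemented): acknowledge as soon as the log record is durable, and defer everything heavier to the background steps described next.

```python
class StorageNodeSketch:
    """Toy log-structured storage node: ack on durable append,
    materialize pages later, off the commit path."""

    def __init__(self):
        self.update_queue = []   # durable, ordered log records
        self.pages = {}          # page_id -> applied changes

    def write_log_record(self, lsn, page_id, change):
        # Steps one and two: persist the record, then acknowledge.
        self.update_queue.append((lsn, page_id, change))
        return "ACK"             # the database carries on immediately

    def coalesce(self):
        # Background work: fold log records into data blocks in LSN order.
        for lsn, page_id, change in sorted(self.update_queue):
            self.pages.setdefault(page_id, []).append(change)
        self.update_queue.clear()

    def read_page(self, page_id):
        self.coalesce()          # blocks are built when actually needed
        return self.pages.get(page_id, [])
```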
Then, in the third step, we start doing some sorting on the data, along with peer-to-peer communication across the storage nodes. We then go into coalescing the data blocks, building those blocks, doing some garbage collection and scrubbing, and also shipping the data to S3 for backups. So you can see that everything after step two happens in the background while the database merrily goes along. It's a huge performance increase in terms of what the database has to care about, and also because the storage engine only has to deal with logs instead of blocks.

Now, the IO traffic for read replicas. On traditional Postgres, this is what I was talking about: if you have, say, a 70% write workload on your database, the read replica still has to perform exactly those same writes alongside its reads, because the data has to get across there somehow. Whereas what we're sending across in Aurora is that green line up here, the page cache update. We're saying: okay, here's what changed in the cache, update your cache. That gives you a server that's now doing 100% reads, very consistent, with the low latency of that storage tier below it. So this changes the effectiveness of read replicas quite dramatically.

Now, this one is really cool. If something goes wrong with your database server and you have to shut down and restart the database, where's the data when it comes back up? It's not in memory, it's on disk, right? So as the application comes back and users start connecting, all the data has to be read off disk into the buffers again. What we've done with Aurora is take the cache and put it in a separate process. So if we need to restart the database for whatever reason, planned or unplanned maintenance, that cache is going to be warm when the database comes back; and if there's a failure of some kind and we need to move across to a read replica, the cache is going to be warm on the read replica too. So you're not going to get the brownouts that traditionally happen, where performance only gradually improves as data trickles back into the cache. This is a very, very good thing for the platform.

Now, there's something I've had feedback on from corporate customers who are looking at Postgres, and it was mentioned yesterday as well: the database itself is pretty good. If you look at some of the enhancements made over the years, there's now JSON support, pretty good support for spatial data, and parallel query making good progress. People look at this and say, hey, this is coming along pretty well. Where they're not really happy is the tooling, and there was a comment yesterday that GUIs are not really a core strength of Postgres at the moment. So what we're looking to do here, starting with Postgres but eventually across all of the engines on RDS, is enable more detailed monitoring and give you a nice GUI on top of it. Our first step was what we call enhanced monitoring.
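Enhanced monitoring, which the next section describes, is switched on per instance; a hedged boto3 sketch, where the instance name and the IAM role ARN (a role RDS can publish metrics with) are both hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")

# Turn on OS-level metrics at one-second granularity for an instance.
rds.modify_db_instance(
    DBInstanceIdentifier="demo-aurora-pg-writer",
    MonitoringInterval=1,   # seconds; 0 disables enhanced monitoring
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```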
So traditionally, if you're managing the database yourself, whether it's on EC2 or on-premise, you can go in and run iostat, vmstat, top and all that kind of stuff on Linux. Before, if you wanted access to that data on RDS, it was like a black box; you couldn't really see it. So we enabled metrics down to one-second granularity, with plenty of OS data, including process lists and those types of things, that you can now see on RDS. That was step one. Then we decided: okay, that covers the operating system level, now we need to focus on the database tuning side of things. RDS, of course, is all about managed databases; essentially, we want to be the best DBA you could ever have. Customers want easy tools to manage their databases, and they may not have deep knowledge of tuning and diagnostics, so if we can put that into the product and give them a single pane of glass, that's what we're aiming for.

So this is an example of the database load view from Performance Insights. You can see things here like concurrency, lock issues, IO, CPU, et cetera, and where time is being spent in that database. This is available now in the Aurora Postgres preview, so if you sign up, you will get access to it today, and it will be coming to the other engines. Here you can look at specific statements and see what they're doing inside the engine, see what contention is happening for the statements you're concerned about, and drill down into that as well. Beyond the database load view, we're going to be adding lock detection, execution plans, and API access. It's going to be included with RDS, as I said, with 35 days of data retention, and then by the end of this year, fingers crossed, we should have it across all the engines we support in RDS. So that's MariaDB, MySQL, Postgres, SQL Server, Oracle, and I'm probably forgetting one, but that's all I can think of right now.

So, Amazon Aurora, looking at the roadmap. You probably can't read that tiny writing at the back, but fundamentally these are the features we're looking to launch at GA later this year. The key things: on security, encryption at rest is supported, along with SSL connections, so we're securing data in transit, and VPC, the virtual private cloud inside Amazon, which means that when you create a database there it's not publicly accessible; you can only reach it through a VPN or a direct connection. On the performance side, we're aiming for two times performance and up to 64 terabytes of data. Over here, we want all the PostgreSQL features as per 9.6, all the RDS PostgreSQL extensions, and full DMS support. DMS is the Database Migration Service, which helps you get data into Aurora PostgreSQL-Compatible Edition as well as out: you can take data from AWS back to on-prem, or bring it from on-prem or other clouds into AWS. So lots of flexibility there. We also want up to 15 read replica targets, and instant crash recovery as part of that. So these are the highlighted features we aim to have at GA.
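API access to Performance Insights is on that roadmap list; in present-day boto3 it is exposed as the "pi" client, so here is a hedged sketch of pulling the same database-load-by-wait-event view programmatically. The resource identifier is hypothetical; it's the DbiResourceId shown in the RDS console, not the instance name.

```python
import boto3
from datetime import datetime, timedelta

pi = boto3.client("pi", region_name="ap-southeast-1")

resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",            # hypothetical DbiResourceId
    MetricQueries=[{
        "Metric": "db.load.avg",                 # average active sessions
        "GroupBy": {"Group": "db.wait_event"},   # split the load by wait event
    }],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    PeriodInSeconds=60,
)

for metric in resp["MetricList"]:
    print(metric["Key"], "->", len(metric.get("DataPoints", [])), "datapoints")
```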
So, customer quotes. Here is a company in Australia that I've been working with quite closely, Technology One. They're the largest ISV for enterprise software in Australia, with software across government, health, and all sorts of other sectors, some of them highly regulated, and they've been looking at getting away from commercial databases and moving to an open-source platform. You can see the quote from Ian here and read through all of it, but the key point is the part I bolded: in their tests, they were able to use it with zero changes to their software or schema, and it's so fast they'll be able to operate at a level at least ten times faster than SQL Server Enterprise Edition. And the next bit is the key one: without 50 pages of license agreements. Now, this is probably a fairly unique use case where they're able to get very good performance over and above SQL Server Enterprise Edition, but it shows you what's possible, and these guys are very committed to moving to Postgres; they're looking at switching their development environments from SQL Server to Postgres very, very soon.

Now, a case study. Sorry, I've put this slide in the wrong place: the Aurora database family slide covers all the core features of the Aurora platform. Secure; high performance and scale; compatible with Postgres or MySQL; convenient, because of the managed services we offer around it, the core RDS features plus the ability to move data in and out with CDC support through the Database Migration Service; and read replicas and automatic failover for availability and durability, as I mentioned. If you want one takeaway from what we covered, this is the slide that covers all of Aurora's core features.

So, this is where the case study was supposed to start. UberFusion is a partner of ours, and they did a migration of a single sign-on system from SQL Server to Postgres for a media company. I can't tell you who the company is because I didn't get the PR clearance, unfortunately, but you can possibly guess: it's an ASEAN media company with 4.7 million residential customers and roughly 66% penetration of TV households, and they've got quite a few channels on the platform as well. The number of applications integrated with the system was around 37 when they started the migration, so you can imagine the dependencies on it were very high; it's business critical. The server they had before the migration ran SQL Server 2008 Enterprise Edition. It wasn't a big system: about 102 tables, 55 gigs of data, and a server with 128 gigs of RAM. But the last point here is, I think, the key one: 24 cores. You can imagine what licensing SQL Server Enterprise Edition on 24 cores would cost. They had approximately 2 million registered single sign-on users at the point of the migration, and the ideal state was to be in AWS, operating there to handle the scale they were looking for, specifically around some of the events that were coming up. What you can see across here is limited scalability: the on-premise machine was hitting its hardware limits, the hardware was expensive, and of course the software was too. And the on-premise database infrastructure was shared across a number of different applications, so when there were peaks and troughs it was a bit difficult to manage.
So they had a new mandate for the single sign-on system, and they had urgency because of Euro 2016 and the Olympics, which they were delivering to their TV subscribers; subscribers sign in through the single sign-on service to get the entitlements they're allowed, because of course you sign up for some channels and not others, and some events and not others. For the migration to the cloud, the decision was made to go with Postgres running on RDS, the Relational Database Service on Amazon, and this worked out to be around 11.5 times cheaper than using SQL Server for the platform.

The challenges, and you can see very small writing over here: they really had to worry about zero data loss, it had to be transparent to users, and they had a very short migration window. There was also lots of legacy data that had to be examined and cleansed. These are some of the issues they faced in this particular migration. The solution: with only about one hour of downtime that they could afford, they re-architected parts of the application to include things like Redis, and they wanted to reduce the data size to about 1.5 gigabytes. That gives you an idea of how much data was sitting in the system that could be gotten rid of because of all the legacy processes. So there's a hint for the application writers in the room: make sure you know what you're doing with your data, and how to get rid of legacy data, so your databases can stay small. They used tools to get the data out, and they leveraged things like Kibana with Elasticsearch so they could move some of the logging out of the system into more modern tools. They kept their .NET application and used Redis alongside Postgres to accelerate the performance outcome.

So, the steps. On SQL Server, first of all, they disabled the single sign-on service. They ran a script to export the data to flat files according to the new schema, because they weren't just migrating the existing application, they were also making schema changes, so they had to test very thoroughly to make sure there was no data loss as the data formats changed. Then they removed the headers from the exported files, loaded them into Postgres, tested and verified the integrity of the data, and went live on the Postgres system. Obviously they rehearsed this many times to make sure that when they did the live migration, it would all be good.
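The load step they describe, header-stripped flat files into Postgres, maps naturally onto COPY. A minimal sketch with psycopg2, where the DSN, table, columns, and file name are all hypothetical:

```python
import psycopg2

# Bulk-load one exported, header-stripped CSV with COPY, then spot-check
# the row count as a first integrity test.
conn = psycopg2.connect("host=sso-db.example.com dbname=sso")  # hypothetical DSN
try:
    with conn.cursor() as cur:
        with open("users.csv") as f:
            cur.copy_expert(
                "COPY users (id, email, created_at) FROM STDIN WITH (FORMAT csv)",
                f,
            )
        cur.execute("SELECT count(*) FROM users")
        print("rows loaded:", cur.fetchone()[0])
    conn.commit()
finally:
    conn.close()
```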
So, the benefits, and this is really quite compelling; I have to call these out because you can't read them. Page load times improved 50%. They had 500,000 sessions in June, and transactions per second increased four times over what they were doing before. Now, that was probably largely attributable to using caching in front of the database rather than relying on the database itself to carry the load, and it was a huge, important win for them. And on top of that, the biggest one is obviously the massive cost reductions they were able to get. Then, on the infrastructure side, now that they're on AWS they can quickly provision environments for development, copy their production environments, do testing and so forth, going from more than five days on-premise to less than one day on AWS. So that's a business agility benefit they got from moving to AWS as well.

So, question time. Just before we get to that: the timeline. The preview is active now, and you can sign up for it at that particular link. It now has read replicas and Performance Insights available, and that's just within the last few days, so it shows you we're making progress towards general availability. If you want to get on there and start testing this, you can. And there's an FAQ covering the questions you'd likely want answered, available on our webpage. First question is going to be from Michael.