Good morning, everybody. Welcome to the second day of the DevOps track. This morning, our first speaker is Narayan Newton. He is the lead systems admin for Drupal.org. He works at Tag1. And everybody seems to think he runs Drupal.org. It's a little true. He does. I wanted to say this. Yes. So my name is Narayan Newton. As I said, I am the lead systems admin for Drupal.org. I started as mainly the DBA for Drupal.org. And I work as a performance engineer for Tag1 Consulting. Today, I'm going to give the talk titled "New Developments in MySQL." In reality, it's more of an overview of what has happened since the fragmentation of MySQL after the nesting-doll-style acquisition of MySQL. So after MySQL was acquired, first by Sun, then by Oracle, there were a lot of forks of MySQL: different versions, friendly forks, some not-so-friendly forks. And basically, it went from a case where you had people choosing mainly between 5.0 versus 5.1 to four different versions of the Oracle daemon, four different versions of various forks, different versions in the forks. And while this has been fantastic for competition, and fantastic for innovation in a way, certainly for new development, it's made it incredibly complex and confusing for users. So a big goal of this talk is just to make it a little bit less complex and a little bit more clear what the differences are between these versions, and to make it more clear that some of these forks actually have a focus, or seem to have a focus. So I really dislike presentations like this. That doesn't make them not useful, but they're very difficult to do without being incredibly boring, and they're very difficult to do without standing up here and basically reading slides. Everything boils down to differences in patches, differences in features, and unless you're really focusing on why these differences exist — the different focuses that each of these forks or versions have — you could get the same information from a spreadsheet. 
So I'm going to try very hard to focus on solutions to problems and why these various forks exist to solve problems that people have had with MySQL. I expect to do moderately well at this and to fail at times. There's a lot of information in the presentation, but if people are not clear as to why something is important, I'd love you to raise your hand. I don't mind being interrupted, and if something is really not clear to a majority of the room, I'd love to make it clear and skip something not important instead of having people confused. So to start out, I'm going to go through the versions and forks of MySQL that are focusing on vertical scaling, locking, reliability, performance fixes. I'm going to cover two basic versions, and you could classify them as generically better in one case and extremely focused on scalability in the other. The first one is MySQL 5.5. This can probably be thought of as the current flagship. It's an Oracle version. They worked on it after it was clear that there was a lot of work going on in performance that was not reflected in the 5.1 branch of MySQL, and they needed a release of MySQL that was very focused on getting that work in. Because of this, they merged the InnoDB plugin, which is the new version of InnoDB. They really, really improved the locking in a number of areas. They introduced a performance schema that, if you haven't looked at 5.5 and used it, you really should. It's very useful for going through and finding counters of index usage, table usage that you wouldn't normally see. Just countless little fixes for performance that honestly probably should have been done a while ago, but this was the time that they sat down and focused on it. It really paid dividends for them. If you're going around as a consultant or working with a bunch of hosting companies, you'll see MySQL 5.5 deployed pretty generically at this point. It's very widespread. It's probably the default that people go to. 
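To make the performance schema point concrete — in 5.5 the instrumentation is mostly wait-oriented, and it has to be enabled at server startup. A minimal sketch of the kind of query you'd run (table name from the 5.5 performance_schema; later versions add many more tables):

```sql
-- my.cnf must contain performance_schema=ON before the server starts (MySQL 5.5).
-- Show the top wait events server-wide: mutexes, file IO, and so on,
-- counters you would not normally see from SHOW STATUS.
SELECT event_name, count_star, sum_timer_wait
  FROM performance_schema.events_waits_summary_global_by_event_name
 ORDER BY sum_timer_wait DESC
 LIMIT 10;
```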
Honestly, after the acquisition, Oracle did a really good job on this one. The other one that can be classified in the general scalability/performance area is Percona Server. Percona has an interesting relationship with MySQL. They're a consulting company. They run the MySQL Performance Blog, which for a very long time has been a go-to for performance information for MySQL. They created XtraDB, which is a fork of the InnoDB plugin. Basically, it was taking the InnoDB plugin, which included a lot of scalability work, and taking it one step farther: breaking up the locks, adding lockless algorithms where they could, adding a lot more instrumentation and a lot more configuration to be able to actually take the InnoDB plugin and run it on, say, an array of solid state disks and have it actually use the IO bandwidth that you have available to you. Before which, InnoDB really couldn't do that. It would scale some of its algorithms, but a lot of them just didn't scale to that level of IO bandwidth. There are configuration options like that that they put forward, and eventually they went to an entire server. They basically took the idea that they used with the InnoDB plugin and advanced that into the server. The server has an immense amount of instrumentation. It's actually really impressive to look at the core version of Oracle MySQL and then look at Percona Server and see how many different things you can pull out of it to find pretty in-depth usage, pretty in-depth performance information that wouldn't otherwise be available to you. Also, it has a lot of configuration that is very focused on vertical scalability, and by that I mean MySQL has historically done pretty poorly at scaling up. It scales out okay. You can have read slaves pretty easily, but moving to 30-plus cores, 64 cores, hundreds of gigabytes of RAM, solid state disks, big IO backends — it originally just wasn't designed for this and it hasn't been very good at it. 
Percona Server is very focused on making it good at that, and they've done a pretty decent job. They are not working alone. They certainly pull in changes from Oracle, and together, by their powers combined, they've made something that scales pretty well vertically. But Percona has stuff that Oracle doesn't. For example, one of the interesting things that I like to point out is the InnoDB data dictionary, which 99% of people don't know about and never think of. The data dictionary is like the table cache for MySQL, if you've looked at that, except it's InnoDB specific. Every time you open a table in InnoDB, there's an entry in the data dictionary for that table, and that data dictionary is never flushed. If you have a huge number of tables and a growing number of tables — for example, if you're running a shared hosting company and you're going to keep putting databases onto the server and keep adding automated tables, or if you're running QA on a massive scale and you keep running tests that keep creating tables — eventually you're going to end up with a data dictionary that is many gigabytes of RAM, just because nothing ever flushes. Percona Server allows you to set a maximum for that data dictionary, and it'll start flushing after the max. It doesn't seem like a big deal, but this is one of those cases where they have aggressively tuned for the niche case of vertical scalability and found that this is a problem for people. It also has interesting features like the ability to dump and restore the buffer pool. If you have a server that has over 100 gigs of RAM, your buffer pool is massive. It's serving a huge amount of traffic, and it's one of the only servers serving a huge amount of traffic, because that tends to be the usage case for these types of servers. If you restart the server, it can be days before you can actually put that server back into rotation on the production website, because basically that server with an empty buffer pool is a down server. 
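As a concrete sketch of that data-dictionary cap — in Percona Server it's exposed as a server variable (the name below is from the Percona Server 5.1/5.5 era and may differ in your release; treat it as an illustration rather than gospel):

```sql
-- Percona Server only: bound the in-memory InnoDB data dictionary.
-- 0 (the default) means never flush, i.e. stock InnoDB behavior;
-- above the limit, least-recently-used entries get evicted.
SET GLOBAL innodb_dict_size_limit = 268435456;  -- 256 MB, in bytes
```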
So Percona Server has the ability to actually dump the buffer pool and restore it when you restart the daemon. So it'll come back up with the contents the buffer pool had when it shut down. Again, it doesn't sound like a big deal, but when you run something at the scale that they're targeting, it's basically life or death for the website. And this is their good example of why the confusion after fragmentation is bad for users, but why the fragmentation can be good, because this is a focus that could never happen for a generic release of MySQL. You're never going to have an entire team focused on just vertical scalability, but they do. The next focus is more generic. These can be thought of as kind of the next versions of the flagship. They bring new features. They're competing heavily, so they're going to look into redoing some subsystems that haven't been looked at in a very long time, some things that MySQL has been historically weak on pretty much forever. The first one to talk about is Oracle MySQL 5.6. This is the next version. It's in development right now. It will be released at some point. It's interesting because it's going to bring some changes to MySQL that are going to kind of redefine how MySQL is viewed. The optimizer for MySQL, the query optimizer, is terrible. It's been terrible for a very long time. And they're finally looking into it. They're adding, for example, features called batched key access and multi-range read. Multi-range read is actually a storage engine feature, but it is used by the optimizer. Basically, what these two things allow the engine to do is, instead of joining two tables on a row-by-row basis, join the tables on a block of rows, a block of keys. And what this allows the storage engine to do is actually optimize the disk access, because before, the optimizer was basically not helping the storage engine at all to get consistent disk reads, consistent blocks. Whereas in theory, that should be easy to do. These are indexes. They're in sequence. 
They're likely in sequence on the disk, unless you've done something interesting with your disk. And it just wasn't lending itself to that. It's possible that InnoDB, because it's actually pretty good at this, could reorder it. It's possible that the VFS layer could reorder it, but that's a possibility and not an assurance, certainly. Also, in the same vein, they've added index condition pushdown. So instead of MySQL saying, here's a WHERE condition, I'm going to enforce this WHERE condition on top of the storage engine, it allows it to actually pass the WHERE condition to the storage engine and say: you enforce this. This is the WHERE condition. You just deal with it. What that allows the storage engine to do is, again, optimize the disk access. These two things I'm actually going to come back to, because it doesn't seem clear right now what a big deal these things are. But I'm going to come back to it a little bit after the next section and show you what a big deal they are. Also, subqueries have finally gotten some optimization. So there are materialized subqueries now. There are semi-joins, which allow the optimizer to, in some cases, convert a subquery to a semi-join. These are optimizations that many other database systems have had, but MySQL has not, which historically is one of the reasons why subqueries are pretty thinly used in MySQL. There are also some interesting features, like a memcache interface. That's there. There are some great features, like multi-threaded slaves, which is actually a really big deal. If you've ever run a single read slave or multiple read slaves, the fact that replication is single-threaded can, in some cases, mean that a slave just never catches up, and it's pretty much useless to you. The fact that it's actually multi-threaded is a big deal for actually using replication in high-traffic positions. Replication checksums are also built in. 
I don't know how many people know of the importance of this, but there are tools out there to allow you to checksum your masters against your slaves, and honestly, that's something that more people should be doing, because replication, depending on the condition of your network, can drift. The fact that MySQL, by default, has never done any validation on replication or the contents of replication has been a problem. Global transaction IDs. There was a patch from Google three years ago for this, but it never really went anywhere. Basically, what this means is that every binary log and every transaction in a binary log has a unique ID across the cluster. What that allows you to do is have a daisy chain of servers — server A, B, C, where server A is replicating to server B, and server B is replicating to server C. And then when server B fails, you can actually hook server C to server A, because server C knows where it is in the binary log of server A, because there's a global transaction ID. It should be noted that you can do global transaction IDs with some third-party tools, but this is the first time it's really going to be built in. So, MariaDB is one of the not-so-friendly forks. This is a fork by Monty, who was the original author of MySQL, or one of them. And it brings a few things to the table, and it's interesting in comparison to 5.6, because there's a lot of concurrent effort that went on in this. At the top of my list is actually enhanced testing, because one of the reasons I actually like MariaDB quite a bit is they've done a lot of work on increasing the test framework, and that's actually paid dividends for some people. It's useful. Enhancing the test framework is not something that a lot of new projects would do as a first step, so I like that. They did a lot of the same optimizer enhancements as 5.6. This is not particularly a coincidence. They both started with a back port of work that was done in the now-abandoned MySQL 6.0 project. 
They also do batched key access. They also do multi-range reads. They added some interesting things. They added a new join type, hash joins, which are interesting, and they added early table elimination. So in queries where a table is not going to be used, and it's just a mistake to have that table there — for example, some view queries — it will just remove the table and not use it at all. They have a lot of smaller changes. Pluggable authentication, so you can actually have a third-party authentication tool do your authentication for MySQL. They do a segmented key cache, which is really useful for lock contention. Microsecond resolution — Percona Server also does microsecond resolution. Basically, this means that if you're in a situation where a query taking one second is a huge problem, you can actually have the slow log report at less than one second. So queries taking 500 milliseconds, queries taking 250 milliseconds — you can set your slow threshold to those values. And it has a lot more. It also has some rather unique features in virtual columns and dynamic columns, which are going to be fun to cover, and I will cover those in a second. But at this point, I want to go back around to the optimization part. For both 5.6 and MariaDB 5.5, the optimizer got a lot of work, and this is the result. This is a big deal. These are IO-bound queries, which is important to note. But you can see that MySQL 5.5, in comparison with all of these versions, takes a lot longer on these queries. Okay. So the large block is MySQL 5.5. That's without the optimizer optimizations. The two larger blocks there are MySQL 5.6 and MariaDB 5.5. They are roughly equal there. You can see that they're a large drop from 5.5, and then the smaller ones, the much smaller ones, are various tunings of 5.6 and MariaDB 5.5. For the purposes of just showing what these optimizer changes do, the impact they have, the important thing to note is that 5.6 and MariaDB 5.5 are fairly similar and much less than MySQL 5.5. 
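For reference, the optimizer features being benchmarked here are runtime-switchable, which is how comparisons like this get made. Roughly like the following (flag names from the MySQL 5.6 optimizer_switch variable; MariaDB's spellings differ slightly), plus the microsecond slow-log threshold just mentioned:

```sql
-- MySQL 5.6-style flags: turn the new join strategies on for this session.
-- mrr_cost_based=off forces multi-range read so the effect is visible.
SET optimizer_switch = 'mrr=on,mrr_cost_based=off,batched_key_access=on,index_condition_pushdown=on';

-- MariaDB / Percona Server: sub-second slow-log threshold.
-- Log anything slower than 250 milliseconds.
SET GLOBAL long_query_time = 0.25;
```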
Does that answer your question? One thing to note — and again, this is probably too small to see in the back of the room — these are query times with the cache warm. The thing to take away from this is that these optimization changes actually have a slight penalty when compared to 5.5. The smaller bar in this graph is 5.5. Now, the penalty is in fractions of a second or seconds, instead of hundreds of seconds. So the penalty is definitely worth it, and I'm sure this will get some optimization in the future. But it's interesting to note that in the perfect case, these new algorithms are actually a little bit worse, and in the not-perfect case, they're incredibly better, which is interesting. Moving on to MariaDB virtual columns, which I said I'd come back to — this is an interesting concept. Basically, virtual columns are the idea of defining a column in a table that is based on the other columns in the table. Basically, this is the last step in taking a relational database and making it Excel. You can define a column that just uses the base functions in MySQL and calculates a value based on them. For example, this example — which, again, I'm sure some people in the room cannot read — takes a VARCHAR column and takes the, I think it's 5, yes, the 5 leftmost characters in the VARCHAR column and puts them into the virtual column. This seems utterly useless; you could do this in a SELECT statement. But what's interesting about this is that you can make the virtual column persistent. What persistent means is that it's actually stored on disk, actually stored in the table. That allows you to index it. So you can actually use virtual columns to take a calculated value and index that calculated value. It's a really interesting concept. And it could be useful in a variety of ways. It's going to be used with the next thing I'm going to talk about, which is dynamic columns. 
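The slide's example is roughly the following MariaDB syntax (the table and column names here are made up for illustration; the PERSISTENT keyword is what makes the column storable and therefore indexable):

```sql
CREATE TABLE users (
  name   VARCHAR(64) NOT NULL,
  -- Computed from name; PERSISTENT stores it on disk, which allows the index.
  prefix VARCHAR(5) AS (LEFT(name, 5)) PERSISTENT,
  KEY (prefix)
) ENGINE=InnoDB;

-- The calculated value is now a normal indexed column:
SELECT name FROM users WHERE prefix = 'naray';
```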
But it can be used in just optimizing a query that is not working particularly well — doing some of that work ahead of time and then indexing that work. Dynamic columns are kind of similar to virtual columns, or at least they work together. Dynamic columns are basically some new functions in the core of MySQL. You could say it's MySQL's answer to schema-less databases. It's not really, or if it is, it's a bad answer. But it's interesting, nonetheless. What it allows you to do is take a blob in the database and treat it like an array. So you can take that blob and index into it by a number, just like an array in C. And any index into that blob can have a random type. So in the blob, you could have a string in position one, or you could have an integer, or you could have anything else. It doesn't matter. Basically, it treats it as a cast. It indexes into it and then casts whatever is there. So what this allows you to do is have a column that doesn't have any set type, that has columns inside of it, and can allow you to really create a schema that is absolutely insane. But, you know, that's sometimes useful, especially for content management systems, especially for things that are somewhat dynamic, where sometimes you have to do something that you would rather not. How this works with virtual columns is that you cannot index these, because they're blobs. I mean, there's nothing there from the index's perspective. So what you do is use these functions in a virtual column, and then index the virtual column, which is why these two came into MariaDB at the same time. So, moving on to something very different. Everything so far has been somewhat related to performance, somewhat related to scalability, some new features. But what's next is two products that actually focus completely on clustering — so replication, or clusters of replicated servers, or sharded replicated servers. The first one is Percona XtraDB Cluster. You can also call this Galera, I guess. 
Percona Cluster is based on Galera. They took the Galera open source project and modified it slightly, tuned it up, basically polished it, and created Percona XtraDB Cluster. You can also call it true master-master replication. This actually gives you what a lot of people have wanted for a long time, which is the ability to have, say, five servers set up with Percona Cluster, and have HAProxy in front, or IPVS, or any other load balancer, and just direct traffic between them. Any server can handle a write. It doesn't matter. There are no special servers, no real masters. How this works is basically there's a transactional heuristic that at commit time looks at the transaction that was just committed on another node and checks whether it's going to commit cleanly on this node. So when I connect to, say, server two, and I have five servers, and I start a transaction, I'll insert three things, update a row, then I'm done. It's going to transfer that transaction to every other node. And then all those other nodes are going to run this verify step and say, okay, does this transaction deadlock against anything else? If the answer is no, it sends a "this is okay" message back to the master that I was talking to, and returns to me. And we're good. The difference between this and standard synchronous replication is that you're not actually waiting for this transaction to commit. You're just waiting for the verify. In most cases, the verify is going to take far less time than actually committing the transaction to disk. So this removes the rather huge performance penalty of standard synchronous replication. It's a fairly big advancement, and it makes it actually useful to have somewhat-synchronous replication and somewhat of a true multi-master environment. There are problems, of course. It's incredibly dependent on network throughput, as you would imagine. 
Every transaction is being transferred to every other node, and you're not getting a return at the application until that's done, the verifies have run, and all the acks have come back. Also, it's dependent on the slowest node. So if you have a cluster of five and one of those nodes is a Pentium 4, the performance of your cluster just went to hell. Also, it does commit-time transaction checking, which is an interesting concept, because InnoDB doesn't, really. So as Drupal, when you're talking to InnoDB, if you run an insert or an update and you hit a deadlock, you expect that insert or update to fail. That doesn't happen with Percona Cluster. The deadlock fails at commit. So that's actually a pretty big problem, because Drupal doesn't really check the return of commits. So if you have a deadlock, the application layer isn't going to know. And actually a lot of applications do this. So at the moment, Percona Cluster requires some application changes to actually work, because you really need to check the return of the commit, because if there's a problem with another transaction, that's where it's going to come out. Also, it's not a write-scaling solution. Everyone wants master-master to scale out writes, and it's just not going to happen without sharding, for fairly obvious reasons. If every write that is received on any node has to be processed on every other node, that's not a way to scale out writes. In essence, it's taking your entire cluster and giving you the write throughput of about one node, maybe one node and a fourth. You would use it because of the lack of management overhead, basically. The fact that you can have a load balancer and just send everything to everything is actually really convenient. 
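To spell out the commit-time checking problem, since it bites applications silently — a sketch of where the error actually surfaces (table and column names are illustrative; the error number is the standard MySQL deadlock error):

```sql
START TRANSACTION;
UPDATE node SET title = 'hello' WHERE nid = 42;
-- On stock InnoDB, a conflicting lock would fail right here,
-- on the statement, which is what Drupal checks for.
COMMIT;
-- Under Galera / Percona Cluster, a certification failure against a
-- transaction committed on another node surfaces HERE instead, as
-- error 1213 (ER_LOCK_DEADLOCK) on the COMMIT itself. The application
-- has to check the commit's return and retry the whole transaction.
```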
Also because, for example, with Drupal, usually you would have to do a bunch of application changes to take queries that you know can go to a slave and send them to a slave, and then you have to worry about slave lag, and you have to worry about when someone posted a node or a comment and what they're going to see, and whether they should be sent back to the master. There's a bunch of logic in that that isn't in the default contrib modules. This allows you to bypass that. You're still completely screwed on the commit thing, but once that gets fixed, it'll be cool. No. The next thing to talk about is something that actually somewhat addresses — I'm sorry? Percona Cluster? Oh. I'm going to talk about how it's been useless and the depth to which it's useless. MySQL Cluster works very differently from everything else in that it has these, as you said, NDB data nodes, and the data nodes are basically in a cluster and are separated from the SQL nodes. The SQL nodes are the ones that actually get connections from the clients, actually process the SQL, actually run the query. The data nodes just hold the data — and not only hold the data, but have your data sharded across them. Historically, this has been, as he was saying, the most useless product in existence for 99% of people. It's actually usually called carrier-grade MySQL, because it is used almost exclusively by telecoms. It doesn't really help with scaling. It doesn't really help with performance. All it really does is allow you to build a cluster that, in theory, will never fail until the management tools fail. Up until then, you're cool. Really, it's been used by AT&T, for example — big telecoms. MySQL Cluster 7.2, and to some degree 7.1, have changed things a lot, which doesn't necessarily make it usable for the 90% case, but makes it worth talking about, because they're verging on usability for some people. Basically, the improvements were across the board in a number of areas. 
Some multi-threading improvements on the data nodes — for example, they are now multi-threaded. Some locking improvements on the data nodes — they now have locking. And some query planning improvements — there is now a query planner. Also, there's some cross-DC replication stuff, but that's really only useful for people who are currently using MySQL Cluster. The big-ticket changes are adaptive query localization and extended index information. Adaptive query localization basically means that instead of what happened before — which is, I send a query to an SQL node, the SQL node looks at my query and sends a bunch of requests to the data nodes on a per-row basis if there's a join — the join literally has to go across the network. It's basically taking what would happen running MySQL on a single machine and extending it to a bunch of machines, and you're just trading SATA for the worst backend in the world from an IO perspective. Instead of doing this, adaptive query localization can take the join and push it down to the data nodes. So if your data nodes are sharded in a way that fits your queries, you can actually have a join run semi-performantly, because you're pushing it down to a data node and having it actually run on the data node. In testing, AQL has actually improved some queries by a factor of 70. They're literally 70 times faster. Now, granted, they were running really slow before, so 70 times faster doesn't necessarily make them usable in many cases. But with 7.2, it's the first time that MySQL Cluster is at all in the conversation for some people. Extended index information is similar. Basically, it allows the data nodes to send index information to the SQL nodes and allows the optimizer on the SQL nodes to know that there are indexes. Before, literally, you would have to put an index hint for every single index you wanted to use. Not exactly usable. The last thing I want to cover is Drizzle. Drizzle is incredibly hard to talk about. It's really exciting. 
They're doing a lot of really cool things. They're turning MySQL basically into a microkernel, making everything pluggable. They're updating a lot of core routines. They're removing subsystems that very few people understand or that have just gotten incredibly crufty over the years. They've removed replication and rewritten it using Google protocol buffers. They've removed the query cache. They now have a query cache plugin that actually uses memcache as the query cache. They've done a lot of really cool things. And with 7.2, there's an actual stable release out and two, maybe three companies that will actually support it. Unfortunately, there's basically no documentation and no benchmarks. And no one really talking about it. And very few people talking about using it. So it's one of those things where, if there's a really cool product that's released in an empty woods, does it matter? It's an interesting spot. And it's actually a really interesting spot for anyone who wants to start contributing in the MySQL world, because if they had anyone writing documentation, it would help them out considerably. I mentioned some of the advancements. They rewrote replication using Google protocol buffers. It's multi-master, so you can have multiple masters for a single slave. It actually supports IPv6. It's UTF-8 throughout. It's entirely 64-bit. They're doing really, really cool things. And it's kind of sad that it doesn't really matter at the moment. It will matter soon, though. Very different from the subject of forked MySQL versions: at the same time as all these forks came out, there have been a lot of tools developed around them. Percona Toolkit — it used to be called Maatkit, but now it's rebranded because Percona is trying to take over the world — is one of the most comprehensive toolkits that you could have for MySQL. It includes the checksum tool I talked about earlier that allows you to checksum a master against its slaves. 
It allows you to produce slow log reports that are better than the dominant tool before this. It allows you to automatically kill problem queries. It allows you to automatically restart replication when there are replication problems, which is a little terrifying, but it allows you to do it. It allows you to sync tables between a master and a slave based on the results of the checksum you ran. And it allows you to do online schema changes. It has a tool that will actually install triggers on a table to basically copy the table, do schema changes on the copy, and have the triggers sending updates — a live schema conversion. A lot of these things are really terrifying, but they can be very useful if you know what you're doing. They can be very useful if you have a Percona support contract, and they're freely released tools. It's very cool that all of these are there and just available to everyone. Percona Playback — everything except the clustering ones. Percona Playback is actually really interesting. It allows you to take either a slow log, or anything in a slow log format, or a TCP dump of the MySQL protocol, and replay those transactions against a database server. Basically, it's a really interesting way of taking production traffic and running it against a staging site. By far the easiest, definitely. XtraBackup is another tool that was released by Percona. XtraBackup is basically innobackup or InnoDB Hot Backup for the InnoDB-based releases. It allows you to take a binary hot backup of InnoDB tables. It's what you used to be able to do with MyISAM, and lots of people can't do it anymore. It seems to be not as known as it should be. The openark kit is basically another version of Percona Toolkit, except it covers a different set of tools. Both of those — at least Percona Toolkit and the openark kit — should be looked into, because they're just utilities that every DBA should have. 
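As a taste of the checksum workflow — pt-table-checksum writes its results into a database table that replicates to the slaves, so finding drift is just a query (table and column names from the tool's defaults; check your version's documentation):

```sql
-- Run on each slave after pt-table-checksum has run on the master.
-- Any row returned is a chunk whose checksum or row count differs
-- from the master's -- i.e. replication drift.
SELECT db, tbl, chunk, this_cnt, master_cnt
  FROM percona.checksums
 WHERE master_crc <> this_crc
    OR master_cnt <> this_cnt;
```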
I successfully got through this in enough time to have time for questions, which I really wanted to do. I went through an incredible amount of stuff there. Do people have any questions about it? How to use it? What they're used for? Any unrelated questions? And you can come up and use the microphone. [Audience question, inaudible.] Commit-time checking would be good. It depends on what you want to use. A lot of these things — because they're forks, a lot of these things you have to be careful with. You can't use features that are only going to be supported on MariaDB; or if you're going to do that, you need to be aware that those features are going to be absolutely useless elsewhere, so you need to have someone that's committed to keeping those up to date, and you need to have a way to make sure they only trigger on MariaDB. Drupal is going to benefit from a lot of these. They should know about it. The optimizer is a big deal, especially for Views. It's a really big deal, actually. But no specific changes for it, except for commit-time checking. Any other questions? [Audience question, inaudible.] Very custom ones, basically. Drupal can be difficult to shard; any CMS can be difficult to shard. Yeah, that's basically the answer. The ones I've seen have been very custom, and mostly have been based on custom modules and the data for custom modules. So those modules have a spectrum of data that they control absolutely, and so sharding that data is easier than sharding just generic Drupal data. [Audience question about the MariaDB support, the dynamic columns, or the memcache protocol.] Do you mean dynamic columns, or do you mean that it can speak the memcache protocol? Right. Okay, so the memcache protocol support — basically, the whole idea behind that is based off of a project called HandlerSocket. And HandlerSocket was released, or started, last year. It basically was a research project where the idea was to see how fast InnoDB is. Because everyone at that point was assuming that memcache was faster than MySQL for just simple gets and puts. 
And they created HandlerSocket to basically test that idea: remove the SQL overhead, remove the query optimizer overhead, and just see how fast InnoDB could do the things that memcache does. And it turns out it can do them really, really fast — faster than memcache, actually. So that's where all of this came from: basically, people kind of wanting that support. Percona Server and MariaDB actually ship with HandlerSocket, which is also a NoSQL protocol, but a custom one. So you can't just hook up a memcache client to it. So MySQL 5.6 actually added the memcache protocol, so you could hook up a memcache client to it. And it's meant to use a single table, basically — using tables as the target for memcache queries, if that makes sense. So it's using MySQL as a memcache. It's not doing any conversion or any join conversion or anything like that. Exactly. You were just pointing out HandlerSocket. Not query conversion, I would imagine. Yeah. I would imagine that, when you start talking about taking a query and converting it into a HandlerSocket query, there's probably going to be more use in just making whatever that query is building cached and then having HandlerSocket back into the cache. Or at least I would say that's probably a better use of time, because — chx did an on-the-fly SQL conversion to MongoDB queries at one point. And it worked. We actually did it in Austin, and we got Drupal 7 to bootstrap entirely on MongoDB. But it's terrifying. And it has so many edge cases that you can't really call them edge cases anymore. So probably not going to happen. Yes. It would migrate with that. Absolutely. And I would imagine adding HandlerSocket to the cache back-end support wouldn't be difficult. And it could be done in a contrib module like memcache is currently. I don't know of a project to do it, though. So that would be a cool new project. Any other questions? Okay. 
So if anyone wants to fill out a session evaluation, you can go to the session page and click evaluate.