For those of you who don't know me and haven't suffered through me in the past six years, my name is Dave Stokes. I'm a community manager for MySQL products, which means Oracle pays me to go around the world going to computer shows. We do have an opening in the community management department, so talk to me over at 527 or 327 if you're interested. This is my third time giving this talk. The first time I did it, I did it in 25 minutes, which is pretty great for 57 slides. I did the same talk for the San Diego PHP group last Tuesday night and went through it much slower, at about two hours. So hopefully I'll be closer to the first one than the second one. The slides are up at slideshare.net slash David M. Stokes, or see me at the booth or send me an email. My Twitter handle is stoker. By the way, if you go through past tweets for some folks, two years ago there was a Nicole Kidman movie called Stoker. So if you see stuff mentioning red hair and nudity, it's not me this time. So, MySQL 5.7 — let me get rid of that resume slideshow thing up there. MySQL 5.7 has a lot of really neat features. If you made Colin Charles's talk earlier, I told you about thecompletelistoffeatures.com, which covers all 150-some-odd changes we made in 5.7. 5.7 came out in October. One of the parts customers are most interested in is the JSON data type. We have better security, better replication, and group replication, where you update one master and it writes to the other masters. It happens at the server level, not at a floating layer above it. There are a whole bunch of other really neat features, but so far the biggest interest I've seen is in the JSON data type. Now, for those of you who haven't been paying attention, JSON is JavaScript Object Notation. They picked the worst word possible by using "object" — that means different things to different programmers in different languages.
If you saw the JSON talk yesterday by Christophe Pettus on how Postgres is doing their JSON, you'll see we do things just a little bit differently. I'm sure as this becomes more and more of a feature in all the relational databases — because all the relational databases are adding this — things will shake out. Now, if you're a really old-time Angeleno, you might remember this. It's an interesting movie that comes on Turner Classic Movies about twice a year, one of the last of the old Ray Harryhausen movies where they did stop-motion photography, and they actually had skeletons fighting with swords against an actor. You might notice the skeletons have no muscle, so it's really an interesting movie if you can suspend disbelief. So what does JSON look like? Well, it has a key and a value. The value can be a number, it can be a string, it can be a real, it can be an array of strings — it can be just about anything. The JSON spec is rather loosey-goosey. I think it's five or six pages, and there are a lot of things they don't specify, which is great for programmers — whatever they don't specify, you're going to do your own damn way anyway. So how does all this work? Well, before we go into that, let me just say there's more than one way to skin a cat. If you've been programming long enough, you know that. The corollary is: how many skinless cats do you want running around? In the old days, you could actually store JSON data in a MySQL database, going back to the very beginning. The trouble was you had to store it as character data. In this case, we have a table that has an ID number and some data, and into that data column we shove some JSON. It works; people have been doing it for a long time. Well, what's wrong with that? It's not sexy. It takes a bit to dig into that JSON data to get the stuff that you want, so you end up doing nasty things with regular expressions. How many of you really love regular expressions? Okay.
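As a rough sketch of that old approach — the table and column names here are my own invention, not from the slides — storing JSON in a plain character column and digging into it with string matching looks something like this:

```sql
-- The pre-5.7 way: JSON stuffed into an ordinary character column.
CREATE TABLE old_style (
  id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  doc TEXT                 -- the server sees opaque text, not JSON
);

INSERT INTO old_style (doc)
VALUES ('{"name": "Dave", "city": "Justin"}');

-- Digging a value out means string matching: a full table scan,
-- and a LIKE/REGEXP pattern that breaks if the formatting changes.
SELECT id, doc
FROM   old_style
WHERE  doc LIKE '%"city": "Justin"%';
```

This is exactly the fragility he's describing: the query depends on the whitespace and quoting inside the stored string, and nothing stops you from inserting malformed JSON.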
Those who didn't raise your hands, stay away from these folks. The other problem you have is you're breaking the first rule of data normalization, which is what relational databases are built on: you cut up the data into the smallest usable bits, like zip codes, states, street addresses, first name, last name. With JSON, one of the problems is you're shoving everything into one bucket, and going through that bucket gets kind of nasty. Also, it's slow. If you're going to go through all that, you're going to do a full table scan. For those who aren't DBAs, that means you have to read everything in the book, from first page to last page, to make sure you got it all. Slow, inefficient, nasty. And once again, regexes are just kind of nasty for most developers. With 5.7, we now have a JSON data type, like we have an integer or a real or a double or all sorts of character data types. When you deal with JSON documents, the default for everything is utf8mb4. Once again: the default for everything is utf8mb4. It's part of the spec. For those of you who are lucky enough to deal only with good old USA Latin-1 data sets — no matter what you do, if you're doing it in JSON, it's going to be stored in utf8mb4. When you go to put a JSON document into MySQL, it's going to make sure that it's a valid JSON document. If it's not valid, it's going to get kicked out. We'll come back to that a little bit later. When the MySQL server gets your JSON document, it's going to put it in a binary format and sort some stuff out for easy searching. So if you're expecting something like the JSON data type in Postgres, where they take everything as a straight dump — whatever you put in is exactly what you get out — that's not exactly what's going to happen here. Column limits: the size of your JSON document is going to be about one gigabyte. So if you're doing documents bigger than that, you're going to have to figure out a way to parse them up, cut them down, or do something with them.
By the way, on the server this is governed by the max_allowed_packet system variable. You can't change it for just your session; it has to be set on the server. So how does this work, Dave? Well, let's create a table called t1 with a column called jdoc, and it's going to be of type JSON. Into that table, we insert the values key1/value1 and key2/value2. And what does it look like when you call it back out? You get key1, value1, key2, value2. Now, some of you are going to say, well, how efficient is this storage-wise? Well, in memory, it's going to take about 4.5% more space to hold all the overhead for the easy indexing of the JSON document. How many here really have to worry about disk space 100% of the time, like we used to 30 years ago? Nobody, so. You're the one guy, okay. Facebook, yeah. Now, something else that happens when you put a JSON document into MySQL: if you're reusing keys — remember, it's a key and a value — if you repeat a key, you're going to lose the second one through the nth one. It's going to keep the first one. So here, we're doing a JSON_OBJECT of key1/value1, key2/'ABC', key1/'DEF' — two key1s. The server's going to give you key1, value1, and key2, ABC. The second key1, DEF, goes off into the bit bucket. This is part of the behind-the-scenes optimization. It also will internally sort those keys for the purpose of making lookups easier. So let's say you want to look through a JSON string and get your value. Here's a case where we have a JSON string where id is 14 and the name is Ozlon. We're extracting whatever is at the path '$.name' — that's what the dollar-dot stands for — and it pulls up Ozlon. There are a couple of other notations coming up that I'll explain. The trouble is, basically, when you're throwing a document into a column, we need special notation and special functions to dig down into that document and pull out the values. And as you'll see, it can get rather confusing, but it works fairly easily.
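Spelled out, the examples he's walking through look roughly like this in 5.7 syntax (the duplicate-key behavior is the 5.7 first-value-wins rule):

```sql
-- A table with a real JSON column (MySQL 5.7+).
CREATE TABLE t1 (jdoc JSON);

-- Valid JSON goes in; invalid JSON is rejected with an error.
INSERT INTO t1 VALUES ('{"key1": "value1", "key2": "value2"}');

SELECT jdoc FROM t1;
-- {"key1": "value1", "key2": "value2"}

-- Duplicate keys: in 5.7 the first value for a repeated key wins.
SELECT JSON_OBJECT('key1', 'value1', 'key2', 'ABC', 'key1', 'DEF');
-- {"key1": "value1", "key2": "ABC"}

-- Pulling a value out with a path: $ is the document, .name the key.
SELECT JSON_EXTRACT('{"id": 14, "name": "Ozlon"}', '$.name');
-- "Ozlon"
```

Note that the extracted string comes back with its JSON double quotes still on it; unwrapping them is what JSON_UNQUOTE (covered later) is for.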
So the path syntax uses a leading dollar sign to represent the document that you're currently working on. Think of it, in PHP terms, as your $this-> — whatever object you're looking at. By the way, these slides — I know they're a wall of text, so please download them and go through them. I'm not going to read everything off of them to you. I can do that at the booth if you want; I know we're going to be limited for time. So here's an example of an array. $ refers to the entire array. $[0] evaluates to the first element — the array starts counting at zero. $[1] is the next one, $[2] the one after that, and there's no fourth element, so $[3] gives you back a null if you try to pull it out. And where $[1] and $[2] are non-scalar values, you can actually go through and dig down further: $[1].a evaluates to one thing, $[1].a[1] to another, and so forth. Have I lost anyone yet? Also, if you're coding this stuff and you have a space in your key, double-check your work and put double quotes around it. Unquoted, it's not legal; it's going to get kicked out. In this example, we have "a fish" with a value of shark, and "a bird" with a value of sparrow. The path '$."a fish"' will evaluate to shark. You also have wildcards that you can use. So if you know there's something out there like an address, and it's going to be made up of various parts of an address string, you can just ask for the address with a wildcard and it'll pull back the various parts, if you have them separated out in your JSON document. In recent versions of 5.7, you're going to see the column->path operator, which is a synonym for JSON_EXTRACT — I'll go into detail in a minute. The idea is that putting JSON_EXTRACT in a lot of places was real, real, real cumbersome. Also, if you go back through some of the earlier blog posts on this, we were calling our functions JSN_whatever-the-function-was.
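Here's the array example made concrete — the document itself is my own, chosen so every path he mentions has something to hit:

```sql
-- Path syntax against an array document.
SET @doc = '[3, {"a": [5, 6], "b": 10}, [99, 100]]';

SELECT JSON_EXTRACT(@doc, '$[0]');   -- 3
SELECT JSON_EXTRACT(@doc, '$[1]');   -- {"a": [5, 6], "b": 10}
SELECT JSON_EXTRACT(@doc, '$[3]');   -- NULL (no fourth element)

-- Non-scalar values can be dug into further.
SELECT JSON_EXTRACT(@doc, '$[1].a');     -- [5, 6]
SELECT JSON_EXTRACT(@doc, '$[1].a[1]');  -- 6

-- Keys with spaces must be double-quoted in the path.
SELECT JSON_EXTRACT('{"a fish": "shark", "a bird": "sparrow"}',
                    '$."a fish"');       -- "shark"
```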
And that got kind of confusing for some folks, so we renamed them to JSON_. So if you go to Planet MySQL, look for old JSON information, and you see JSN_, please expand it mentally to JSON_. Right now, you can use =, <, <=, >, >=, <>, != and a couple of other operators. But right now, you can't use BETWEEN, IN(), GREATEST() or LEAST() with JSON comparators. If you're doing ORDER BY and GROUP BY, be careful: SQL will order NULL before everything else. So be careful there. Also, sometimes you're going to want to cast values. Here's an example where we're pulling out the id field from a JSON document, and we want to return it as an unsigned integer. Just put the cast in your SQL statement and you'll get what you want. It depends on your programming language and how you're handling it, but if you want to make sure — and if you're an intentional programmer, which all programmers should be — cast it the way you want it. Functions. Since you're breaking the first rule of data normalization and you basically have a file within a column, you need certain ways to dig down to that information. So we have a whole slew of functions for you to use. Some of them are for creating JSON documents, some are for searching, a whole bunch are for modifying, and a whole bunch get JSON metadata, like how deep the document is, how many items are in there, and that sort of stuff. So: a quick way to create an array. SELECT JSON_ARRAY, pass it your values, and it will create an array for you. JSON_OBJECT: here you can pass a possibly empty list of key/value pairs, and it will give you back things in proper JSON format. Once again, if you put invalid JSON into a JSON column, the server will kick it out. This is a good way to make sure, before you send it to the server, that it's quoted properly.
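The cast and the creation helpers he just described look like this (the document is my own example):

```sql
-- Casting a JSON value back to a plain SQL type.
SET @j = '{"id": 14, "name": "Ozlon"}';

SELECT CAST(JSON_EXTRACT(@j, '$.id') AS UNSIGNED);   -- 14

-- Quick creation helpers: the server does the quoting for you.
SELECT JSON_ARRAY(1, 'abc', NULL, TRUE);
-- [1, "abc", null, true]

SELECT JSON_OBJECT('id', 87, 'name', 'carrot');
-- {"id": 87, "name": "carrot"}
```

Building documents server-side with JSON_ARRAY and JSON_OBJECT sidesteps most of the hand-quoting mistakes that get inserts rejected.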
I'll talk a little bit later about why — if you're a PHP programmer, and the same applies to Python and a couple of other languages — you want to do this rather than use the native functions out there. Here's one where you're quoting null. How many folks here use nulls all the time in their data? Okay, so you folks are probably the old-time DBAs. We're not teaching that in schools anymore, unfortunately. Here's a function, JSON_CONTAINS. This returns a one or a zero to indicate whether a specific value is contained in the target document. So say you're searching someone's records, which are a JSON document, and you're looking for their zip code: if that zip code is in there at the target path you give it, you'll get it back. Here's a similar one that indicates whether the JSON document has anything at a given path or paths. Extract: this is how you're going to pull a value out. We'll go over this a little more under generated columns — I have a great example there — so if this is kind of nebulous, don't worry, it'll soon be clear. Once again, remember the shortcut where we use the arrow operator. Here we're using the id value within a column called c, where the value is equal to four, to do an update. So if you're used to writing SQL, you have to get used to the arrow operator and do a little more coding, but it's nothing outrageous that massively changes your SQL code. JSON_KEYS will return the keys from the top level of a JSON object. If you're going down deeper, you have to do some tunneling, but if you want the top-level keys, here you go. JSON_SEARCH returns the path to a given string in a JSON document, and returns null if any of the json_doc, search-string or path arguments are null, or if there's no such path. You can also specify whether you get the first match that comes up or all of them. So if your document has things that repeat, you can find them all.
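Those search functions, sketched out with a throwaway document of my own:

```sql
-- Searching inside JSON documents.
SET @j = '{"a": 1, "b": 2, "c": {"d": 4}}';

SELECT JSON_CONTAINS(@j, '1', '$.a');          -- 1 (value found at path)
SELECT JSON_CONTAINS_PATH(@j, 'one', '$.c.d'); -- 1 (path exists)
SELECT JSON_KEYS(@j);                          -- ["a", "b", "c"]

-- JSON_SEARCH returns the path to a string value;
-- 'one' stops at the first hit, 'all' returns every match.
SELECT JSON_SEARCH('["abc", {"x": "abc"}]', 'all', 'abc');
-- ["$[0]", "$[1].x"]

-- The -> operator is shorthand for JSON_EXTRACT on a column, so an
-- update keyed on a value inside a JSON column c reads like:
--   UPDATE t SET ... WHERE c->"$.id" = 4;
```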
JSON_ARRAY_APPEND does what it sounds like: it appends stuff to a JSON document. Okay, for modifying, you have insert. Here's some more information on JSON_INSERT: if a member is not present in the existing object, it will add it for you. Yes, ma'am. I don't play enough with SQLite — I don't know if they have a JSON data type yet. I'll run into the main guy for SQLite later this year and I'll ask him then, but I don't know. I'm guessing they probably will one of these days soon. Merge: if you have two documents you need to put together, JSON_MERGE will do that for you. Remove: if you have something you need to get rid of, this is the way to do it. Replace: find an existing value and change it in place. And also set. Now, somewhere you're going to ask: what's the difference between set, insert, and replace? Well, JSON_SET replaces existing values and adds non-existing values. JSON_INSERT inserts values without replacing existing values. And JSON_REPLACE replaces only existing values. You're probably going to use these functions most of all. What if you have something in JSON and you want to get it out of its JSON quoting? You can use JSON_UNQUOTE. You can also embed all sorts of things like form feeds, tab characters, backslashes, and multi-byte Unicode values. Something else you might be wondering about: the depth of your document — how many levels do you actually have in that document, how many keys do you have out there? Length: how many items you have out there. And remember, everything you're going to get back is going to be utf8mb4. So if you're not used to playing with Unicode character sets, please be advised they do take up extra space. Before you throw something into the database, you might want to run it through JSON_VALID to make sure that it is valid JSON.
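The set/insert/replace distinction is easiest to see side by side on one document (my own example):

```sql
-- JSON_SET vs JSON_INSERT vs JSON_REPLACE on the same document,
-- where $.a already exists and $.c does not.
SET @j = '{"a": 1, "b": 2}';

-- SET: replaces existing paths AND adds new ones.
SELECT JSON_SET(@j, '$.a', 10, '$.c', 3);
-- {"a": 10, "b": 2, "c": 3}

-- INSERT: adds new paths only; existing values are left alone.
SELECT JSON_INSERT(@j, '$.a', 10, '$.c', 3);
-- {"a": 1, "b": 2, "c": 3}

-- REPLACE: changes existing paths only; new paths are ignored.
SELECT JSON_REPLACE(@j, '$.a', 10, '$.c', 3);
-- {"a": 10, "b": 2}

-- Removing and appending:
SELECT JSON_REMOVE(@j, '$.b');               -- {"a": 1}
SELECT JSON_ARRAY_APPEND('[1, 2]', '$', 3);  -- [1, 2, 3]
```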
Once again, if it's not valid JSON, the server's going to kick it back, and if your program isn't designed to handle that properly, it's just going to frustrate you. Path syntax. Once again, the leading dollar sign is the synonym for the document you're currently working on, which, if you're a DBA, translates to the column. There are wildcards: .* represents the values of all members of an object, and [*] represents all the cells in an array. Simple, right? Got all that? You're going to pass the quiz when I give it to you in five minutes, right? This is going to take a little bit of working through, even for folks who are long-time DBAs. Once again, we're breaking the first law of data normalization, so you have to go through these various hurdles to get to the data. How many of you are PHP programmers? Okay. I predominantly speak to PHP audiences, so I apologize if you're not a PHP coder — and I highly encourage you to become one. PHP by default has four JSON functions: json_decode, json_encode, json_last_error_msg and json_last_error. You saw the list of functions we have just to get to the data; these four are what PHP supplies by default. A simple example, taken right from the php.net manual: if you run json_encode on an array, it will give you a JSON doc. The only trouble is that about 70% of the time it's okay for putting into MySQL; the other 30% of the time, no. So please go back and use the JSON_QUOTE function that I showed you earlier. Here's another example. For those of you who have never played with MySQL, this is a standard MySQL query. We're going out and pulling everything from a table called simple. It doesn't matter if it's JSON or not — you're going to get everything back from that table because you're using the star wildcard. So this part of the MySQL development process is not going to change for you. However, the insert will change. In this case, we have INSERT INTO the table the values, and we're going to force them into a JSON array of a quoted 'abc', null and true.
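Written out, the select and insert being described might look like this — the table name `simple` is from the slide, the rest is a sketch:

```sql
CREATE TABLE simple (doc JSON);

-- Build the array server-side so the quoting is guaranteed correct.
INSERT INTO simple VALUES (JSON_ARRAY('abc', NULL, TRUE));

-- The star wildcard works the same whether the column is JSON or not.
SELECT * FROM simple;
-- ["abc", null, true]

-- Hand-rolled strings work too, but must be valid JSON or the
-- server rejects the insert:
INSERT INTO simple VALUES ('["abc", null, true]');  -- OK
-- INSERT INTO simple VALUES ('[abc]');             -- error: invalid JSON
```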
Once again, you're going to make sure it's valid JSON or the server's going to kick it out. Yes, sir. 70% of the time, yeah. There are some things that just don't quote right at the wrong time. So I'm warning people: use the JSON_QUOTE function to make sure that MySQL is going to escape it the way you want. In engineering terms — in my opinion, it's an impedance mismatch, where the person who built part A is expecting part B to work another way, and it doesn't quite do it, and unfortunately that's the case at this point. So one of the problems we have with the JSON data type is that you cannot index that column with a normal SQL index. Since we're breaking the first rule of data normalization, you just can't index all the junk in there. So we have an option called a generated column. Generated columns come in two types: virtual, which means when you do the read, it goes out there and does the math — which I'll show you in a minute — to get the value you want; or stored, where it does the math and stores it into the table. It materializes that value into the data. So here's an example. We're creating a table called t1. It's going to have an integer field called f1 and an integer field called gc, and gc is going to be STORED, which means it's going to be stored physically in the table, and gc is defined as the value of whatever f1 is, plus one. An example of using this: if you know your sales tax is 6.25% (0.0625) and you're selling something, and you want to put the item price and the sales price with tax in the table, this will generate it for you. So here we have a table called jemp, for JSON employee. We have a column that's JSON type. We have another one that's an integer type that we're calling g, and it's generated and stored, which means it's going to be materialized. To get that value from the JSON document, we're going to pull out whatever the id value is and store it in g, and then we're going to index g. So if you need to go right to that id number, it's there. Yes, sir.
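Both generated-column examples he walks through, sketched in 5.7 syntax (the inserted row is my own):

```sql
-- A stored generated column: gc is materialized as f1 + 1.
CREATE TABLE t1 (
  f1 INT,
  gc INT AS (f1 + 1) STORED
);

-- Pulling a value out of a JSON document into an indexable column.
CREATE TABLE jemp (
  c JSON,
  g INT GENERATED ALWAYS AS (c->"$.id") STORED,
  INDEX i (g)
);

INSERT INTO jemp (c) VALUES ('{"id": 4, "name": "Betty"}');

-- This lookup can use the index on g instead of scanning the JSON.
SELECT c->"$.name" FROM jemp WHERE g = 4;
```

The generated column is what gives the optimizer something it can actually index, which is the whole point of the feature here.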
It should be able to — I'll have to play with it. Usually it's going to be a secondary key, and in this case — I'd have to look at it — but in this case it's going to do it as a secondary index, I believe. So when a secondary index is created on a virtual generated column, the values are materialized — actually written down — in the index. And you can use the arrow operator as a shortcut. So generated columns solve the problem of not being able to index into the JSON document by itself. Another neat feature with generated columns — and there's a great example of this in a blog post — is if you need to do case-insensitive searches of last names or whatever. What you can do is — well, indexes on a column are always created with the collation of that column. So if you just index the last name, you're going to get that column's collation. But if you want, you can create a new generated column with the same column's data, but stored in a case-insensitive collation. I'm saying this because I know that somewhere in the next six weeks, one of you will have to do this — there's a gentleman over here shaking his head — and hopefully this little kernel will pop up. This is a neat trick; I've used it twice since that blog post popped up. Syntax for generated columns: column name, data type, VIRTUAL or STORED. Remember, for indexing into JSON data, we want to have it as STORED. It's nullable, and you can actually set it as a primary key. So, to go back to that question: right now, subqueries, variables, stored functions and UDFs (user-defined functions) are not supported in generated column definitions. If you really, really, really need that, let me know — I have a couple of engineers who would like to talk to you and use you as a test case. Also, auto-increment cannot be used in a generated column definition. So all of this is wonderful, but you're either on the JSON bandwagon or you're not. Is this the best thing since sliced bread, or is it just another XML?
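The case-insensitive-collation trick he mentions can be sketched like this — the table, names, and collation choices here are my own assumptions about the blog post's approach, not its exact code:

```sql
-- An index always uses its column's collation. To search last names
-- case-insensitively without touching the original column, generate
-- a copy of it under a case-insensitive collation and index that.
CREATE TABLE people (
  last_name    VARCHAR(64) COLLATE utf8mb4_bin,
  last_name_ci VARCHAR(64) COLLATE utf8mb4_general_ci
               GENERATED ALWAYS AS (last_name) VIRTUAL,
  INDEX (last_name_ci)
);

INSERT INTO people (last_name) VALUES ('Stokes');

-- Matches 'STOKES', 'stokes', 'Stokes', ... via the index.
SELECT last_name FROM people WHERE last_name_ci = 'STOKES';
```

The original column keeps its strict, case-sensitive collation; only the generated copy (and its index) relaxes the comparison.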
So JSON is great, but once again, you're breaking the first rule of data normalization. This adds an extra layer of complexity, which means if you have a bunch of folks who don't deal with SQL and JSON documents in MySQL every day, they're going to look at this, their eyes are going to roll back in their heads, and they're going to go take a three-hour coffee break. So you are adding an extra layer of confusion. But what if you really need to store JSON-formatted documents? Because that's the way everything's going — all the APIs are going JSON — this is the way to store it. It's easy, it's fast, it goes with just about everything else you're used to doing with MySQL, and it's been available since October in MySQL 5.7. I had a gentleman at a talk in New York City — I had the before-lunch speaker slot — and he got very excited when we had the beta version of this code. He immediately went home, took all the multiple listing service real estate data for the state of New Jersey that his company had, moved it out of Mongo, put it in this, and put it in production. Yeah, that's one of those things where it's beta software — please go out and break it, but don't put it in production. Well, he did it, and it's been running solidly since last July. So you might find some holes, and if you do, please file a bug report, but it is fairly solid. I haven't done any direct one-to-one Mongo versus MySQL 5.7 comparisons. It depends what you're doing. One of the things Mongo does is that when you write a record, it adds some padding in there, so if you do a replace or an insert, they already have the space carved out. InnoDB doesn't do that. It wants to take that old record — part of the MVCC stuff — give you a new one, and keep the old one around in case you roll back. So in some cases Mongo is going to be a little bit faster, and in other cases we're going to be a little bit faster. By the way, I apologize — I'm probably going to bonk you getting this to him. Solomon, no questions?
I was hoping that he would have done a START TRANSACTION and a COMMIT at the end. I have no idea what he did. Chances are, by the way they're selling — that's what I'm saying. Yes, sir. Two questions. Okay. I believe it's a numeric value — I'd have to go back and read the man page explicitly. I believe if it's a numeric value, it's going to return you a numeric value; if it's a string value, it's going to return a string. But let me go back, come to the table, and we'll double-check. You should be able to do that with a simple cast — you know, cast it back as an integer and compare it to an integer. Or pull the items out of the array. Yes, sir. Right now it's about one gigabyte. Now, that's writing it out to disk and getting it solidified. While you're manipulating stuff in memory — if it's one gigabyte and you're pulling out five K here and adding another five K there — it's actually going to be bigger in memory than one gigabyte. But once it gets written out to storage, it has to be no bigger than a gigabyte. Yes, sir, it should. It's going to complain loudly about it. That's something I haven't done, and I need to add it to my tests. Yes, sir. Well, it has internal indexes for retrieving the various items in the document, but it's not an index you can touch from the SQL code unless you use a generated column. So, yeah — think of the indexes within the JSON document as being only for that document, for the server to use; you can't directly piggyback off them. They're for the server to use. Yes, sir. Yes. Part of the things with 5.7 was we increased it. I think — I'd have to go back and double-check; it's been a month since I looked at it — I think we're up to 32 characters, maybe 24, but I think it's 32. We had so many people asking us to increase it from the default 16 that we've done that. So, yes, ma'am, way back there. How many — it depends on how often, how quickly, you fill up that gigabyte of data.
So if you're going to have one key, one value, one key, one value, it's going to be up to you. So, yes, sir. Yes. You might want to play with extract to make sure that you're getting the exact values that you want out of the thing. Yes, sir. How many data types are in JSON? I'd have to go back and look at the spec, but they have all the normal things like numbers, strings, arrays, all that. It's fairly loosely typed. It's a simple spec, kind of like the CSS spec. So it's kind of loosey-goosey. Yes, sir. You have a JSON document that's changed — do you replace it with the functions that we provide, or do you do it yourself? I assume it's live. Yeah. I'm not really sure — that's probably going to depend more on how you're programmatically handling the data. I'm guessing if you want to do it in place with the functions I just listed, it's going to be fairly simple. If you want to pull out the entire thing, change it, then write it back, that might be a simpler process for you — but your mileage may vary. So, any — yes, sir. You can do multi-column documents, but it gets messy: all the quotes, commas, and brackets. JSON is designed to be easily human-readable, and of course once you give it to a human, the first thing they're going to do is try to obfuscate what they're doing. So yes, but be warned, it gets messy. Yes, sir. It shouldn't be, if you're using the generated columns — that's probably going to be the best way to do that, and I do that all the time. Yes, sir. Variables within JSON docs — well, unfortunately that's not part of the JSON standard. The JSON standard is kind of — I won't say a final copy — but it's all hard data; nothing changes on the fly. It's all determinate data. Oh, by the way, if you didn't get one of these, I have more swag at the booth, and the booth opens up at two o'clock. I have hats, I have some shirts, I have squeezable dolphins, I have boogie bots. If you don't have a boogie bot, you need to get a boogie bot. They're little wind-up dancing robots.
So any — here you go. Here you go, Solomon. Yes. For those who don't know, this is Solomon Chang. He's been around the MySQL universe forever and ever. He also likes plastic bags for some reason, but. If you have any other questions but you're too shy to talk in front of this group of nice, friendly people, see me later. See me at the booth — I'm here till Sunday midday at least. Yes, sir. Well, people just want to store stuff in JSON format. They don't want to break down the schema into its component parts. There are a lot of folks out there who don't want to analyze their data or do anything; they just want to throw it out there. Which has its benefits, in that you don't have to plan beforehand. The problem is the speed's going to suck, and it gets confusing. Yeah. If you're doing a composite index — say you have the zip code and something from a JSON document — you just create the index with the column name and then the generated column name, and away you go. Yeah, you can do that. So, any other questions about JSON or MySQL or...? Yes, sir. Not yet. I'm on the road for the next month and a — well, the next month — and that's on my to-do list. Unfortunately, I was a JavaScript programmer a long, long time ago, and I've fallen off the wagon, and I need to get back into Node.js to figure out how this impacts it. But I can tell you that with Python and PHP, this works beautifully. Yes, sir. I know the engineers are tweaking some stuff, but I really don't have enough information to give to you. If you have a feature you want, please propose it to us. We just came out with 5.7 in October, and we're on a roughly 24-month release cycle. So if there's something we don't have and it's not on the drawing board, we can put it on the drawing board for 5.8. You may not actually see it come out in a test version for six or seven months or a year, but if you're serious about it and can tell the engineers what you want, they will try to get it in there. Yes, sir.
I have stored it in tables with column names as those keys, and for every layer or level of arrays inside that JSON key, you're going to have a different table. I mean, I see it more as something where you'd be converting relational table data into JSON rather than having a need to do it the other way around. There's a cartoon strip called Pickles — I don't know if you've ever seen it. The grandfather is showing his grandson an old rotary phone, and the kid sees the phone and goes, "Wonderful-looking phone. How many pictures does it hold?" The thing is, you and I have gray hair. We're old dinosaurs. We want to break everything up and normalize it. A lot of folks don't want to do that, so they want to store stuff in JSON. A lot of folks are doing APIs, or they're doing audit trails — they're going to shove everything in JSON and be happy. Well, any time you break the rules of data normalization, things tend to bite you in the rear later. Like, you'll hear people say, "Oh yeah, I pulled this column from this table and put it over here, because this way I only have to do one select." And then they never go back to the parent table and update stuff. So that's why your customers don't have the current phone number — it's out here in the sales record, but it's not in the customer record. So when their credit card goes bad, you have no way to track them down. If you're Sabre and you're doing 30 million transactions a minute, you're not going to do it with JSON; you're going to do good old standard SQL. If you're a mom-and-pop real estate shop or a small store, this might work for some of the documents you need to store. It's just another option for you. So, yes, sir. Yeah, one option is going schemaless, where you're just throwing stuff you don't know about out there — but the dinosaurs like Solomon over here and myself are going to say, "But you should have put that in your data spec."
There is a standards committee, and things are kind of in flux, which means sometime in the near future they'll probably come out with a recommendation and an RFC and a voting round. So two or three years from now there might be something a little more solid, but right now you probably can't take our JSON and plug it into Postgres's or SQL Server's or anyone else's. I'm hoping that in a couple of years the entire NoSQL-versus-SQL stuff goes away, and it's going to be like the old ASCII/EBCDIC wars. Ask him about that, he'll tell you. Yes, sir. Ooh, I haven't played with that yet. I don't see why you couldn't do that with a generated column. I'll have to play with that — it goes on the to-do list, and I'll blog about it. Yes, sir. I haven't played with that. I know there's an upper limit for column storage, and I'd have to look that up and then backtrack from there, but yeah, there is going to be a wall that you hit, and of course you never hit a wall gracefully. Any other questions? Yes, sir. Well, the full-text search isn't designed to be something like a Solr or a Lucene or Elasticsearch; it has its limits. It's for when you're looking for certain key phrases within a document. It's not as robust as Elasticsearch for the type of stuff you're doing. Will the JSON support get there? Hopefully one of these days, but if you're doing a lot of corpus searches for certain keywords and stuff like that, this will work. What's really funny is we're doing a lot of work with MeCab processors for various non-Western languages, doing searching for patterns, and a lot of that, I think, is going to be retrofitted back in. I know the folks who are doing the Korean, Japanese and Chinese pattern searches are finding some really interesting stuff, and it's beginning to filter back into the full-text world.
So hopefully with 5.8 you're going to see that vastly improved, but I don't think it's ever going to be something where people say, "Oh, everyone threw out Elasticsearch and went to this." Yeah, well — Uncle Larry Ellison, my big boss. Hopefully we'll get there one of these days, but it's not anytime soon. By the way, for those of you wondering how we're doing at Oracle: we're making money. We're the number eight most popular product in all of Oracle. Our renewal rates for support are amazing numbers, much higher than the industry standard. We're hiring, so if you have a resume and you want me to critique it and tell you how to get through the Oracle system, let me know. We're working on plans for 5.8, and there are some things we're looking at for 5.8 — which, once again, 24-month release cycle — that may not make it into 5.8 and will creep into 5.9. So we have plans that are four, five, six years out there. If you have suggestions, concerns, comments or gripes, let us know. Always hit me up if you have any concerns, comments or gripes, or just odd questions. And once again, thank you all for coming out. Be sure to stop by the booth — I have lots of swag. And be sure to thank the volunteers who work here. So thank you all.