Good afternoon folks, thank you for coming to the session. I've got to warn you, if you ever speak at a conference and they put you in right after lunch, you tend to see a couple of folks nodding off, so I'm going to give this a bit of a stand-up treatment. This is an odd presentation. I go around to about 50 conferences a year talking to developers, and I've noticed over the last decade or so — hence a lot of the gray hair here — that a lot of developers don't have any training in relational theory, database concepts, Structured Query Language, or set theory. And the programmers all come up to us and say, "But my queries stink. Why is that? Why is the database doing this thing?" So this is a talk I've been developing over the last couple of years to teach developers what happens when they write queries. I have a four-hour longer version of this. There's not a lot of detail in here on how to do any one thing; it's an overview to give you insight into what's going on behind the scenes. By the way, if I talk too rapidly, or my accent throws you off, or you have a question, please shout it out. The only dumb question is the one you don't ask and take home to churn away inside you. Here's a little detail about the session that you probably saw on the sign-up sheet. I am David Stokes. I'm a MySQL community manager. A long time ago, I was the certification manager for MySQL AB, so if you know anyone who has a MySQL certification, they might have my signature on the bottom of the piece of paper they have. I'm a senior manager. I travel the world talking about MySQL and databases. I'm also your lightning rod back to Oracle management: if you ever have a question, concern, or gripe, give it to me and I'll get it to Oracle management. I'm fairly easy to find. My Twitter handle is @Stoker; I tweet about very odd subjects and databases, which might have a little bit of overlap if you drew the Venn diagram. These slides are up at slideshare.net/davidmstokes.
There's a whole bunch of other presentations up there too. So, for those of you who haven't been paying attention to MySQL, we're now 21 years old. We're no longer the bright, shiny star that we were when we were bought for $1 billion: Sun bought us for $1 billion in 2008, and Oracle bought Sun and all that other stuff a year later. No one's bought us for a while, so we're feeling unloved — if you have a few billion dollars, make an offer. Last week, we announced the next version of MySQL, MySQL 8. It has a lot of neat features. We'll have a true data dictionary; some database administrators are jumping up and down right now going "yeah!" about that. It's going to have roles, common table expressions, and a whole bunch of other neat features. Look for this in about a year. It's a big change for us, and it's going to make MySQL a lot faster and a lot bigger. You'll be able to have a million tables in a database, no problem, and you'll have transactional DDL. Also, we announced Group Replication last week. That's active master-master replication built on top of what you use now. It's not a layer above the database; it's actually at the database level, built on our replication technology. There are a few rough points we're still trying to iron out, but it's really exciting. You can have eight masters: write to one, and all the others get updated. If one of them goes down, everything fails over automatically without you having to do anything. A year ago, we introduced a JSON data type to MySQL. It lets you store an entire valid JSON document in a column of a table in a MySQL instance. A lot of stuff coming out of APIs and other areas these days wants to output JSON. So if you need to store JSON like you would a REAL or an INT or a TIMESTAMP, you now have the JSON data type. Also, we have a document store.
For those of you who'd use the JSON store with MySQL: what if you don't know SQL? What if you just want to do CRUD — create, read, update, delete — and you don't really want to do all that SQL stuff? We can now connect you that way through the new protocol we've added. Also, we have at-rest encryption. It works with key vaults and handles the keys. Encrypting data isn't the hard part; it's decrypting it and getting it to where you want it, when you want it — that's the hard part. But enough about that. This is what the talk is about: it's kind of a story about what happens when you write a query that goes off to a database. I assume that you're all probably using MySQL or something similar. If you see something here that you've seen pop up with another database before, please shout it out. So the idea is that you have a query and you send it off to the server. What happens next? Well, you're writing Structured Query Language. It's been around since the 70s. The original idea behind SQL was to manage data storage efficiently. For any gray hairs in here who are probably my age or older: it used to be that you had to allocate everything, because disk space was hideously expensive and very slow. So you handed your data over to the data-architecture folks, who worked out the best way to carve up the data to store it as efficiently as possible. Store things in relations: if you have customer information, put that in one table; if you have product information, put that in another table. The idea was to break things up efficiently, with no duplication. Back when disk drives were the size of washing machines or small cars and very expensive, you didn't want to have the same thing around twice. Now, SQL has been around, like I said, for decades, and it's based on set theory and relational calculus. Say the word calculus and folks bolt for the door. The big trouble here is you have to normalize data.
In the past 15 years, we've been pushing object-oriented programming, not top-down design. Part of the thing with top-down design is you architect your data. You normalize it; you split it up into component pieces. You'll have a street address, a city, a country, and a postal code — break it up that way. We don't teach that anymore. So roughly 2% of you have had any formal training in Structured Query Language or set theory, yet you're all still using databases. There's a mismatch there. So, here's a very simple query. For this talk, I'm using the MySQL world database. It's what we've been using for twenty-some-odd years in all our documentation and all our training classes, and we'll look at its data on cities and countries. It's very dated now, but we've been using it for so long it's kind of folksy, and it works well for us. So we're gonna write a simple query. We're gonna go out and get the name of the city and the name of the country. We're gonna get the name of the city from a table called city, and the name of the country from a table called country. Someone previously architected this so that country.Code matches city.CountryCode. If you wanna see the hard way to discover that, I can show you later, but take my word for it: that's the way you match up these two tables. By the way, if you don't match up the country codes, you get everything joined to everything, and it's real messy. I've seen knowledgeable developers do that. I've done it. So, as I said, someone previously architected everything so that we know country.Code matches city.CountryCode, and that's our join. This, ironically, is the way I think a lot of you picture databases: a big hulking server somewhere. Here's some PHP code, like my old home page. I was a webmaster for the American Heart Association and I was looking for a templating engine. Rasmus Lerdorf put PHP out there and it did exactly what I wanted, and then it started working with a database called mSQL.
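To make that join concrete, here's a minimal sketch using Python's sqlite3 with a tiny invented stand-in for the world database (the real tables have many more columns and rows); the join condition is the same city.CountryCode = country.Code match described above, and the SQL is essentially what you'd send to MySQL:

```python
import sqlite3

# Tiny stand-in for the MySQL "world" sample database (schema simplified,
# rows invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE country (Code TEXT PRIMARY KEY, Name TEXT);
CREATE TABLE city (ID INTEGER PRIMARY KEY, Name TEXT, CountryCode TEXT);
INSERT INTO country VALUES ('CAN', 'Canada'), ('MEX', 'Mexico');
INSERT INTO city VALUES (1, 'Toronto', 'CAN'), (2, 'Guadalajara', 'MEX');
""")

# The join condition city.CountryCode = country.Code is what ties the two
# tables together; leave it out and you get everything joined to everything.
rows = conn.execute("""
    SELECT city.Name, country.Name
    FROM city
    JOIN country ON city.CountryCode = country.Code
    ORDER BY city.Name
""").fetchall()
print(rows)  # [('Guadalajara', 'Mexico'), ('Toronto', 'Canada')]
```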
And then soon after that, it started working with MySQL, and that was kind of my start. So we have a couple of queries here: a MySQLi query, and then a PDO query. By the way, if you're using the old mysql_ (underscore) calls in your code anywhere: those have been deprecated, please don't do that. There are some security and performance issues in there that you don't wanna know about. By the way, MySQL pays for the development of all this stuff. We have two developers working full-time on it; PDO doesn't have anyone dedicated to it. Everyone else moved off the old mysql_ underscore calls a long time ago. So, you send off your query, and the first thing that happens is authentication between you and the server. Well, authentication's a little promiscuous. For those of you working in deep security, you'll find it rather appalling, but it's the way it's been for 20 years. Your application sends something over the network or a socket to the server. MySQL takes a look at it, and the first thing it's gonna ask is: is the system you're talking from allowed to talk to the server? Are you on the white list, or are things wildcarded? Are you using a valid account and password or authentication string? Also, some accounts have limits — the number of queries you can do per hour, the number of connections you can make per hour — so it's gonna check to make sure that you're okay there. And then, number three: do you have permission to access the data you seek? Can you get to the table, the column, or even the database that you want? So let's say that your host is okay, your login's okay, and your permissions are good to go. You're in. Next, the server does the hard work. Did you send something that was syntactically correct? Does it have the right format? Did you pick a database? Then the optimizer starts looking at your query. What is the person asking for? What columns do they need? What databases do they need?
It's gonna also recheck some permissions to make sure that you have the right access, and it's gonna lay out the recipe of what you want and how to assemble the results. Then it builds something called a query plan. Any Oracle DBAs or Oracle shop people in here? One — wow. Oracle query plans are a little different from what MySQL does, but the basic idea is the same: the optimizer searches for the best way, the cheapest way, the easiest way to get the data. Now, Oracle pioneered the cost model and just about every other database has followed. The cost model is based on the most expensive thing in the chain, which used to be reading something off a disk. Reading something off a disk is 100,000 times slower than reading it out of memory. To put 100,000 in perspective: if this gentleman here started doing backflips, one a second, we could leave, come back tomorrow, and still catch the last couple hours of them — at one a second, 100,000 takes over a day. Right now, this is changing. Hardware is getting a lot faster. Flash memory storage devices are blindingly fast, and all the database vendors are trying to figure out how to take advantage of that. So if you have something that lives on spinning disk, on solid state disk, and in memory storage: which is the easiest and fastest place to get it from? We have to mix things back and forth. How do we arrange it so that we send the query off to the fastest hardware first? But for right now, just know that the optimizer is looking for the cheapest way to get the data. If you're interested in this, in later versions of MySQL 5.7 and in 8, you can actually look into the mysql system tables and see the server_cost and engine_cost tables for details. So, where does cost come from? Well, the optimizer keeps statistics on previous fetches of data.
That should be a warning to some of you: if you've just rebooted a MySQL database, or any other database, and it hasn't been running for a while, and you try to do a benchmark, your benchmark's gonna be thrown off, because the optimizer doesn't have enough statistics to know how best to go after the data. If you're benchmarking, what I recommend is you set up something like 100,000 or 200,000 known queries of good data that you have, run those through before you benchmark, and make sure the optimizer and everything else is warmed up. Now, the statistics are based on what it had to do on similar queries in the past to get your data. By the way, with MySQL you can save these statistics between reboots, but that's beyond this talk. You've got to remember that it's based on a guesstimate of what it will take to get your current data. For those of you who are really interested, this is an optimizer trace. This is where the optimizer goes through and starts evaluating the various options. This is our query from earlier. It knows it has two tables within the same database it has to go out to, and it has some statistics. I know this is hard to read, but it has some costs — what it knows it's gonna take to go out and get this information. For example, this is a blow-up of it. MySQL mainly does nested loop joins. We do have an index we can use to go out and get this data; I'll talk a little more about that. And it has some statistics that come back, and in this case it tells us that part of the query is gonna have to read 239 rows, and that's 100% of that table. Now, every time you add a table to a query, that adds another factor to the complexity the optimizer has to consider. There are some mathematical shortcuts it can take within one table, but if it's another table, you're increasing everything by another factor in that factorial.
So for those of you who go out and just grab everything you think you might possibly need, rather than exactly what you do need: you are costing yourself speed and time. Now, some databases allow you, once you figure out what the query plan should be, to lock it down. So as long as your data never changes, everything is great — it knows exactly the roadmap to get there. It's kind of like burning a path into your GPS so that every time it uses the same path over and over again. MySQL doesn't do this; it wants to re-optimize every time. How many of you have run EXPLAIN on a query before? Anyone? Wow, great. For those of you who haven't: what you do is prepend the word EXPLAIN in front of a query. From MySQL 5.6 on, you can do it to just about any query. Before that, you had to do it to a SELECT query, which meant if you were doing a DELETE, you'd just change the word DELETE to SELECT and put EXPLAIN in front of it. And it comes back and tells you what's going on. In this case, we're going to be looking at the country and city tables. We have a primary key that we can use on that country table, and we know the country code matches that key. We're gonna have to read 239 rows out of one table, and a number of rows out of the other, to get everything that we want. If you run MySQL Workbench, which is our second most popular download, it'll actually draw this for you in a nice pictorial format, which I think is a little bit easier to read. Once again, we have the country table. If you see something in red, you can mouse over it and it'll give you information on how to make that better — full table scans are usually considered bad; we'll talk about that a little more. And here we're gonna do a key lookup for every city row we read, in a nested loop join. I think this is a little bit easier to read than the tabular output, but I'm a dinosaur and I grew up with the first one.
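If you want to play with explain plans without a MySQL server handy, SQLite has an equivalent called EXPLAIN QUERY PLAN, sketched here via Python's sqlite3. The output columns differ from MySQL's EXPLAIN, but the idea is the same: ask the optimizer how it intends to fetch the data. The tables are the same invented stand-ins for the world database used earlier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE country (Code TEXT PRIMARY KEY, Name TEXT);
CREATE TABLE city (ID INTEGER PRIMARY KEY, Name TEXT, CountryCode TEXT);
""")

# MySQL: EXPLAIN SELECT ...;  SQLite's equivalent is EXPLAIN QUERY PLAN.
# Each plan row's last field is a human-readable description, e.g. a full
# scan of one table and an index lookup ("SEARCH") into the other.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT city.Name, country.Name
    FROM city JOIN country ON city.CountryCode = country.Code
""").fetchall()
for row in plan:
    print(row[-1])
```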
Now in MySQL, since 5.7, if you have a query that's hanging or running slow, or you just want to find out what's going on with it, you can actually run EXPLAIN on another connection. You get onto the server, look at the process list, and find the one you want. Then you can do EXPLAIN FOR CONNECTION with that connection number to see the explain output for it. That's been very helpful for a lot of us who go into customer sites and find: oh, this query is running slow. Well, why? We look at it and find out it's not using indexes, or it's doing things in kind of an odd way, or the optimizer's ignoring the indexes. Indexes, for those of you who haven't run into them before, let you go right to the record or records you seek. The example I give all the time, and I'll repeat it here: imagine you're looking for the plural of a word in a dictionary. Unfortunately, your dictionary was not put together very well — all the pages are thrown in randomly, and there are duplicates, and you're not quite sure where they are, but they're not gonna be next to the one you're searching for. So if you're looking for the plural of a word like moose, you have to start at the first line of the first page and read to the last line of the last page. That's called a full table scan in database terms, and it's usually very slow. Now, sometimes you want to do that, because you're going through all the customers doing the quarterly billing, so you have to do a full table scan of that table. But most of the time with indexes, you're trying to do a quick in-and-out to get your information. By the way, a big hint for a lot of you: compound indexes. When I call my doctor, they want my last name, date of birth, and the street I live on — a three-part key to look me up. The example I have here is a multiple-column index on year, month, day. With that, you can search on year-month-day, year-month, or just year. It doesn't work for day alone, but all those leftmost prefixes can use that index.
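The leftmost-prefix rule can be checked directly by asking the planner which predicates get to use a compound index. This sketch uses sqlite3 with an invented events table and index (MySQL's optimizer applies the same leftmost-prefix rule to its compound indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (year INT, month INT, day INT, note TEXT)")
conn.execute("CREATE INDEX idx_ymd ON events (year, month, day)")

def uses_index(where_clause):
    # Ask the planner whether it picks idx_ymd for this WHERE clause.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM events WHERE " + where_clause
    ).fetchall()
    return any("idx_ymd" in row[-1] for row in plan)

print(uses_index("year = 2016"))                 # leftmost prefix: True
print(uses_index("year = 2016 AND month = 10"))  # longer prefix: True
print(uses_index("day = 15"))                    # skips the prefix: False
```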
Now, the optimizer tries to use indexes as much as possible — they speed things up a lot. Okay, so the optimizer's done the query plan, it's gone out and got your data, and it's gonna send it back to your application. That is the very simple 20-minute version of how your query gets to the server and the answer gets back to you. And there's a copy of the information that comes back from our query. That's the way a query's supposed to run. Now, let's talk a little bit about common problems. The N+1 problem. This is usually bad programming practice; sometimes your ORM forces it on you — turn on prefetching in your ORM if you can. The way you avoid this is by thinking in sets, and letting the database do the heavy lifting. With N+1, I see a lot of cases that are complicated enough that you don't realize it's an N+1 problem until you dig in three or four queries deep and realize it could have been one big query instead of a bunch of small fussy queries. Because every time you make a query, all that overhead happens — are they allowed to log in? Do they have access? Da, da, da — and then it returns the data. If you're chaining queries, that slows you down. Let's say you need a ride to work tomorrow. So you go to your employee database and you look up the employees who live near you. Then, from those employees who live near you, you find the ones who have a parking permit, which would indicate they have a car or some sort of vehicle. And then you get their information and give them a phone call so they can give you a ride. A lot of applications are coded that way: if, then, if, then, if, then. Whereas databases like one dive into the database and out you get — each database access has a cost. So this is a case where, instead of three or four queries, you're probably better off doing it in one big query: find the folks who live near me and have a parking permit and have a phone number. Okay, quick quiz for y'all. Which of these two is better?
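The ride-to-work story can be sketched both ways: the if-then-if-then chain of queries versus one dive into the database. This uses sqlite3 with invented employees and permits tables; both versions return the same answer, but the first makes a round trip per neighbour:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, phone TEXT,
                        lives_nearby INT);
CREATE TABLE permits (employee_id INT, kind TEXT);
INSERT INTO employees VALUES (1, 'Ana', '555-0101', 1),
                             (2, 'Bo',  '555-0102', 1),
                             (3, 'Cy',  '555-0103', 0);
INSERT INTO permits VALUES (1, 'parking'), (3, 'parking');
""")

# N+1 style: one query for the neighbours, then one more query per neighbour.
neighbours = conn.execute(
    "SELECT id FROM employees WHERE lives_nearby = 1").fetchall()
rides_n_plus_1 = []
for (emp_id,) in neighbours:  # each iteration is another round trip
    row = conn.execute("""
        SELECT e.name, e.phone FROM employees e
        JOIN permits p ON p.employee_id = e.id
        WHERE e.id = ? AND p.kind = 'parking'""", (emp_id,)).fetchone()
    if row:
        rides_n_plus_1.append(row)

# One big query instead: same answer, one database access.
rides_joined = conn.execute("""
    SELECT e.name, e.phone FROM employees e
    JOIN permits p ON p.employee_id = e.id
    WHERE e.lives_nearby = 1 AND p.kind = 'parking'""").fetchall()

print(rides_n_plus_1)  # [('Ana', '555-0101')]
print(rides_joined)    # [('Ana', '555-0101')]
```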
Your boss comes to you and says, I wanna give everyone in sales a 20% raise. So would you rather do that in PHP code, on the left, or in a transaction, on the right? By the way — transactions: START TRANSACTION ... COMMIT. Now, the great thing about the one on the right is that all the records for folks who match the criteria get done at one time. It's atomic — boom, done. The trouble with the one on the left is that halfway through running it, your boss comes in and says, oh, did I say 20%? I meant 2%. And then you gotta figure out: okay, which records have been updated and which ones haven't? Do I go back and put the old values back, or do I just do the ones I haven't done and then go back and fix the others? It gets real messy with the one on the left. The one on the right shows the power of transactions. If you're making adjustments to a whole bunch of records and you wanna make sure they're all done at the same time or not done at all, this is the way you do it. Okay, another quiz. Which one of these two queries is gonna run faster? The one on the right only differs in that it asks for just the first five records to come back — the first five city names. Now, normally you see this where you have a whole bunch of ORDER BYs and sorts and a whole bunch of other stuff on the right-hand side, but you only want five records. This is a trick question. As far as the optimizer knows, both take the same amount of work. Yes, sir? That's a good answer — having the results stored in a temporary table to take the top five does happen in a lot of databases. Yes, sir? That's the right answer: the optimizer will run both queries the same way, and then on the right-hand one, throw away everything after the first five. So, bravo on the answer. The same amount of work has to be done by the optimizer in either case.
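The raise example can be sketched like this, with sqlite3 standing in for MySQL: the whole adjustment is one atomic UPDATE inside a transaction (MySQL's START TRANSACTION/COMMIT corresponds to the `with conn:` block here; the staff table and salaries are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                 [("Ana", "sales", 50000),
                  ("Bo", "sales", 60000),
                  ("Cy", "support", 55000)])

# One atomic statement inside a transaction: every matching row is updated
# together, or (on error/rollback) none of them are.
with conn:  # BEGIN ... COMMIT (ROLLBACK on exception)
    conn.execute(
        "UPDATE staff SET salary = salary * 120 / 100 WHERE dept = 'sales'")

salaries = conn.execute(
    "SELECT name, salary FROM staff ORDER BY name").fetchall()
print(salaries)  # [('Ana', 60000), ('Bo', 72000), ('Cy', 55000)]
```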
So if you're sitting there looking at a query and wondering: gee, why is this running so slow? I only want five records. The last time I looked there were a thousand records in the table and it ran quickly. You look again, and suddenly you have a million records in the table. If you're using LIMIT and wondering why things run slow, that's why: it has to do all the work, especially if you're doing complex sorting, because it doesn't know which ones are the top five until the very end. Here's another one: which runs faster? We're getting, again, city names — we're aliasing the city name as City and the country name as Country, from the same tables as before — but here we're ordering by population, and here we're grouping by country name. Which one do you think's gonna run faster? The proper answer is: you honestly don't know. You have to see how your data's architected. In this case, on the left, the population field is not indexed, which means it's going to do all the other work, then go through and read all the population values, sort them, put them in a temporary table, and output it. Now, in this case country.Name is also not indexed. But, ironically, in this set of data you're playing with, the rows just happen to be stored in alphabetical order, so you get a fractionally better result. The trouble is, as a programmer, unless you know what the data looks like underneath, you honestly can't tell. By the way, how many of you have been programming for less than a year? Okay, I see a bunch of hands. You know how, when you open up someone's code — you pull in something with Composer and you're looking at it — you get an intuitive gut feel that there's something wrong with the code? It just doesn't look right.
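The LIMIT point can be demonstrated: the five rows that come back are exactly the head of the fully sorted result, which is why the server still has to sort everything before it knows which five are "first". A sqlite3 sketch with an invented city table (unique populations so the ordering is deterministic):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INT)")
random.seed(7)
pops = random.sample(range(10**6), 1000)  # 1000 distinct populations
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [(f"city{i:04d}", p) for i, p in enumerate(pops)])

# LIMIT trims the result set, not the work: the whole table still has to be
# sorted before the server knows which five rows are the top five.
full = conn.execute(
    "SELECT name FROM city ORDER BY population DESC").fetchall()
top5 = conn.execute(
    "SELECT name FROM city ORDER BY population DESC LIMIT 5").fetchall()

print(top5 == full[:5])  # True
```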
Especially since Composer came along, where you pull down people's libraries: you look at them and you go, no, there's something wrong here. You get a gut feeling about it, and you can tell if it's crap code or not almost instantaneously. Database queries — you can't do that. You can't tell if they're good or bad by looking. You have to look at the data, you have to run EXPLAIN on them, and you have to see how they work. There's no intuitive way to tell whether it's a good query or a bad query. Yeah, I've seen some horrendous queries. I've also seen horrendous queries that actually work, which always scares me, because it takes a while to figure out what they're doing. By the way, about two years ago a gentleman came up to me and said, I have a wonderful new library for PHP. It does all the statistical functions. And I said, great, I have a stats background — what do you have in there? Well: min, max, average, and standard deviation. Okay. So in your little application that's running away in a container, you pull down two million records and start trying to do an average. What does it do? Well, he was computing it very basically: adding up all the numbers and then dividing by the number of rows. And then I showed him that databases usually have all the standard statistical functions built in, plus they're usually running on bigger boxes and they're better at crunching numbers. So if you're doing statistical analysis — at least anything up to probably finite element analysis — do the crunching in the database. Okay, you've gotten through most of the presentation. It's after lunch, the food is settling, and you're going: well, that's great, but I really don't care about doing all that. I really don't want to do SQL. I don't want to explain queries. I just want to be able to throw my stuff in a database.
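The min/max/average story in one sqlite3 sketch: a single aggregate query instead of hauling every row into the application to crunch. The readings table and its values are invented; MySQL has the same MIN/MAX/AVG/COUNT aggregates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (value REAL)")
conn.executemany("INSERT INTO readings VALUES (?)",
                 [(v,) for v in (2.0, 4.0, 6.0, 8.0)])

# Let the database crunch the numbers instead of pulling every row into the
# application just to average them.
mn, mx, avg, n = conn.execute(
    "SELECT MIN(value), MAX(value), AVG(value), COUNT(*) FROM readings"
).fetchone()
print(mn, mx, avg, n)  # 2.0 8.0 5.0 4
```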
Well, MySQL and a lot of other relational databases — Postgres and SQL Server now too — have JSON data types, which means you can put an entire valid JSON document into a column. If your data doesn't really have a schema, it's great for that sort of stuff. Well, it works great, but it's kind of like your teenager throwing everything in the middle of their bedroom, saying: but Dad, I know where everything is — why do I have to put things away? I can get to it instantly. It doesn't scale, it's messy, and it pisses off Mom. Also, there's no rigor applied to your data. Programmer A is putting in first name, last name, email, and postal address; Programmer B is putting in first name, height, weight, and astrological sign. There's no rigor to the data — no way to make sure that you're getting all the information you need for your records. Also, we announced quite recently, with MySQL 5.7.12, a new protocol. This lets you use MySQL as a document store. So if you just want to connect to the database, do CRUD, drop off your records, and do whatever you want with them, you don't need knowledge of SQL. Currently we support JavaScript (Node.js), Python, and Java; PHP is coming — I was bugging folks last week and all they'll tell me is it's coming. It works very well, but once you get past, say, a few hundred thousand records, you're probably gonna bring someone in to do some DBA work, and they're gonna architect the data: they'll start drawing fields out of the JSON columns and turn them into materialized columns in the database, so you can use traditional database techniques on them for speed. Last-minute hints. The first step in great performance is data normalization. There are some wonderful courses and books out there on data normalization. Do yourself a favor: buy one off Amazon, keep it in your bathroom, and when you have a couple spare minutes you can start reading through it.
It's kind of an osmosis-and-reverse-osmosis process, but it's a great way to learn how to normalize your data, and it pays off. The very high-end relational calculus is very scary, and you don't need to go that far, but there are mathematical reasons why you wanna normalize your data. Indexes: indexes are great for speedy searches, but they take overhead. Every insert: insert the record, change the index. Delete a record: change the index. Also, I tend to see that a lot of folks add more indexes than they need. There are tools out there — the Percona Toolkit, or the MySQL sys schema — that will actually go out and show you indexes that are redundant and indexes that aren't being used. By the way, run the unused-index check after the database has been running for a week. If you do it right after a reboot, none of the indexes look like they're being used, and you're gonna delete too many. Heavy lifting: like I mentioned earlier, let the database do the heavy stuff — transactions, all the statistical functions, moving big chunks of data. Think of your applications like little remoras; the database is the shark. You're just chewing off little bits of stuff that the database drops off to you. Disk drives: do not go cheap on disk drives. In the States, it was fairly common about three years ago for people to go down to the local Fry's or Best Buy because they were selling terabyte disk drives for 50 bucks. Great — well, those were consumer-grade disk drives that weren't designed to run 24/7, 365. So I had a whole bunch of friends who, after six months, started noticing their new terabyte drives were cratering. Also, solid-state disks pay for themselves rather quickly. The slow query log: for those of you who are really interested in what's going on with your server, turn on the slow query log and peruse it every morning with your coffee. Now, some queries are in the slow query log simply because they're doing a lot of work and take more than a second to run — you can configure that threshold, by the way.
But start looking for the queries that take a while to run. If you can shave 50% off a query's time, that takes that much load off the box. Slow queries are usually the easiest way to gain performance on a box. The sys schema: the sys schema now comes by default with MySQL 5.7; it was an option in 5.6. This is a bunch of views and stored routines that let you peer into the heart of your server. In the past, MySQL was criticized because we really couldn't give you a granular view of what's going on at the low level. The sys schema was written by our support engineers to answer questions: who's hogging my I/O? Which indexes aren't being used? Who's using temp space? They're fairly easy to use and they're well documented. And if you go in through MySQL Workbench, you can turn this on and start seeing what's going on with your server. It gives you great views of the low-level stuff if you need to dig down. By the way — I'm sure you've seen a slide like this in every deck here. So, in 35 minutes, that was a brief overview of what happens to your queries. If you need the slides, they're on slideshare.net/davidmstokes — this is the most recent slide deck — and there's my Twitter handle and email. If you have any questions or any comments, you've got roughly 20 minutes more in the room, or you can email me. Question: what advantages does the JSON data type have? I have to repeat the questions because they're recording this to play back later — I'm sure that one guy will listen to it and go, woo, you repeated the questions. In the past, you could take JSON information and throw it into a VARCHAR or a CHAR, and you had all this wonderful JSON information in there. The only trouble was, if you wanted to search it, you ended up using regular expressions. Anyone here ever written a regular expression and been able to read it a month later and know exactly what it was gonna do? That's a superpower I don't think anyone has. I'm waiting for Marvel to come out with that one.
Regexes, ma'am — you can do that, but it's messy and it's not sexy. With the new JSON data type, we have 22 functions that let you insert, change, extract, and get meta-information — do things like search for how many instances of email tags there are in the data. It'll let you walk that tree. There's a little overhead, because we put some pointers into the column to get to the various tags. If you're really concerned about space, you can keep it as text, but for ease of functionality, the JSON data type is a wonderful thing. In the past year and a half, I've had roughly 20 folks who've taken big data sets off Mongo and other key-value stores and thrown them in there, and I haven't heard anything bad, so. I can show you the easy way with MySQL Workbench. You get into the query editor, and there's a button in there that will let you see the execution plan. From that, you can actually change to the tabular view. It takes a lot less time to do it than to describe it. From in there, you can get the explain output in tabular form, or the optimizer costs. When we get done, come up here and I'll show you — it's real simple. You have a question? Have I ever used Drupal? Yes — several years ago I was maintaining a Drupal 5 site that someone else had written, very badly. Well, this talk doesn't have a direct tie-in with Drupal that Acquia would love me to bless. The thing is, a lot of you are working on code that's just not Drupal; a lot of you also have other things you're trying to integrate with, or scrape out of the database. It's like how most people don't have to know how to change a tire on their car, because these days tires last 100,000 kilometers without blowouts. In my dad's day, if you got 15,000 miles out of a set of tires in the States, you were lucky, and you were changing them all the time. Well, put it this way: next year, when you're at a mastermind session and someone asks what an optimizer does with a database, you'll be able to answer.
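The regex-versus-JSON-functions point can be sketched with sqlite3, whose JSON1 functions stand in for MySQL's (json_extract for JSON_EXTRACT, json_array_length for JSON_LENGTH); this assumes your SQLite build includes JSON1, and the document contents are invented:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, ?)",
             (json.dumps({"name": "Ana",
                          "emails": ["a@example.com", "ana@example.org"]}),))

# Dedicated JSON functions walk the document tree by path, instead of
# attacking the serialized text with regular expressions.
name = conn.execute(
    "SELECT json_extract(body, '$.name') FROM docs WHERE id = 1"
).fetchone()[0]
n_emails = conn.execute(
    "SELECT json_array_length(body, '$.emails') FROM docs WHERE id = 1"
).fetchone()[0]
print(name, n_emails)  # Ana 2
```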
The optimizer trace is probably the best one. You can debug through it, though not easily. There are ways to use GDB and other debuggers on MySQL. It's kind of tricky, but well documented. We also have, if you're a paying customer with support, a query tool that will actually go out and break things down to the lowest level, but you have to be a supported customer. Yes, sir. The other thing that pops up from time to time, though we don't do it too often, is we'll suddenly have a new reserved word. With MySQL 8, ROLE becomes a reserved word. How many of you have the word "role" buried in a table somewhere as a column name? Anyone? Yes, sir. Ironically, in the old days, the Postgres folks would tease me that we didn't do subqueries very well, and I'd tease them that they didn't do joins very well. Luckily, both databases have made major strides on that. With MySQL 5.6 and 5.7, subqueries are a lot better. A subquery, for those who don't know, is like a subroutine that spits out stuff. Unfortunately, in the past, the optimizer wasn't quite clever enough to realize what was being spit out and how to optimize that along with the bigger query. It's gotten a lot better in the past two releases. Yeah, we're going to have common table expressions with 8, which should make that a little bit easier. But I've also found that by writing everything as a join, it's easier for the next person who comes behind me to figure out what's going on. I had a problem with SQLite a couple of years ago, with CTEs and subqueries all mixed together, and I had a major migraine trying to figure out what was going on. Any other questions? Yes, sir. The query cache saves the results of a query and a hash of that query, so if the same query is run over and over again, it knows to just throw that result back out. We started turning that off in MySQL 5.6, and it's off in 5.7 by default, because it's single-threaded.
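If you want to check what the query cache is doing on your own server, a quick sketch (variable names as in MySQL 5.6/5.7; the cache is gone entirely in 8.0, and `query_cache_type` can only be changed at runtime if the cache was enabled at startup):

```sql
-- Is the query cache on, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';

-- Hit vs. insert counters: far more inserts than hits
-- means the cache is mostly single-threaded overhead.
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- Turn it off and give the memory back to the server.
SET GLOBAL query_cache_type = OFF;
SET GLOBAL query_cache_size = 0;
```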
If you have the same query running over and over again, it's better to put the result in something like a caching layer, memcached, Redis, or something like that, or in your application. Something like the score of a football game after the game's over: it's not going to change. Store it someplace where you can get to it easily. You don't want to keep querying the database, because every time you hit that database, it's a big performance hit. The other thing is, if you are using the query cache on a more recent version of MySQL, it makes more sense to turn it off and turn that memory over to the InnoDB buffer pool; you get much better performance, and it is multi-threaded. Are triggers good or bad? That depends. In the MySQL world, there's not a lot of business logic at the database layer. In the Oracle database world, the DB2 world, and the SQL Server world, they put a lot of business logic at that level. So you change a column from 17 to 32, you go back to it, and suddenly it's 19. Change it again, it goes back to 19. Then you figure out someone is running a trigger or a stored procedure on it to do some sort of calculation and produce the final value. So what I'm putting in there is not the final value. Well, I mean, yeah. Triggers, either before you make a change to a column or after you make the change, will run some sort of function. And that function can either rewrite the column it's playing with or write information to another column. Very valuable. With MySQL 5.7, you can stack them, so you can have multiple triggers per table. I used to use them heavily in a couple of applications I had, because they were great. We also now have generated columns, so if you know the sale price is this amount and the VAT is going to be this amount, you can calculate that automatically on the fly, without the trigger or the expense in your application. So triggers are wonderful if you like using them.
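The generated-column version of that VAT calculation might look like this (a sketch, assuming MySQL 5.7 or later; the table, column names, and the 20% rate are all made up for illustration):

```sql
CREATE TABLE sales (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    sale_price DECIMAL(10,2) NOT NULL,
    -- Computed on the fly from sale_price: no trigger, and no
    -- calculation code in the application.
    vat        DECIMAL(10,2) AS (sale_price * 0.20) VIRTUAL,
    total      DECIMAL(10,2) AS (sale_price * 1.20) VIRTUAL
);

INSERT INTO sales (sale_price) VALUES (100.00);
SELECT sale_price, vat, total FROM sales;
```

A `VIRTUAL` column is computed when read; declare it `STORED` instead if you want the value materialized, or indexed, at the cost of storage.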
If not, well, document them very well, because the next guy behind you may have no idea there's a trigger. Yes, sir. Well, in the query I was showing earlier, we want a city name. So it grabs the first record and says, oh, I'm supposed to chain this to the country table. In this case, I know the country code for this city, so I go over to the other table, use that as the index into the table, and pull back the name of the country. Yeah. But it can be the same table; self-joining is valid. And if you're really looking for a good book on that, there's a gentleman named C.J. Date who has a wonderful book that goes to probably too deep a level for most of us, but explains exactly the mathematics behind it, which, if you're not a math fan, you can skip over. And it talks about the mechanics behind it. By the way, if you're interested, we're doing great at Oracle in the seven years since Oracle took over MySQL. We've gone up five times in staffing. We're still hiring. So if you know anyone who wants to work from home and work with databases, we're looking for folks to make money for Uncle Larry. We're the number five product overall at Oracle. Oracle has like 10,000 products, so to be number five out of all of those is pretty amazing. And we just released MySQL 8 for testing. So if you want to be on the bleeding edge, you can download it today. And if there are no other questions, I'll be around for a while. I think we have to prep the room for the next group, but thank you all for coming out. Any other questions? No questions.