I even made a fun sticker of the big band and put it on the slide just to be a bit more original, because I gave this talk before, but it's revamped. So if you watched it before, this one is going to have some different stuff. I'm Gabriela Davila-Ferrara, but I prefer to go by Gabby because it's easier for everybody. I work at Google Cloud as a developer advocate, but I was actually a software engineer for 12 years before becoming a developer advocate. And if you want to learn more about me, just follow me on Twitter or my website — my DMs are open, so you don't need to follow me at all. It's up to you. So MySQL, as of this moment, is at version 8.0.15. I want to know from the audience: who's here running MySQL 8? Raise your hand. No one? Wow. Okay. That's going to be fun. 5.7? Okay. But are you using the JSON features? Okay. That's okay. There's a lot of cool stuff in MySQL 8.0.15. From what I see, they've been releasing new minor versions every three months, but I don't work at Oracle, so I don't know when the next one will be. This one is from February, and it has the following. First, a brief history — and Dave can correct me if I'm wrong. MySQL was created in 1995, and it was created to handle from 10 to 100 million rows, or under 100 megabytes per table. From the MyISAM era, when it was way faster than it needed to be at the time, it now supports terabyte-sized data. I have seen really, really big databases running MySQL, and not only the plain version but the forks that are around, too. For instance, the big ones that I saw had 10 terabytes. And sometimes someone wants to migrate 38 terabytes to your service, and you're like, yeah, we only support up to 10 — but that's how big it got. The current SQL standard is SQL:2016; if you don't know, they revise the SQL standard fairly often, every two or three years. But some things from SQL:2003 only just became available, like window functions and CTEs, common table expressions.
So it's kind of a hard balance. So what is awesome now? What is exciting now? Well, for me — and please take note — what would be awesome is partial indexes. If you don't know, that's a way to create an index with a condition on it. That would be awesome. And instead of SELECT *, maybe have an EXCEPT — that would be nice. Those are on my wish list, at least. But this is one of the new things: nondeterministic defaults for table creation. That doesn't mean anything if you don't know what's behind it — the default can be deterministic, and it can be not. What I'm trying to tell you, because, oh my God, it's finally there, is that you can have functions and expressions in the CREATE TABLE statement. Before, you could only have a default on TIMESTAMP or DATETIME with CURRENT_TIMESTAMP, for instance, or just a constant — like the number one for an integer, or a string for VARCHAR or CHAR. Now they finally support built-in functions — functions already on the database — being called as the default. And what that means, in the end, is that you have more power. You now have support for UUIDs. MySQL now tries to present itself as a document store too — that's why I asked whether you're using the JSON features — and UUID is a great part of that, because it allows more reliable ID generation than auto-increment. There was a bug with auto-increment that took like 10 years to be fixed. So that's one example where UUIDs would be more suitable. And this is an example of a table using UUID. Here you see UUID() being called. You could use it before, but not in the CREATE TABLE statement. Now you call UUID(), transform it to binary form, and store it as binary. This is just a recommendation — you could store it as a string; I don't recommend it, but store it in binary form as the primary key. So there's a demo that I did here showing how that works. I'm going to use my blog database. And I don't know if I have the table users. If I do, I'm going to drop it.
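For reference, the table from that slide comes out roughly like this — I'm reconstructing it from the talk, so the exact column names are my assumption:

```sql
-- UUID generated by default, stored compactly as BINARY(16).
CREATE TABLE users (
  id BINARY(16) NOT NULL DEFAULT (UUID_TO_BIN(UUID())) PRIMARY KEY,
  username VARCHAR(255) NOT NULL
);

-- The id can be omitted on insert, like with auto-increment.
INSERT INTO users (username) VALUES ('gabi');

-- Convert back to the readable UUID form when selecting.
SELECT BIN_TO_UUID(id) AS id, username FROM users;
```

Note that expression defaults like `(UUID_TO_BIN(UUID()))` must be wrapped in parentheses, unlike plain constant defaults.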
So this is the code that I showed on the slide. As I said before, it's a BINARY column, and it's NOT NULL — which means you can omit it on the INSERT. You can insert only the username, and it's going to generate a new UUID for you, like they do with auto-increment. So I'm going to create it. When I SELECT, it's going to be empty because I have no records in it. As I said before, you can omit the primary key here because it's automatically generated for you, and it's going to insert the UUID. But if you SELECT * FROM users now, you're going to see a weird thing, because it's in binary form, and you don't want that — you want the UUID. That's when this comes to life: you convert from binary back to UUID and show the username. And there it is. It looks the same, but if you look here, it's a bit different. I'm going to drop the table again, and I'm going to show you a use case for an expression. Why would you want that — not just for UUIDs, but for other things? Let's say every user registration on your website or application should have an expiration date, and you want it set to a year or two from the date the person registered. Usually you would keep checking that in the code: is the registration date less than or equal to today's date? And doing math with dates is kind of not nice. So there is a way to do that already when saving to the database. I'm going to create the table users. I'm going to copy this part here so I don't have to type it again. As I said before, you can have created_at with the default TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP. And now I'm going to add an expiration date that's not just CURRENT_TIMESTAMP. When you work with expressions, you need to put them between parentheses. I'm going to set it to CURRENT_TIMESTAMP plus an interval of one year — it could be any interval here. And I'm going to create the table.
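Put together, that expiration-date table looks roughly like this (column names are my assumption from the talk):

```sql
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  username VARCHAR(255) NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  -- Expression defaults must be wrapped in parentheses.
  expires_at DATETIME NOT NULL DEFAULT ((CURRENT_TIMESTAMP + INTERVAL 1 YEAR))
);
```

Any interval would work in place of `INTERVAL 1 YEAR`; the point is the expiration is computed once at insert time, in the database, not in application code.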
Oh, I think I need to drop it first. Let me see. It's created — I had already dropped it. So when I insert the data again, I just need to insert the users and select them. You're going to see that the expiration date is actually a year from the created_at date. There are multiple use cases; that's just one example I thought of. You could reference a previously declared column in the expression and use another expression as the default — like total equals price times quantity, for instance — right there in the definition. So that's one reason why you would want functions. However, if you have a custom function, it's not going to work, because only the built-in MySQL functions are allowed. Another thing that always boggled my mind was that user management was not easy. How many people here Google, every time they install MySQL, how to grant permission to the root user outside of localhost? And it ends up on Stack Overflow, right? So now you can create a DBA role — they have roles. With this user management, you can have reusable permissions and roles. I can create a role for DBA, and I can use it for myself, use it for a teammate, use it for anyone I want, without having to memorize that GRANT statement that's usually gigantic. And there's a password policy: you can set whether a new password can reuse a password used before, an expiration date, and rotation — how long it's valid for. So you can do real user management with that. There was a problem, though. On 5.7 now, if you do a GRANT — I don't know if it's already stopping you, but it gives you a warning — if you try to create a user using GRANT, it's going to say you need to use CREATE USER, not GRANT, because GRANT should only be for granting permissions, not creating users.
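A sketch of that role-based setup — the role name, host, and password here are hypothetical placeholders, not from the talk:

```sql
-- A role is a named, reusable bundle of privileges.
CREATE ROLE dba;
GRANT ALL ON *.* TO dba;

-- Users are created separately, then granted the role.
CREATE USER 'gabriela'@'%' IDENTIFIED BY 'hypothetical_password';
GRANT dba TO 'gabriela'@'%';

-- Make the role active automatically on login.
SET DEFAULT ROLE dba TO 'gabriela'@'%';
```

Changing the privileges on `dba` later changes them for everyone who holds the role, which is the whole point.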
And because GRANT is no longer creating new users, you don't need to run FLUSH PRIVILEGES every time you create a new user and grant them permissions — which saves you from Googling and going to Stack Overflow every time. So I have an example here. I want a read-only user. You create the role — in this case I call it read_only — and I apply the read_only role only to the app database. So anyone I create now who gets read_only can only read from the app database. That's an example of creating a user; that part didn't change much. You put the user — and by the way, gabriela at the wildcard percent sign and gabriela at localhost are two different users. They're not the same, so you need to pay attention to that. And instead of saying GRANT SELECT and so on, I just say GRANT read_only to gabriela at the wildcard, and it's done. If you change anything in the permissions, you don't need to change the user. One thing is tied to the other, making user management easier. There's new stuff under the hood, too. The default charset — that's really important. If you're using 5.7, you're probably using latin1 or utf8. There's a talk showing how to hack a database just using UTF-8. If you're using UTF-8, please — even on 5.7, which supports it — use utf8mb4 as your default, not latin1 or plain utf8, because it's not as secure. I saw this talk. It's really scary. I was like, boy, I didn't know you could do that to a database. One of the improvements means you can now have mathematical symbols inside the database, and emojis. Plus supplementary characters — which in this case means that if I had a weird language I just invented, with weird characters, I could support it. That's what it means. Oh, by the way, the charset is the configuration of how you store the data, and the collation is how you compare the data. So that's important too. The default collation also changed, to that thing that's a bit ugly, in my opinion.
But I'm going to break down what it means. I've been studying MySQL 8 since 2016, and at the time, the Unicode version — for emojis and stuff like that — was 9.0. They're on version 11 already, but MySQL 8 supports Unicode 9.0; that's why the 0900 in the name. And the ai_ci part: ai is accent insensitive — my last name has an accent, so it wouldn't matter if you compared it with or without one — and ci is case insensitive, so case doesn't matter either. But you need to pay attention: if it's case insensitive and you're storing a hashed password, you may want to change the collation of that password column to be case sensitive, because some hashes differ only between capital and lowercase letters. And there was a weird bug where the sushi emoji was equal to the beer emoji. There's a blog post by the MySQL server team, which I put in there. It was because of the collations they had — something was making those two compare as equal. And this person found out just because the sushi emoji and the beer emoji compared equal to each other. And if it compared those wrong, it would compare other characters wrong too. That's why collation is important. So if you go to MySQL 8, do not use general_ci or unicode_ci; use the default, which is this one, or a case-sensitive one. Other defaults and variables: replication — if you use it — and binary logging are enabled by default; you don't need to enable them anymore. There's a new plugin for authentication. If you try MySQL 8 with an older application, it may have trouble, because it uses a new authentication plugin based on SHA-2. There's a mandatory default value for TIMESTAMP columns that are NOT NULL: if you have a NOT NULL TIMESTAMP column, it's going to give you an error if you don't set a default. And there's a new variable for dedicated servers, for when you're running on a VM.
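To make the collation point concrete, here's a sketch: the table keeps the accent- and case-insensitive default, while the hash column overrides it with a case-sensitive collation. Table and column names are my assumption:

```sql
CREATE TABLE accounts (
  username VARCHAR(255) NOT NULL,
  -- Hashes can differ only by letter case, so compare this column case-sensitively.
  password_hash VARCHAR(255)
    CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_as_cs NOT NULL
) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
```

Under `utf8mb4_0900_ai_ci`, comparisons ignore accents and case; under `utf8mb4_0900_as_cs` they don't, which is what you want for hashes.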
When the only purpose of the VM is to be a database, and you don't know how to tune the database — which I don't — you just set that variable to ON and let it control the necessary settings for you. This is not an approach I would take in production, but for a small thing, a proof of concept, you can do it and let it decide what is good and what is not. Indexes made me happy. Although most people use InnoDB, there are other engines, like the federated engine, MyISAM, TokuDB, and MEMORY. MEMORY, for instance, allows you to create a hash index, and if you're creating a temporary table, you can also create a hash index. But normal tables are B-trees, and they're B-trees because B-trees have been on the market since the 1970s — they're efficient. And there was a problem, because they did not support descending indexes before — not even in 5.7. This is what it was: the syntax would let you write ascending or descending. You can try it now on your 5.6 or 5.5 installation, and it's not going to give you any errors; it's going to say it created the index descending. But there's a caveat when you look at the documentation: the keywords are reserved for future extensions of MySQL — currently they are parsed but ignored. The first time I discovered this, I got so angry — because I needed a date in descending order and it was not working — that I went to someone I know who works at Oracle and said, I need descending indexes, please put them in, you know? And not because of me, of course — they didn't do it because of me — but they finally did it in MySQL 8. And that's how it feels: freedom, finally. You could say it's a B-tree, you could traverse it in either direction and get the same result, but in MySQL it didn't work like that before. So, just teasing. That's the syntax now — it's the same syntax.
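The syntax, sketched — index and table names here are my own:

```sql
-- On MySQL 8, DESC in an index definition is honored, not parsed and ignored.
CREATE INDEX idx_username_desc ON users (username DESC);

-- Both directions can use it; the opposite direction shows up
-- in EXPLAIN as a backward index scan.
EXPLAIN SELECT username FROM users ORDER BY username DESC LIMIT 10;
```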
It's no longer ignored, and it's no longer forcibly created ascending — before, you would say descending, but it would be created ascending, which is not cool. So it actually works now. The cool thing is that even if you're not querying the index in a descending manner, like I did here, it's going to use the index anyway. It knows there's already an index on username that's descending, so it says, okay, I'm going to start from the other end and traverse it from there. That's one thing that brought a performance improvement to a lot of databases. So if you run a lot of descending queries, maybe just upgrading MySQL is going to gain you a lot of performance, because of that kind of indexing. Another one is invisible indexes. That one surprised me, because I had never heard of the concept before. I have a table of users — just username, created_at, and updated_at. And I do have — I forgot to put it in here, but I do have — an index on username, because I'm going to search by username. With invisible indexes, the index is not used by the optimizer: if you mark an index invisible, the optimizer does not consider it when planning the query. So you think, why the hell would I do that? If it's not going to be used, why would I make it invisible? Well, there are reasons. Every index is visible by default, and to create an invisible one, you just declare it INVISIBLE. One of the reasons is that dropping or creating an index is a really costly operation. It takes a while, depending on the size of your table. I had an 8-core machine with 52 gigabytes of RAM, and on a table with 50 million records it took two minutes to create an index. That's fast — but not everybody can afford a machine like that. The thing is, sometimes you create too many indexes, and one index starts to get in the way of another.
So if you can just disable it — mark it invisible — you can debug your query better. It's mostly for debugging purposes. Even if you create an index invisible, that doesn't mean it's not updated; it continues being maintained, it's just not used by the query planner. And you can toggle the visibility: if you created it invisible, or visible, and want to change that, you just do an ALTER TABLE ... ALTER INDEX and switch between VISIBLE and INVISIBLE. That's it. Here's an example of the query cost. I'm looking for a username called Fancy — they're all generated usernames, because of GDPR, so I just used adjectives as usernames. With the visible index, the cost was 0.98 and only one row: it found one row. With the index invisible — just flipping it — it did a full table scan over the five million users that I had, from toggling it on and off. This is a demonstration, because the cool thing is how fast it is to do. First, I'm going to search by my username. It found me. And — that's not the right database, sorry about that. If you look here, those are all the results, and it does use the index. And if I change to descending — because I created it ascending — it shows using index condition, the same index I created before, but a backward index scan. So it still uses it. Right now, the index is visible. If I toggle the visibility to invisible, it's a fast operation — already changed. And when I do the EXPLAIN of the same query, it shows type ALL, and the rows here are five million, just to do a LIKE. I'm not going to run it, because it would take forever, but that's the EXPLAIN, for those not used to it. So toggling it on and off helps you debug queries. Of course it's an oversimplified example, but if you have too many indexes, you may want to use this instead of dropping and recreating them. And you saw how fast it was.
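The toggle itself is a quick metadata change, something like this (index name assumed):

```sql
-- Hide the index from the optimizer without dropping it; it's still maintained.
ALTER TABLE users ALTER INDEX idx_username INVISIBLE;

-- The same query now plans a full table scan (type: ALL).
EXPLAIN SELECT * FROM users WHERE username = 'fancy';

-- Bring it back just as quickly.
ALTER TABLE users ALTER INDEX idx_username VISIBLE;
```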
And it was fast because MySQL used to have two algorithms for running ALTER TABLE and that kind of thing: one was in-place and the other was copy. You probably know copy very well, because every time you added a column, it would copy the whole table, rename it, and drop the old table. Now they have the instant algorithm. With instant, you can add columns without an in-place or copy operation, which means you can add a column to the table instantaneously, without waiting that long. But there are limitations. You need to be appending a column to the table. What does that mean? You can't say, I'm going to create a status column and add it AFTER username. If I do ADD ... AFTER username, it has to reshuffle the table, so it becomes a copy operation — not instantaneous. But if you're just adding at the end, just appending, and you don't care about the order of the columns, it's instantaneous. And the new column must not have a default value. It can be NOT NULL, but it cannot have a default value, because then, again, it would have to copy the whole table. Renaming a table, modifying columns, virtual columns: on 5.7 these were in-place operations, which is still fast, but now they're instant. And if you change the default on a column, it should also be instant, because of a feature called the data dictionary that's in MySQL 8. It changed the architecture of how things work under the hood: a lot of the changes you make now are changes to metadata — not to the table, not to the file itself. It's just a metadata change, which sometimes leads to interesting bugs. On MySQL 8.0.13 they found a bug: if you renamed a table that had foreign keys, you would lose consistency. The foreign keys would stop working just because you renamed the table. So they're still getting there — always keep your database up to date with the minor versions. And this is an example.
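The example is along these lines (assuming the orders table from the demo):

```sql
-- Appending a column with no default: eligible for the INSTANT algorithm.
ALTER TABLE orders ADD COLUMN total DECIMAL(10,2) NOT NULL, ALGORITHM=INSTANT;

-- By contrast, positioning the column mid-table forces a rebuild,
-- so insisting on INSTANT here would fail outright:
-- ALTER TABLE orders ADD COLUMN status INT AFTER user_id, ALGORITHM=INSTANT;
```

Forcing `ALGORITHM=INSTANT` explicitly, instead of letting the server pick, is a useful safety net: the statement errors out rather than silently falling back to a slow copy.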
I'm adding a column, total, to orders. It's a decimal column, and I'm forcing the algorithm to be INSTANT. It took 0.26 seconds, which is really fast, for a table that has 50 million records — and which took 7.8 seconds to count. So it was faster to add a column than to count how many rows are in the table. As you see, I did put NOT NULL, but I did not put a default value. So you can have NOT NULL there. Postgres had a problem, as far as I know, where a column needed to be nullable to be added instantly. MySQL can do it with NOT NULL, which is nice. Now, I want to stop for a minute. We usually use what we call OLTP databases, which MySQL and Postgres are — OLTP stands for online transaction processing. And then there's OLAP, for analytical queries — online analytical processing. MySQL added features that an OLAP database has — BigQuery, for instance, because it's columnar — into an OLTP database. That means you can now do analytical queries inside MySQL without having to build an ETL and ship the data to whatever columnar database you run analytics on. But you should not abuse it — that's why I said stop — because the performance is not as good as it would be on a purpose-built analytical system. So the first one is window functions. Window functions are able to analyze the rows of a given result set. If you use a window function, it does not change what's in the result; it looks at the result and gives you insight into what you have. It's like when you're filming: you're here watching me speak, and you point your camera to take a picture of me, and you have a frame. This room is the result set, but you can move the frame and see details of it. That's the same idea. Or you can just peek over it, as we say — peek over the result set through a window.
You can use them to enumerate rows, like ROW_NUMBER; count how many rows you have; or do aggregated sums. Say it's e-commerce: you have a cart, and the user bought a chair and bought a table, and you want to calculate how much they're spending — not the total of the bill, but for each item, how much is accumulating as each item gets added to the cart. You can do that with SUM. RANK: you rank the results based on any parameter you have. Or — the coolest, in my opinion — look at the row before or after the row you're on, which is LEAD and LAG. And I have an example for you. I have an orders table, and the data is generated, so the updated_at can be before or after the created_at — sorry about that. But I'm going to use this table, and this is the result that I get. I want to look at created_at for this user, user 654321, and I want to know when that user's previous order was. If you're doing PHP, you have an array, a result set, and you can look at the next item of the array, or the previous one, to get the answer. But you can also have this in the SQL query. So LAG means I'm getting the row before: I'm getting created_at, which is the column I'm interested in, OVER created_at — the same column — in ascending order, because I want the order before. And then LEAD, which is the order that comes after. So I'm saying: give me the created_at column at the row before me, and the created_at column at the row after me. It's going to look like this. Here it's NULL, because before the first order there is no earlier order, so it's NULL. The red one on the second line is actually the next order for the first one — so it's the lead. However, for the third line, it's the lag. It's a bit complicated when you first see it.
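The query being described is roughly this (column names assumed from the talk):

```sql
SELECT
  created_at,
  -- The row before the current one, in created_at order.
  LAG(created_at)  OVER (ORDER BY created_at) AS previous_order,
  -- The row after the current one.
  LEAD(created_at) OVER (ORDER BY created_at) AS next_order
FROM orders
WHERE user_id = 654321;
```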
But you can see, for instance, that you can do analytics on this: oh, how much time does it take for a user to order again on my website? You subtract one from the other, and you have the interval in days, or seconds, or years, whatever. So that's one use case. Previous order, next order. To break it down: the function itself, LAG; the column you're working on; how many rows before or after — the default is one row preceding; and the window. Now, we're developers. We hate repetition. We like DRY. And I love writing SQL, but I hate copying and pasting — there's been a lot of discussion about this lately, how verbose this looks. But there's one thing that helps, which is naming the windows. It's the same query: you declare a window, which I call dates, and I say what that window is. And when I use LAG and LEAD, I say I'm using OVER dates. That way, if I change my window, I don't have to change my SELECT again. And you can have multiple windows per query, too. I have a demonstration of that. So this is the query we have on the slide, and it was pretty fast to run — we saw there are 50 million orders there. And this is my problem: given three users, get me the top five orders of each user ID. A lot of people would do subqueries. What I do is this. Let's first rank it — let's rank the top orders of these users. I'm going to be lazy here and copy, because it's faster that way. I'm going to remove the LAG and the LEAD. Those are the users I already put in here that I'm interested in. So if I do this — wait a moment — those are all the orders for those users in my system. I limit to 10 — no, let me remove the limit. Thirty-nine. I want only five per user, so I should have 15, right? So let's rank it. I'm going to RANK, and when I rank, I need to say over what. That's my window. So I'm going to rank over user ID, partition by —
It's like a GROUP BY, but a bit different. PARTITION BY user_id and ORDER BY total, descending. And it gives me the ranked result for everything. Let me put it here — and to make it easier, I'm going to remove created_at and the order ID. So for this user, this is the most expensive order — you can see the price — then the second most expensive, the third, and so on. But I only want the top five. How do you do that? If you're writing PHP, you'd just take the top five of all the results. That's where common table expressions come in. Because remember, window functions do not change the result set — I cannot change this result. But I can build on top of it. So I'm going to build on top of this. I'm going to say this is my ranked_orders, and it's defined as the query we just created. And now I just need to filter it: filter from ranked_orders where the rank is less than or equal to five. So it should have 15 results. And as you can see: one, two, three, four, five; one, two, three, four, five; one, two, three, four, five. That's the power of window functions inside a common table expression, which I'm going to explain right now. For the sum, you could say: I want to know the accumulated spending that user had. If you do a SUM on total, OVER, it gives you the total — how much that user spent on everything. That's for this user — wait, OVER PARTITION BY user_id. So each user spent that amount of money across all their orders. But you can have a running aggregate if you ORDER BY. It's ugly, but this is how it looks: you say PARTITION BY user_id, because you're getting the cumulative spending for that user, ROWS UNBOUNDED PRECEDING — the rows that come before. So on the first row I spent a total of 200-something, and the next one 200-something, so the sum of those two rows is 409. And then the sum of the three rows is 532-something.
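Putting the named window, the CTE filter, and the running sum together, sketched — the user IDs here are placeholders for the three demo users:

```sql
-- Top five orders per user: rank inside a CTE, filter on top of it.
WITH ranked_orders AS (
  SELECT user_id, total,
         RANK() OVER w AS rank_order
  FROM orders
  WHERE user_id IN (1, 2, 3)   -- hypothetical IDs for the three users
  WINDOW w AS (PARTITION BY user_id ORDER BY total DESC)
)
SELECT * FROM ranked_orders
WHERE rank_order <= 5;

-- Running total per user: the ORDER BY plus frame makes the sum cumulative.
SELECT user_id, created_at, total,
       SUM(total) OVER (PARTITION BY user_id ORDER BY created_at
                        ROWS UNBOUNDED PRECEDING) AS running_total
FROM orders
WHERE user_id IN (1, 2, 3);
```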
So you could then create another table expression to see the difference in spending between one order and another. That's why we call them analytical queries. I showed you a common table expression, but I didn't explain what it is. It's similar to CREATE TEMPORARY TABLE, but you don't need the CREATE privilege — you just need the SELECT privilege to use it. You can reference other CTEs if they're already defined: I could create a CTE, use it in another CTE, and inner join them together, and have a beautiful query with all the analytics I want. It can be recursive, too. And it's easier to read than a subquery inside a subquery inside a query to get the results you want. I'm going to show you a bit of recursive CTEs, because they're useful for hierarchical data. Who here uses WordPress? Who here knows it? You have categories, then there are subcategories, and then there are sub-subcategories — that's hierarchical data. They're probably — I didn't look at the code — querying that hierarchy, that traversal, in the PHP code, not in the database. But you can do it in the database instead of in your code. So the base query comes first — all recursion starts with a simple case. Then a second query comes after a UNION statement, and the stop condition should be on the recursive call. Because there's a limit on how deep the recursion can go — by default I think it's 500 recursions; I increased it for this demo — you should put a WHERE there so you know what your breaking point is for that recursion. So I have the categories on my blog: animals and plants are the main categories, and all the other ones are children of those categories. That's an example of hierarchical data. But I want to traverse that tree. That's what I said I was going to try to do. Okay, where is mine? I'm not going to look at that. So I'm going to start here.
Like the other one, you start with a WITH, but you say RECURSIVE. You give it a name — I'm going to call it tree; I don't think it's really a tree, but I'll call it tree. And then, what are the fields of the table I'm creating? Because I am creating a table here. It's going to be depth_level, name, the path — because I want to know the path through the hierarchical data — and the node_id. Then you do the definition. Again, base case first. The base case is going to be depth_level equal to one. I'm going to use 'root' as my first name, but MySQL has a problem there: you cannot use TEXT as a recursive field, and a bare string literal gets inferred as CHAR. So it recognizes this as CHAR — CHAR(4) — and if I have a word like 'animal', which is longer than four characters, it would give me an error. So I need to CAST this as a CHAR that's bigger, which I'm going to make 255 here. And the same thing for the path, because the path can get big, and I'm going to use the biggest size I can here — this number; I'm not good with numbers. And what is my first node? It's zero, because plants and animals had a parent ID of zero. So that's my base case. If I do SELECT * FROM tree, I should get one result only, for now. Now comes the recursion. You do a UNION ALL, and then I select — I have my cheat sheet here, because sometimes I forget. You're going to select — I'm going to leave it blank for now — from tree; from categories, I'm sorry, with an inner join on tree, and I'll explain later. The first field is my depth level: for each recursion I'm going to increase the depth level by one, so it's depth_level plus one. The name is going to be the category's name, categories.name, because I want to show the name of the category. Now I want to show the path. I'm going to use a function MySQL has called CONCAT_WS — concat with separator. I don't know if you know it.
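For reference, the complete query being built up here comes out roughly like this — I'm reconstructing the table and column names:

```sql
WITH RECURSIVE tree (depth_level, name, path, node_id) AS (
  -- Base case: a synthetic root node. The CASTs widen the inferred CHAR
  -- type so longer category names fit on the recursive steps.
  SELECT 1,
         CAST('root' AS CHAR(255)),
         CAST('root' AS CHAR(1000)),
         0
  UNION ALL
  -- Recursive step: join each found node to its children.
  SELECT t.depth_level + 1,
         c.name,
         CONCAT_WS('/', t.path, c.name),
         c.id
  FROM categories c
  INNER JOIN tree t ON t.node_id = c.parent_category_id
)
SELECT * FROM tree ORDER BY path;
```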
CONCAT_WS means: concatenate the strings and put this character between them — because I want to keep building the path. So it's tree.path, plus the name I just got, which is categories.name. And at the end, I want the node ID, which is categories.id. However, my inner join is incomplete here. What am I inner joining the tree on? The tree's node_id being equal to the category's parent_category_id. And that should work, technically speaking. I hope. Let me organize it — okay. Okay, oh, wrong database. And it did work, but it's a bit hard to read. I'm going to make one improvement here: ORDER BY path, so the path is easier to see. And in the end, you have the traversed tree of all the categories in the table. I didn't need to put a stopping condition, because I only have like 11 rows, so I didn't reach the maximum — but you should have something like WHERE depth_level is less than or equal to five, for instance, and it's going to give you the same result. That's a use case for recursive CTEs: instead of using a script that keeps querying to get the stuff you need, you can do it in the database. That's going to be in the slides — I'll share them. This is the base case, as I said before, and this is the query that I wrote. Same query. And now, subqueries. I don't like subqueries, but they're useful sometimes. Who here did this? Everybody — no, come on, everybody did this at least once. You never put a subquery inside a SELECT? Never? Okay, I'll pretend that I believe you. So I have two subqueries here, and they're the same subquery — I'm just getting the id and the total from the orders table. There should be an easier way to do that, instead of getting it this way. There is. It's called LATERAL, and it's new in MySQL 8. LATERAL is like a for-each: it means, for each row in this result, please do this query.
A subquery in the SELECT can return only one column; with LATERAL, you can return as many columns as you want, because it's creating a lateral table next to your users — that's why it's called lateral. You see here, I call it total_orders, and I use its id and total here to show them. Performance-wise, the other one is better, for now — it depends on how many columns you're bringing. But on readability, in my opinion, this is better to read than the other one. And I'm just going to show — it's a lateral join. The thing is, in the query I'm going to show you, the use case is: I want the biggest order of each user. With a plain join, you'd have no control over that. You could use a GROUP BY to get the maximum order, that's right — but then I couldn't get the matching ID on the same row, because when you use an aggregate, it only guarantees that column; it doesn't guarantee the columns next to it. You'd have to do it twice. So that's one example. That's the same query. And what I want to show you is how they behave. I'm going to pin this result, because I want to compare it to the one using LATERAL, which is this one. It's not the same result, and there's a reason why. When you do the subquery here — this one is with the subquery — you're getting all the users and then looking for the orders each user has. If a user doesn't have any orders, it's going to return NULL, which makes sense. But the LATERAL version guarantees you're not going to get a user that doesn't have orders. It's like an inner join. That's why it's called a lateral join in this case.
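A sketch of the lateral join for that biggest-order use case — table and column names are my assumption:

```sql
-- For each user, fetch the single biggest order: id and total together,
-- which a plain aggregate couldn't guarantee on one row.
SELECT u.username, biggest.id, biggest.total
FROM users u,
LATERAL (
  SELECT o.id, o.total
  FROM orders o
  WHERE o.user_id = u.id      -- the derived table can reference u: that's LATERAL
  ORDER BY o.total DESC
  LIMIT 1
) AS biggest;
```

Because a user with no orders produces zero rows from the derived table, that user drops out entirely — the inner-join-like behavior described above.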