Well, hi, I'm Jeremy, Sequel's maintainer. Here's a quick talk about what Sequel is and how it can help you. First, what is Sequel? Sequel is a database toolkit. It's a collection of tools that allow you to interact with the database and choose the solution to your problems. Sequel is not a kitchen sink, but it takes little effort to build whatever type of kitchen sink you want with it. The toolkit versus kitchen sink approach is one thing that differs between Sequel and other Ruby database libraries, and it's one of the main differences. They say a picture is worth a thousand words, but in this case it's only three: Sequel, at its core, is SQL in Ruby. Now, to understand Sequel's purpose, we need to talk about evolution, specifically the evolution of database access in Ruby. In the beginning, Ruby had no database adapters. But after a while, people wanted to use Ruby to interact with SQL databases, so they wrote adapters specific to each database, such as the Ruby/Postgres adapter, originally authored by Matz himself. These adapters allowed programmers to use Ruby to increase their productivity, but they had a few shortcomings. For one, they were database specific, so code written for one database wouldn't work on another, for two reasons. The first reason is that the APIs were different. The second reason is that the SQL was not abstracted, and with differences in SQL syntax, SQL that works on one database might not work on another. Also, the database-specific adapters operated at a very low level and required the programmer to write all the SQL out by hand, which was error prone. And finally, the adapters offered little opportunity for abstraction, as they returned rows as arrays or hashes instead of objects, making it difficult to assign behavior to records. Now, in 2001, Ruby DBI was started, which gave programmers a standard database access interface. But while that helped with writing database-independent code, it still left lots of problems.
DBI didn't abstract the SQL itself, so programmers were still responsible for writing database-independent SQL. It still required the programmer to write all the SQL out by hand, which was error prone. And while it was more flexible, in that users could choose whether they wanted arrays, hashes, or row objects returned, it still didn't allow behavior to be easily assigned to records. Fast forward to 2004, when Rails was released with Active Record, which solved some of these problems. Active Record offered interface abstraction, allowing the same interface to work across databases. While it abstracted some parts of the SQL creation, programmers still had to write SQL fragments, which led to database-specific SQL issues. Now, by abstracting some parts of the SQL creation, Active Record cut down significantly on the verbosity inherent in previous approaches. And Active Record's best feature, in my opinion, is that it let programmers specify behavior on the rows, allowing the rows themselves to do things. This made for much nicer, more object-oriented code, as opposed to the procedural code that was previously common. Now, while Active Record made most things easier, it also came with strong opinions on how things should be designed, and it wasn't always amenable to disagreement. Now, in 2007, Sequel was created to solve these problems more completely. Sequel brought to the table more database independence by abstracting not just the interface, but many SQL syntax issues as well. With Sequel, the programmer doesn't even need to know SQL syntax, though they should still understand SQL concepts. Now, you might be wondering how database-dependent SQL issues pop up. If you've ever worked on an application that supports more than one database, you know it's more often than you'd first think. For example, take something simple, like concatenating strings. The SQL standard string concatenation operator is the double pipe.
Microsoft SQL Server uses the addition operator, and MySQL uses the concat function. The general approach to database independence when using a fragment-based library is basically to avoid string concatenation in the database altogether: select all the columns needed, and then do the concatenation in Ruby. The problem with this approach is that it's less straightforward, and it's not even always possible, since a filter may depend on the result of the string concatenation operation. Now, Sequel abstracts these SQL syntax issues and allows you to write efficient database-independent code. Sequel code is generally very concise, more than Active Record in most cases, and it's still easily readable. And Sequel gives the programmer more control by making the decision about whether to assign behavior to records optional. There are many cases where you don't want to assign behavior to records, and reporting is probably the best example. Now, Sequel does have some opinions about how to do things, but it tries to make it easy to disagree. Matz said that Ruby should be like clay in a child's hands, and Sequel reflects this philosophy: it tries to be flexible so that you can mold it to suit your needs. Now that you know a little about the reason for Sequel's creation, you may be asking yourself, why Sequel? You may already be using one of the other libraries discussed, it may get the job done, and there's a natural human desire to resist change. What does Sequel bring to the table, and why should you consider it? First, Sequel is simple. It's simple to learn, and it's simple to use. It attempts to be as simple as possible, but no simpler. Second, Sequel is flexible. As I mentioned, it has opinions; however, it doesn't have dogma. Most opinions are easy to override at a granular level. Now, Sequel's toolkit approach allows you to pick which tools to use to solve your problems.
And Sequel's design allows you to use its tools to build more tools, which can be as specific or as general as you need them to be. The ability to build your own tools is part of what makes Sequel powerful. The other part is that Sequel's toolkit comes with some power tools built in. Now, in a day and age where other ORMs are looking like poster children for the obesity epidemic, Sequel was raised on a healthy diet and yoga: it takes less than half the memory of Active Record, and starts up about twice as fast. Now, do you feel depressed when you file a bug report with the other guys, and it just sits festering in their bug tracker? Does it seem like they have a we-know-better approach to everything? How would you feel if you took your car to a repair shop and told them exactly what was wrong, and they told you to find three other people having the problem before they'd even consider fixing it? Wouldn't you rather get a response quickly from someone who probably knows how to fix it, and may even fix it for you? And what about suggesting improvements? Maybe you have a great idea, but you aren't sure how to implement it. The other guys will tell you to come back after completing it yourself. With Sequel, you'll have someone who may implement it for you, or at least work with you to help you achieve your goals. And if you decide you want to implement it yourself, wouldn't you like to work on a code base that's easy to follow and designed specifically to be easy to modify, instead of code where optimization appears to be the main design objective? That said, I should point out that while Sequel is not focused on performance, that doesn't mean it isn't competitive performance-wise with the other guys. And finally, and I think perhaps most importantly, you should use Sequel for the same reason you use Ruby: because it's more fun, or at least it's less painful. Now, by now, I'm sure some of you are thinking, talk is cheap, show me some code and let me judge for myself. So, let's drop the buzzword crap, and I'll show you some code.
Now, a good measure of the complexity of a piece of software is the number of steps you have to take before you can start using it. With my library, using Sequel for the first time is like setting out on a long journey: it begins with only a single step. That step is creating a database object, and there are multiple ways to do this. One common way is using the connect method. Sequel also provides a method for each adapter type, so if you are using SQLite, you can just call the sqlite method with the file name of the database. And that's it. Once you have your database object, you can immediately use it to get your results. Now, Sequel does not force you to create models if they don't help your application. If you aren't using models, Sequel will return rows as hashes with symbol keys. And you can actually configure Sequel to return any type of object of your choosing. Now, someone asked a question on the Sequel mailing list a while back: they had a database with thousands of tables sharing the same schema. With Active Record, they'd have used metaprogramming to create thousands of model classes, one for each table. With Sequel, they could access the tables directly, which was much easier. Now, I certainly don't advocate that kind of database design, but it does show that Sequel is able to handle degenerate cases more easily. Now, the convention when using Sequel with a single database is to store that database object in a constant named DB. The database object is mainly used to create datasets and models, but it's also used to handle transactions, which I'll talk about now. The only way to use transactions in Sequel is through the database object's transaction method. It takes a block and ensures that all database interaction inside the block uses the same database connection, inside a database transaction. This is necessary if you're making changes to the database and want to ensure that either all changes are made, or no changes are made.
Now, in the example on the screen, we want to add an accounting entry to the database and update the account balance at the same time. Since we shouldn't be inserting an entry unless the account balance is updated, we use a database transaction to ensure that either all statements succeed, or none do. Now, Sequel loggers are used for seeing the SQL sent to the database. Because Sequel abstracts so much SQL code, you might not know what SQL it's going to generate unless you add a logger. You just access the array of the database's loggers and add any loggers you see fit. Every time a query is executed, the exact SQL used is logged at info level to all of the database's loggers. Here, we run the all method on the activities dataset, and the query is logged. Now, each database object has its own private connection pool, and Sequel's connection pool is designed for high concurrency: Sequel doesn't check out a connection from the pool until it's needed, and returns it to the pool as soon as it's no longer needed. Sequel's connection pool never requires the programmer to clean up connections manually, nor does it require a connection reaper to clean up connections automatically. Now, let's go over the four basic query types in Sequel. You can use the each method to iterate over the rows as the database provides them, and you can use that to process a million-record dataset without loading it all at once, depending on your adapter. If you only want the first record, you can use the first method. Inserting rows is done with the insert method and a hash argument, where the keys specify the columns and the values are the values for those columns. Updating rows is similar, using the update method and a hash similar to insert. Updating affects all rows in the dataset, so if the dataset is the whole table, you'll be updating all rows of the table. If you only want to update certain rows, you can filter the dataset first, and then update it.
Deleting rows is very similar, using the delete method. Just like updating, it affects all rows of the dataset, so if you only want to delete certain rows, you can filter before deleting. Now, I've gotten a little ahead of myself. First, I need to explain the interesting little creature on the screen. This is the Sequel dataset, and it's what gives Sequel a lot of its flexibility. A dataset represents an SQL query or, more generally, an abstract set of objects. And at any point, you can take that abstract set and turn it into a concrete set by retrieving the rows. Now, as shown here, datasets are usually created by calling the array access operator on the database object with a symbol. Now, my friend the dataset, he's got a big ego, and he doesn't think he can be improved. Let's say you want to add a filter to restrict the rows he represents to a subset. He's going to pull a fast one on you: he'll return a copy of himself with the filter applied, but he himself doesn't change. And if you ask that copy to change, by calling limit, it's going to return another copy with both the limit and the filter applied. Now, this is known as a functional-style API, where objects return modified copies of themselves. It's great because you can share datasets among multiple threads without worrying that those threads are modifying shared state. Now, datasets have many methods that modify the query to change the SQL used; there's one for pretty much any standard SQL clause. I'll briefly review the most common methods. Select: changes which columns are included in each returned row, and in general, you get the best performance by selecting only the columns you'll actually be using. Filter: reduces the rows to a specified subset, and it's probably the most used method. Order: changes the order in which rows are returned; if you want things in chronological, alphabetical, or numeric order, this is the method you use. Limit: caps the number of rows returned.
You can also specify an offset, and you can use limit and offset together to implement something like a paginated search feature, which Sequel's pagination extension does. Now, there's a method for almost everything you can do in SQL. SQL is a very powerful language for operating on sets of objects, and Sequel gives you a simple interface to tap that power. I mentioned earlier that Sequel is SQL in Ruby, but so far I haven't given many examples, so I think I should rectify that. Here's a fairly simple query. You should note that it's pure Ruby code that doesn't contain any SQL: it uses Ruby symbols for SQL columns and Ruby strings for SQL strings, and that is how most Sequel code looks; rarely do people need to write SQL fragments manually. Here you can see how we use select to include the id and name columns, and filter to restrict the records to ones where the name matches a given string and a boolean flag is true. Now, if you're used to SQL, it's pretty easy to translate this code into the SQL it produces. And if you don't know SQL, learning Sequel is probably easier than learning SQL. Here's a slightly more complicated example of a similar query. It shows that you can select all columns of a table using the symbol multiplication operator with no arguments, very similar to how you use the asterisk in SQL. It also shows how easy it is to join tables by specifying the table name and the join conditions; Sequel assumes that the id column is for the events table and the event_id column is for the activities table. It also shows that the bitwise operators on symbols operate as the logical operators in SQL: the ampersand is used as AND, the pipe as OR, and the tilde as NOT. Alright, this is the last query. This one uses a filter with a block. Inside the block, method calls without arguments, such as date, refer to SQL columns. The exclude method operates as an inverse filter: using a hash with a nil value generally sets up an IS NULL condition, but exclude changes it to IS NOT NULL.
Finally, you can reference existing columns when setting new values, which here computes a new price based on the existing price. This is powerful, as it allows you to update all filtered records in a single query, instead of retrieving all filtered records into the application and updating each of them individually. Whenever possible, you should attempt to update filtered records in a single query, unless you have a good reason not to. Now, after looking at these examples, you might be thinking: what dark magic is Sequel using to support its DSL? It's actually not that complicated. Sequel adds some methods to a few Ruby core classes, and those methods return objects that Sequel understands, such as SQL::NumericExpression. So here, the multiplication returns a NumericExpression holding the multiplication operator and its arguments, the price column and the numeric multiplier. NumericExpressions also have the mathematical operators defined, which return other NumericExpressions, which is what allows you to create complex expressions. NumericExpressions also have the inequality methods defined, which yield BooleanExpressions. And Sequel has a basic understanding of the differences between numeric types and boolean types in SQL. If it knows an object is a boolean in SQL, you can use the bitwise operators in place of the logical operators, which will produce other BooleanExpression instances. And if it knows an object is a boolean, it's not going to let you use the mathematical operators, since they don't operate on booleans in SQL. Now, while you're writing your complex SQL queries directly in Ruby, Sequel is building a form of simple abstract syntax tree, which it compiles, or literalizes, when it comes time to generate the SQL. This is a very simplified abstract syntax tree for the allowed dataset. Now, knowing about SQL at an object level instead of at a string level allows Sequel to have quite powerful introspection capabilities.
One instance where Sequel manifests this knowledge is when it comes time to invert existing conditions. Other database libraries that only understand SQL at a string level would probably just put a NOT in front of the conditions. Sequel, because it understands SQL at an object level, can transform the abstract syntax tree based on the rules of logic: it changes the AND to an OR, removes the NOT on the coupon condition, and changes the greater-than-or-equal to a less-than. The final result is ugly-looking SQL, but it's easy to understand: a row is not allowed if its price including tax is less than 25, or the coupon is used. Just to prove that the inversion operator works correctly, we can invert the dataset twice and get back the original conditions. Now, I mentioned earlier that Sequel code is very concise. One of the tricks Sequel uses to keep code concise is to allow a single symbol to reference both a table and a column, by separating them with a double underscore. It also allows a single symbol to reference both a column and an alias, by separating them with a triple underscore, and it allows you to combine the two approaches by using both a double underscore and a triple underscore in the same symbol. If you want to use a custom piece of SQL, you have to use a string. Now, using the array access operator on the database object, combined with short and intuitive method names, is one reason that Sequel code almost always ends up being more concise than code using other Ruby database libraries. So far, I've only talked about what's called Sequel core. Sequel is actually split into two parts: Sequel core and Sequel model. Sequel model is just an object-relational mapper built on top of Sequel core, and model classes are backed by core datasets. Now, the basics of Sequel model are similar to other Ruby ORMs, and one thing that differentiates Sequel model is its very powerful and flexible associations.
In keeping with the toolkit approach, Sequel only supports the three most common association types natively. However, it allows you to build your own custom associations, and it even supports eager loading of custom associations. Now, one example of an association type that isn't supported natively, but can easily be built using Sequel's toolkit, uses the dataset option, which allows specifying the dataset to use for the association. In this example, each firm has many clients, and each client has many invoices. The firm's invoices association returns all invoices whose client belongs to the current firm. Now, the eager loading support loads the invoices for every firm in the set up front, so you get the benefit of not issuing a separate query per firm when loading, say, twenty firms. And you can create custom associations in Active Record, but you need to write all the SQL by hand. That's painful enough, but what's worse is that you can't eagerly load custom associations in Active Record. Sequel gives you that ability using the eager_loader association option. This option takes a proc with three arguments: the key hash, the array of current objects, and the dependent associations to eagerly load. Now, the key hash is just an optimization. The key hash's keys are column symbols, such as :id, and its values are sub-hashes. The sub-hashes have keys which are the values of that column, such as 1 or 2, and values which are arrays of instances having the related value for that column. So, for the example above, firm 1 has ID 1 and firm 2 has ID 2. In this case, since the association depends on the firm's primary key, we only care about that specific sub-hash. For each firm in the group we're loading, we first set the cached invoices association to an empty array. Then we get all invoices for all clients of all firms in the dataset, using the keys of the ID map.
Now, for each of these invoices, we associate it back to the loaded firms, using the values of the ID map for the invoice's client's firm ID, adding it to the existing array of invoices. After we've finished processing all the invoices, each firm holds all of its related invoices in its association cache, so calling the invoices method on any returned firm object will not cause any additional database queries. You should note that there's nothing inherently firm-or-invoice-specific about this approach: creating a generic plugin that supports any has_many-through-has_many association is possible, and probably not even all that complex. There's already a plugin that implements polymorphic associations using basically the techniques I've explained. Now, in addition to advanced features that aren't shared by the other Ruby ORMs, Sequel model has the standard features, such as validations, association callbacks, and extensions, and it supports two separate eager loading strategies: one which uses joins, and another which loads each association in a separate query. Sequel is a toolkit, so it gives the programmer the choice of which strategy to use, rather than guessing which one is appropriate. And just like Sequel core, Sequel model is very flexible. It's built completely out of plugins: the base model functionality is the default plugin, and the associations implementation is also a plugin. Sequel ships with seven other optional plugins, which handle things like caching, single table inheritance, and serialization. Plugins can modify any aspect of Sequel model: they can add any class, instance, or dataset method, and call super to get the default behavior. Now, I don't have that much time left, but I thought I should mention some other advantages that Sequel brings to the table. Here you can see the number of database adapters that Sequel currently supports. By comparison, Active Record, I think, only supports 4 adapters, and Ruby DBI supports 5.
Now, the only database supported by the other guys that Sequel doesn't support is SQLite 2, and that's only because it's so old that it wasn't even in common use when Sequel was originally written. Now, one reason that Sequel supports so many adapters is that adapters are easy to write: only 5 methods are required, and the smallest adapter is only around 50 lines. Now, Sequel has what is possibly a unique feature called dataset graphing, and to explain its benefits, I first need to quickly explain the problem that it solves. That problem is join clobbering: since Sequel returns rows as hashes, if multiple tables have the same column names and you don't alias the columns manually, columns in the joined tables end up clobbering columns in the original table. In this example, both entities and events have id and name columns. However, when you join them and fetch results, the columns from the events table end up clobbering the columns from the entities table. Now, graphing fixes this situation by returning rows as hashes with table names as keys and sub-hashes as values, where the sub-hashes have column keys and the values for those columns. It does this by aliasing all the columns for you and splitting the simple hash results into sub-hashes when the rows are retrieved, before returning them to you. This way, you'd find the entities columns in the entities sub-hash and the events columns in the events sub-hash. Graphing makes it much easier to deal with joined tables at a row level. Sequel supports creating and altering tables, as well as most other types of schema modification, which you can use both inside of and outside of migrations. Sequel supports both bound variables and prepared statements, with native support on four adapters and emulated support on all the others. Sequel supports database stored procedures on the MySQL and JDBC adapters, and Sequel supports both master-slave database configurations and sharded configurations.
Now, finally, I mentioned earlier that there's a natural human tendency to resist change. Let's say you have an existing Active Record infrastructure that would be tedious and time-consuming to change. How would you like to be able to use Sequel's powerful filtering and easy DSL while keeping all of your current Active Record behavior? You can actually do this in a single line of Sequel code, which I've broken into multiple lines here. You just need to add a row proc to the dataset that changes the hash from symbol keys to string keys and calls the Active Record private methods columns and instantiate with that hash. That's all it takes, and the dataset then returns Active Record instances. That's the slideshow part of my talk; now it's time for some live coding, showing off some features of Sequel, hopefully without too much going wrong. As you can see, I'm doing this presentation on Windows, which makes me either brave or stupid. Let's assume I'm brave, and that I'm doing it to show that if you're one of the unfortunate souls forced to use Windows, you can still use Sequel to accomplish your goals. I'm going to open up a console on this machine, and since that takes a long time, hold on just a second. Does anyone have any questions?

Q: I have a question, yeah. You mentioned it a little on the last slide: what's your vision of the future of Sequel and its coexistence with Active Record?

A: Personally, I don't use Active Record; mostly I would expect people using Sequel to use Sequel model. That slide was just to give you an example of how powerful Sequel is in terms of being flexible: using a row proc with a Sequel dataset, you can get back any type of object you want. So I expect that model instances are the most common thing people will return, but if you want to return other types of models, or basically any object you want, the row proc takes the hash that Sequel produces, and in most cases you just return a different type of object from it.
So you can't just call new to build the Active Record instance, because Active Record's new isn't designed for that; that's why the example uses the private instantiate method. It's really flexible, in that you can do it whatever way suits you best. Does that answer your question?

Q: I was just curious if you had a vision for it, like do you want this to be part of the next Rails release, or down the road, or have you talked to those guys, or where do you see it going?

A: I mean, I assume that if you like Sequel and you use it, you'll probably want to use Sequel model as well. Sequel model, just like Sequel core, is, I think, better than Active Record in most instances. The one thing that Active Record has that Sequel doesn't, which I think really is a good idea, is more powerful schema support, in terms of taking an existing schema and being able to create migrations that will recreate your schema, sort of taking a dump of the schema and bringing it up anew. Sequel doesn't have that currently, so that's one thing that's missing. But other than that, I think Sequel is a better choice. And if you do have a lot of Active Record infrastructure, you can still use Sequel and return Active Record instances, so you keep your instance-level behavior. One of the main draws of Sequel is that the filtering is just so powerful, easy to use, and flexible that once people use it, they might not want to go back.

Q: It's crazy awesome, and it supports all these different database adapters and all these crazy things. It seems like you must be using this all the time. In what context did you develop this, and how do you hit all these compatibility problems?

A: I actually was not the original developer of Sequel. I took over after using it for a while. The original development was done by Sharon Rosner. I'm not sure why, but he gave it up. I'd been submitting patches that made it into Sequel 1.3, and shortly after that, I decided to take over maintenance.
You know, like a lot of open source maintainers, I was willing to get involved because I was using it and the original author didn't have enough time. So obviously, I didn't write most of Sequel's code base originally; that's how a lot of open source maintenance works. If you look at the talk I gave last month, it goes into a lot more detail. It's very, very code heavy; you almost have to pause it every few seconds to read the slides. But it sort of covers that. I can't really take credit for a lot of what's in Sequel. I've made a lot of improvements to it, but the basics were already there. One thing Sequel did have when I started was a test suite with fairly complete coverage, and that was extremely helpful in terms of changing things without breaking things. Another thing is that I added an integration test suite. Originally, Sequel's tests didn't use a database at all, which is weird for a database library: they basically tested the SQL that was being produced, and used a lot of mocks. That was really good, but it had some problems, because some things are just so tied to database interaction. So I added an integration test suite, where you basically point it at a live adapter, and it tests the basic functionality against a live database. That's been really helpful in terms of catching corner cases and that sort of thing. So mainly, those test suites are one of the main reasons we keep Sequel's code quality high. Another thing is that, unlike some other Ruby database libraries, Sequel's master branch is pretty much always more stable than the latest release, because almost every patch that goes into Sequel, at least before I push it to GitHub, gets run through the full test setup that I use for releases. So it's pretty safe to use in production. Any other questions? Go ahead.

Q: [A question about how database errors are handled across adapters.]

A: Most adapters catch database-specific errors and reraise them as a Sequel database error.
So you write your application code to rescue Sequel::DatabaseError, and if you're on Postgres and the driver raises a PGError, Sequel converts it for you. So you can use a single exception class to basically handle all sorts of database problems. Not all adapters do that, but the three most common ones, SQLite, MySQL, and Postgres, and I think also JDBC, all do.