So I'm here to talk about DataMapper. My tagline is "the persistence framework" -- I'll get into a little more detail about why that is, but that's what I'm calling it.

First of all, who am I? I work for Engine Yard -- you probably know who that is. I work on jQuery, I work on Merb, and I work on DataMapper, which is really cool because I have commit access to pretty much everything I use on a day-to-day basis. It works out pretty nicely.

So what is DataMapper? DataMapper is an ORM, like ActiveRecord, and it drops into Rails like ActiveRecord does, or into Merb. Let's take a look at the architecture. You have your Merb or Rails application. Then you have DataMapper, which talks to Merb or Rails. Then you have an adapter -- DataObjects is one of the adapters, and it's just a unified database adapter -- or you can have YAML, or, I don't know, something else that someone's going to write later. Then you have your database, or your set of YAML files, or that something else someone may create at some later point, and that's the data store. The first thing you should notice right off the bat is that, unlike something like Sequel, DataMapper is designed out of the box to work on arbitrary data stores. The API is designed around accessing data rather than around being a good API for SQL, and that's the main difference between it and some of the alternatives.

There are some caveats about this presentation. The first is that DataMapper is a work in progress: 0.9 should be released early next week, so some of the things I'm talking about here are going to be released in 0.9 and are not out yet. I have code here that will not run today but will run tomorrow. That's just the way DataMapper is right now.

I'm also assuming ActiveRecord knowledge, so I'm not doing "here's how you do it in ActiveRecord, and here's the slightly different way you do it in DataMapper." I'm not going to talk about has_many or has_one or belongs_to or has_many :through, and I'm not going to talk about migrations, because all these things exist in ActiveRecord and have either identical or slightly different implementations in DataMapper, and I don't have time to go through all the little nuances. Instead I'm going to talk about what's cool about DataMapper above and beyond what exists in ActiveRecord.

Also, there's code in this presentation -- a lot of code. I tried to keep the code to things you would actually do in a real application, so it's "here's how you would use DataMapper in your application." I tried to keep the examples as simple as possible, so there's no entire model anywhere, just little stub models plus whatever I'm talking about. And lastly, none of the code is from DataMapper itself. So there's a lot of code, but I hope that won't be a problem.

So what does DataMapper look like? What does it look like to make a query? There's no find: instead of find(:all) and find(:first), there are just all and first accessors. And you can see there's some cool stuff here, like the symbols take .gt, .not, .lte -- a bunch of things like that -- to make it easier to do complex queries without having to resort to a conditions string. So what does this create in SQL? You can see that the :name.not => array got converted into a NOT IN ('Bob', 'Jones'), and the age is bigger than 30.
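[The slide code isn't in the transcript; here's a minimal sketch of the finder style being described, with made-up model and conditions, assuming the dm-core 0.9 API.]

    zoos = Zoo.all(:name.not => ['Bob', 'Jones'], :age.gt => 30)
    zoo  = Zoo.first

    # roughly the SQL described:
    # SELECT * FROM zoos WHERE name NOT IN ('Bob', 'Jones') AND age > 30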
Basically, this gets you a more Ruby-esque way of writing cool SQL without having to actually write SQL as much.

What else? Here's something that is not true in ActiveRecord. In ActiveRecord, every time you get an object from the database, it's a new Ruby object. Even though it has the same ID -- and I guess you could override == so objects with the same ID count as equal -- they aren't the same Ruby object. In DataMapper there's something called an identity map, which basically means that when you do Zoo.first, it goes and gets it -- let's say the ID is 1 -- and it gets stored in memory, so we know zoo 1 is this object. The next time you go to get it, it pulls it out of the identity map. That's really cool for cases like this: you've pulled all the zoos and all the animals separately, and now you want an animal's zoo. The animal belongs_to the zoo, so you have the foreign key. In ActiveRecord, even though you might already have that zoo in memory, you have to go ask the database for it again. In DataMapper you just say .zoo, and it says "oh, zoo number two -- I have that in memory right here." So you don't have to do as many queries, and if you've run into this problem in Rails, you'll know why that's cool.

How about this one? This is a big no-no in ActiveRecord, because it's the classic N+1 query problem, right? How many queries is it in DataMapper? It's two. The reason is that when you say Zoo.all, DataMapper says "okay, here's a set of zoos." It looks like a big array, but it's not an array; it's a special set object. When you call animals on any one of them, DataMapper says "oh, it looks like you're starting to call associations from inside the set -- you probably want all of them." And in fact, 99% of the time that's true. So we have this thing called strategic eager loading, which pretty much eliminates the N+1 problem. Even for something like a nested tree -- where in ActiveRecord that might be an absurd number of queries, because it has to keep asking for each child -- in DataMapper the number of queries you need to load a complete nested tree equals the number of levels in the tree. So five, if there are five levels, not some absurd number.

After eager loading, there's also a lazy loading technique we use in DataMapper. Let's take a look at some code; I'll talk through each little piece so you know what's going on. First of all, we have include DataMapper::Resource. You may notice we aren't doing "Post inherits from DataMapper::Base," the way you would in ActiveRecord. What this means is that you can take any object you have and make it persistable. You take an object you have out in the world, you include DataMapper::Resource, and all you have to do is specify which instance variables you want persisted, because in DataMapper instance variables are the same thing as properties. You can then save an arbitrary object. It may inherit from some random thing -- you don't have to worry that it doesn't inherit from DataMapper::Base -- and you can now persist whatever you want.
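[The slide code isn't in the transcript; a sketch assuming dm-core 0.9. The parent class is made up, just to show that inheritance stays free.]

    class Post < SomeExistingLibraryClass   # no DataMapper::Base in sight
      include DataMapper::Resource

      property :id,    Fixnum, :serial => true   # persist only what you declare
      property :title, String
    end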
property :id, Fixnum, :serial => true -- this is the basic setup of a DataMapper property. You say what its name is, you say what class it is (we'll get into that a bit more later), and you say, in this case, that it's a serial column. Then title is a String.

And here's what I was getting at with lazy loading: you can say :lazy => true. What that means is that when you load the object the first time, you get all the things that aren't lazy -- in this case the id and the title -- and then, if you actually ask for the body, it goes and gets it. This is really useful when you have an index page that pulls a lot of records, and then some other page where you want the whole record. On the index page you don't necessarily want to load all the columns; they may be big text columns, or, God forbid, you have images in the database. You can lazy load those and only get them when you need them. And of course, because the records are in a set, once you lazy load one thing it goes and gets that column for everything in the set in one query. You don't have to do a query every time you touch a lazy loaded attribute on an object that's in a set.

Okay, let's look at what DataMapper maps this to in SQL. In this case we had a lazy loaded column, so when we did posts = Post.all, it ran SELECT id, title FROM posts. Let's assume that returns two objects with IDs 1 and 2. Then when you do posts.first.body, that runs SELECT body FROM posts WHERE id IN (1, 2) -- in other words, "we got two objects back in our set; now go get the lazy loaded column, because someone asked for it." But if I then ask for the other post's body, the only SQL that ever ran is that exact same query, because once I have the identity map I can just pull the other values out when I need them.

There's also a really cool thing called lazy load grouping. Instead of saying :lazy => true on title, you say :lazy => [:details], and you give body :lazy => [:details] as well. What that means is that when any lazy loaded column in a group -- in this case :details -- gets asked for, DataMapper fetches all the lazy loaded columns in that group. So here, for some reason I have a table where I only want to fetch the id for the index -- maybe it's for an API or something -- and as soon as I do .title, it gets me the title and the body, because they're in the same lazy load group.
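[Reconstructing the grouped version of the slide; the exact column split is assumed.]

    class Post
      include DataMapper::Resource

      property :id,    Fixnum, :serial => true
      property :title, String, :lazy => [:details]
      property :body,  Text,   :lazy => [:details]
    end

    posts = Post.all    # SELECT id FROM posts -- nothing in the :details group yet
    posts.first.title   # fetches title AND body for the whole set in one query,
                        # because they share the :details lazy-load group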
So that's lazy loading -- some nice enhancements that help you build more efficient things than you easily could when you're stuck with either the defaults or writing raw SQL yourself. Which brings me to the fact that you can go off the golden path in DataMapper. You're not trapped between "it's the default" and the craziness of writing everything in raw SQL yourself.

This is a stripped-down DataMapper model, and here's what I want to talk about: you can specify, for any class, a repository called :legacy -- which you define elsewhere; we'll talk about how a little later -- and say that in the legacy database, the title property has a field of "weird_thing". What this allows you to do is have a store for which all the model logic is the same, but the mapping is different. You might have a legacy database, or a pile of YAML files, or whatever store you want, that has different conventions for the names. So you say: when I'm using the default repository, it's just title -- the regular DataMapper defaults, which are the same as the ActiveRecord defaults -- but when I'm talking to the legacy repository, it has this special field name. And you don't have to care about it afterwards. You just say .title, and it knows, "oh, in this repository that has a weird field name." You can obviously do this for as many properties as you want. The nice thing is that if you have a bunch of fields and only some of them are different, you specify the regular ones once at the top level, and what goes inside the repository(:legacy) block is just overrides. If you have ten properties and only two of them have weird names in the legacy database, you only specify those two in the legacy block.

So how do you actually get posts in the legacy repository? It looks like this: Post.all with the repository specified in the conditions hash, or -- and this is equivalent -- repository(:legacy) { Post.all }. The top one is really syntactic sugar for the bottom one. You'd use the bottom one when, for example, you have a bunch of things to do in the legacy database and you don't want to keep specifying the repository every time: you're saying "while I'm in here, every model I touch uses the legacy repository."

Similarly, plain Post.all by default means the same thing as those two forms with :default. Post.all looks just like ActiveRecord on the surface, but behind the scenes it's identical to Post.all(:repository => :default) and to repository(:default) { Post.all }. There's a default repository that gets used when you don't specify one. For most new projects you wouldn't even care about this; you'd just omit the repository all the time and be done. But if you had a legacy database you wanted to import data from, or occasionally use, you'd be able to say: in this case I want the default, in this case I want the legacy one.
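[Reconstructing the slide; "weird_thing" is the made-up legacy column name from the talk.]

    class Post
      include DataMapper::Resource

      property :id,    Fixnum, :serial => true
      property :title, String                 # default conventions everywhere else

      repository(:legacy) do
        property :title, String, :field => 'weird_thing'  # override for this store only
      end
    end

    Post.all(:repository => :legacy)    # sugar for the block form below
    repository(:legacy) { Post.all }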
Okay, now we have naming conventions. In ActiveRecord, you have to either decide "all my tables are named the way ActiveRecord wants them, pluralized and all," or do set_table_name on every model. In DataMapper you can specify what the naming convention for your tables is. There are some built-in ones, like Underscored or UnderscoredAndPluralized, but you can also do this really cool other thing, which is to say the naming convention is a lambda. Say your table names always start with "tbl" -- it's some weird legacy thing, and that's just how it is. You can specify that the convention is "tbl" plus the camel-cased class name, and the first time a model has to figure out what table it maps to, it runs that lambda against the class name, and you're done.

You can also specify, for a given model, that you want the default repository to be something different, just by overriding the default_repository_name class method. For instance, you might have a bunch of classes that basically only exist for legacy purposes -- it's not that you're converting legacy data into something new; you just have these legacy databases you want access to. In that case you want to say "legacy is what I mean when I say Post.all" -- I don't mean go to the default.

You also get a nice API for importing data, which is just Post.copy(:legacy, :default). And you get an even cooler API if you only want to import some of the data: you can use the same selectors you saw earlier for query conditions. So Post.copy(:legacy, :default) with a condition on created_at, and now I'm only copying over the posts from the legacy database that are a year or more old.

Yes, sir? [Question about how the copy executes.] It depends on the repository. Today it generates basically one INSERT statement per record, but where the repository supports it, we're definitely going to have multi-row insert support, so it can generate one giant INSERT statement for everything.

And just a reminder: that :field mapping means you can copy from a legacy database to the default one without specifying what the mapping looks like at copy time, because you've already specified it on the model. So that's importing data.

Yes? What he asked was: is it possible for different repositories to have different back ends? Yes. Pretty much the mission here is to make DataMapper not care what the back end is, so you can have an SQLite database as your legacy database, or a Postgres database as your legacy database and a MySQL database as your default, or whatever. As long as you stick with DataMapper queries, everything goes through the thin API of DataMapper and gets converted back on the way out. It takes the data, puts it into a normalized form, and puts it back out the other end.
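[A sketch pulling those legacy pieces together. Post.copy and the one-year cutoff come from the talk; the exact condition spelling is assumed.]

    class Post
      include DataMapper::Resource

      property :id,         Fixnum, :serial => true
      property :title,      String
      property :created_at, DateTime

      def self.default_repository_name
        :legacy   # plain Post.all now reads from the legacy store
      end
    end

    Post.copy(:legacy, :default)                      # import everything
    Post.copy(:legacy, :default,
              :created_at.lt => Date.today - 365)     # only posts a year or more old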
We also have custom types. In ActiveRecord you're used to having basically whatever types the database supports, plus maybe a serialize thing for objects. So, custom types. First, what primitives do we have? We have TrueClass, String, Text, Float, Fixnum, BigDecimal, DateTime, Date, Object, and Class, and the repository stores are required to implement how each of these is handled. True and false, String, Text, Float -- for all of these, the store just has to specify how they should be serialized, and then the store handles the serialization and deserialization. Those are all supported by default. Something like Object would probably be implemented with either Marshal load and dump or YAML load and dump. These all come with DataMapper, and any other adapter that gets created simply has to implement the same load and dump routines.

But custom types go beyond that. What if I have a post and I want to specify some special things, like: my author is actually a full name, and my details are in CSV? What I want is for the object, while I'm working with it in the ORM, to look like an array -- the details in array form, and the name as first and last -- but I don't want to deal with any of the messy serialization details. I just want the store to know that "Last, First" gets deserialized into an array of first and last, right?

So, warning: there's a bunch of code ahead, and it's just for purposes of illustration. It's not necessary that you understand every piece of every routine. Here's how you would implement FullName. It's trivial, obviously. The top piece says the primitive is String -- in the database, this is a string, and it has a size of 100 -- and you could base it on any of the other primitives from the previous slide. Then you specify a load routine, which says how to handle the value coming in from the database, and a dump routine, which says "I'm going to have an array; here's how to serialize it into the database." CSV works the same way: you specify the load and dump routines and which primitive it's based on.

Then what does a create look like? It looks like this: we have a create, we say the title, which is a string, but then we get to say the author, which is an array. That might come in really handy if you have a form with separate first and last name fields: you can use the properties of forms to have it come in as an array, and then you don't have to do any serialization logic on your own; it just gets handled for you. And metadata is just something that gets converted into CSV. What does it look like in the database? It looks like whatever the dump routine said it should look like.
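[A sketch of the FullName type and the create it enables, assuming the 0.9 DataMapper::Type API as described; the names and sample data are made up.]

    class FullName < DataMapper::Type
      primitive String
      size 100

      # "Katz, Yehuda" in the store <-> ["Yehuda", "Katz"] in Ruby
      def self.load(value, property)
        value && value.split(/,\s*/).reverse
      end

      def self.dump(value, property)
        value && value.reverse.join(', ')
      end
    end

    class Post
      include DataMapper::Resource

      property :id,     Fixnum, :serial => true
      property :title,  String
      property :author, FullName
    end

    Post.create(:title => 'Hello', :author => ['Yehuda', 'Katz'])
    # stored in the posts table as the string "Katz, Yehuda"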
Also, it's lazy parsed. What lazy parsing means is that we take the data from the database -- assuming you didn't lazy load it -- as a string, and until you actually ask for the details or whatever, it stays as text in memory. As soon as you ask to do anything with it, it becomes the right thing. That means that if you have expensive routines that take some RAM or CPU, you don't have to worry that you pay for them every time you use these special classes. If I just want to build an index page, I don't want to unparse the CSV and I don't want to care about the name at this point -- and I don't have to, until I actually ask for it, and then it parses.

Okay. We also have custom stores. Stores are URIs; they look something like mysql://user@localhost. You set them up like this: DataMapper.setup(:default, 'mysql://user@localhost/...'), and this is how DataMapper knows what you mean when you say repository(:legacy) or repository(:default) or repository whatever. You set it up once -- "here's what the default database means" -- and from then on, every time DataMapper uses the default repository, it knows to access this store. DataMapper has connection pools, which means it's thread safe, and you don't have to worry about connecting to your MySQL database; DataMapper handles the connection pool for you. This isn't finalized, but the database.yml will probably look something like this: you specify something similar to what you have right now, and it gets converted into those URIs -- here's what you might say for a legacy SQLite store.

But custom stores go beyond that. You might have something like yaml:///fixtures, and maybe at some point in the future, ssh+yaml://fixtures. Obviously that's probably not going to make it into 1.0, but the simplicity of the store API means it's relatively trivial to take any sort of store and make it work. As long as you have a way of getting at the data and can implement those types we talked about earlier, you can pretty much do anything. Sam, who wrote DataMapper, was joking yesterday about an irc:// data store. The options are pretty limitless, and there isn't a ton of code you have to implement. So that's cool.

For tests, you'd probably have something like this. Now that you have a YAML store -- assuming you don't have any raw SQL in your models -- there's no reason to load your fixtures into any database at all. You can just use DataMapper to access the YAML store directly, and YAML stores will support joins, associations, all the features that are supported by regular databases. They'll probably be too slow to use in any legitimate production sense, but for tests they'll probably be more than fast enough, and this saves you some of the horrors of loading and dumping fixtures in and out of the database. And yeah: adapter: yaml, which is cool.
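[A sketch of the setup calls, with made-up connection details; the yaml URI form is as described in the talk.]

    DataMapper.setup(:default,  'mysql://user@localhost/myapp_development')
    DataMapper.setup(:legacy,   'sqlite3:///path/to/legacy.db')
    DataMapper.setup(:fixtures, 'yaml:///fixtures')

    repository(:fixtures) { Post.all }   # reads YAML documents, no database involved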
So how do you make fixtures? The obvious answer is something like Post.copy(:default, :fixtures). You won't even have to specify any repository information in the models, because YAML will support all the defaults -- the field names will just be whatever they are, and the table names will just be filename.yaml. A table will basically be a file with a bunch of YAML documents inside it, which are the records. You'll simply do Post.copy(:default, :fixtures), and you have your fixtures; you're done. And there will probably be a rake db:copy_fixtures that does that for you.

I have more stuff, but it occurs to me that I'm well ahead of schedule, so I'll probably open everything up to questions shortly.

So: validations. I said I wasn't going to talk about things that are in ActiveRecord, but this works sufficiently differently that it's worth covering. We have a class Product with property :title, String. What's cool is that this automatically generates a validates_length_of. The database column can hold some specific number of characters -- that's what String means -- and in ActiveRecord, if you save something longer, it just works and truncates. In DataMapper, the property automatically produces a validation that says "don't be bigger than whatever the database accepts." You can say :auto_validation => false if you don't want that behavior, but typically you do want it.

We also have property :price, and this thing over here, :nullable => false, means validates_presence_of. Then there's a cool thing called validation contexts, which lets you say "I only care about this validation when you call valid_for_purchase?". Say I have a price, and I want to be able to import all this data from a legacy database, and I don't want the import to fail because of a validates_presence_of on price -- but I don't want people to be able to buy the thing until a price is in there. So there's valid?, which runs the default validations, but there's also valid_for_purchase?, which checks all the validations that list the :purchase context. Which is pretty cool.

You can also say :format => with a regex, which produces a validates_format_of, or give it a proc that takes a string, and that also produces a validates_format_of. We can say a number may have a length of 2 to 10, which produces the validates_length_of. And, just reiterating, you can do special things like "I only care about the number on import," for some reason, so I can have a valid_for_import? situation.

We just showed how to do all this inline, but the same old validates_length_of and validates_presence_of stuff still works, with basically the same cool additions, like the proc form of format. And you can still supply a context with the classic style: in this case you say :when => :import, which produces valid_for_import?.
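[Reconstructing those validation slides as one sketch; the option names are as described in the talk, so treat the details as approximate.]

    class Product
      include DataMapper::Resource

      property :id,     Fixnum, :serial => true
      property :title,  String                      # auto-generates a length validation
      property :price,  String, :nullable => false  # auto-generates a presence validation
      property :number, String

      # the classic style still works, and can take a context
      validates_length_of :number, :within => 2..10, :when => :import
    end

    product = Product.new(:title => 'Widget', :number => '12345')
    product.valid?             # runs the default validations
    product.valid_for_import?  # runs only the :import-context validations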
So -- I'm about half an hour in, which means I'll take questions. Any questions?

[Question about the identity map.] Right -- he said: if you do a find, the object gets put into a memoization table called an identity map; what would cause it to be reset? So, DataMapper tracks saves through the identity map as well. If I modify a property, a dirty flag gets set -- there's a dirty? method -- and when I actually save, all it does is persist the object back; it doesn't do anything to the object except mark it no longer dirty. So there's really no reason for it to be reset. If you're doing something where you're actually using the connection pool and there are multiple users, then what you should do is exit out of your session. DataMapper lets you say "I want this stuff to happen inside an identity map session," so if you're actually expecting people to be changing data outside your process, you should put things inside DataMapper sessions. The identity map gets closed out after you leave the session and loaded fresh the next time. Basically, the identity map should be identical to whatever's in the database except for that case, and there you should be careful.

Yes? Right -- the saving grace of ActiveRecord in general is just that these things happen very fast. When you make a save from a form, ActiveRecord opens up, does something extremely fast, saves, and you're done. ActiveRecord has a problem where, if two people save at roughly the same time, they just clobber each other; there's no locking or anything like that. DataMapper will have optimistic locking, which is good. But the reason this doesn't come up very often is that it all happens so fast, and similarly, identity maps in Rails or Merb apps are very short-lived. What will happen is that both the Rails and the Merb plugins will put the identity map session block around your request: as soon as the request starts, it opens an identity map; everything in the request gets the benefit of it; the request returns; and then the identity map is killed. So it's really useful for a block of code inside one request. It won't persist for the entire lifetime of your app -- because you're only one process, that would not be the default.
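[The session API wasn't shown on a slide; this sketch assumes the repository block form scopes the identity map, per the description.]

    repository(:default) do
      zoo = Zoo.get(1)
      zoo.equal?(Zoo.get(1))   # => true: the second get hits the identity map
    end
    # leaving the block discards the identity map, so the next request starts fresh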
Anybody else?

[Question about connections.] Suffice it to say that there is a connection pool that doesn't require a thread per connection; you have a bunch of in-process connections that get reused.

Yes? So, DataMapper will run on JRuby, except for the DataObjects framework. A couple of months ago I was looking at the state of Ruby database drivers and noticed that they'd all been written more than five years ago, by Japanese developers, with no consistency whatsoever -- big chunks of ActiveRecord are dedicated purely to making them present what should already be a similar interface. So I used the SWIG interface to build new database drivers for MySQL, SQLite, and Postgres, which became the DataMapper core drivers, and those were recently rewritten to not use SWIG anymore, so they're pure C. Those drivers won't run on JRuby. As for what would need to happen: the DataObjects driver project was created purely and simply to make a really simple API. The API is basically: open a connection, close a connection, make a new query, open a reader, go forward through it with a forward-only cursor until you're done, and be able to get the fields at each point in the record set. I think that's the entire API. So it wouldn't be hard to write a JDBC adapter that complied with the DataObjects interface, and then it would work. It may even have existed at one point.

Yes? [Question about thread safety, partly inaudible.] Yeah -- there's basically threaded mode and the quote-unquote unthreaded mode, and the one that's thread safe, which people should actually be using, makes a new connection per thread, which is way, way worse. DataMapper has a number of threads, but there's a limited number of connections and it's able to reuse them. I guess if you didn't want thread safety you could optimize for something else, but one of the missions of DataMapper is to be thread safe and to run inside Merb without the mutex. That's a trade-off we made.

Other questions? [Question about mocking out the database in tests.] Yeah -- let me go back to that slide from a long, long time ago. You can specify that your test database should use the YAML adapter, or, if someone wrote one, an in-memory adapter would work too. The storage adapters in DataMapper have a very small API: they basically just need to implement all and first and some joining things. So you could pretty easily write a simple in-memory adapter designed to do what people effectively do when they mock out the database, and then just say that's the adapter to use. And as long as that adapter isn't burdened with table names or column names needing to be something special -- which it shouldn't be -- it would just work.

[Question about raw SQL.] DataMapper supports find_by_sql and the same conditions hash that ActiveRecord does, but the goal is to make that much more rare. In addition to the API I showed earlier, with .gt and .lt and that stuff, there's also going to be an API in 1.0 that supports walking associations. Let me see if I can pull up a TextMate window... is that too small? All right, so it's going to look something like Zoo.all, then .animals -- or, the cleaner example, Animal.all then .zoo, or animal.zoo -- and you'll be able to specify conditions in there; if it were a has_one association to a number, you'd be able to use the same greater-than stuff. So -- I hesitate to say Ambition-style queries, because it's not as ambitious as Ambition -- but you'll be able to do a lot of join-style queries with an API that's consistent with the rest of DataMapper, and because of that, have adapters like in-memory adapters or YAML adapters that still work. We consider it a bug if something requires SQL and is common enough that a lot of people are doing it. We're trying to avoid you having to write SQL, primarily because we want to support other storage strategies, right?
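[The TextMate example wasn't captured; this sketches roughly what was described, and the speaker was explicit that the 1.0 syntax wasn't final.]

    Zoo.all.animals    # all the animals belonging to that set of zoos
    Animal.all.zoo     # the zoos for a set of animals
    # the same operators would apply through associations, e.g. :age.gt => 10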
No -- there's one auto-migration strategy, which is destructive at the moment. Basically, what auto-migration does is kill your database, go through all your models and their properties, and make new tables. If you dump your development data into a YAML store, you can trivially reload it afterwards, and that's the recommended strategy for development: instead of keeping a database that's crufty and old, new developers can just run the auto-migration instead of running a hundred historical migrations that have no bearing on the present. You keep your data separate in YAML files and load it in, and because you can do Zoo.copy(:default, :yaml), it should be trivial. So that's the auto-migration strategy. There's also a goal for us to have non-destructive auto-migrations. Hobo and Django both have this: it keeps some track of what you've done in the past, so when you auto-migrate it says, "it looks like you deleted a column and added a column; that probably means you want to rename the column -- is that what you want to do?" and if you say yes, it makes the migration for you. That's probably not going to be in 1.0 proper, but it will probably be in a 1.x release.

The other migration strategy is basically Rails migrations, with a couple of exceptions. One is that they're not required to be in numbered files: they still have numbers, but the number is specified together with the migration, and you can have multiple migrations with the same number as long as they're not dependent on each other. So if me and my friend, both working on the same project, both check in a migration number seven, and they aren't dependent on each other -- which they probably aren't, since we're working separately -- DataMapper will just say, "you haven't run migration number seven named foo yet: do it. You haven't run migration number seven named bar yet: do it." This also lets you add migrations retroactively. Say you're already up to migration number twenty, but you want to go back and add a new migration number three for some reason: you can, and DataMapper will say, "you haven't run migration number three named bar yet, so run it now." Basically these are simplifications of the problems people have with Rails migrations, mainly that the numbered files cause pain for teams. That solves that.

Otherwise, we don't use the t.column style that Rails uses -- you know, create_table do |t|, then t.add_column and so on. We use the style that was in sexy migrations originally: create_table do, then add_column. We also support a syntax for modifying tables, modify_table, where you don't have to repeat the table name for every column: if you want to modify ten columns in a table, you just say modify_table with the table name, do, and then the column modifications go inside without naming the table each time. So the classic migrations are just Rails migrations minus a lot of the hurt people have. Cool.
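[A sketch of a migration in this style; the DSL details are inferred from the description, so treat the names as approximate.]

    migration 7, :create_people do      # the number lives with the migration,
      up do                             # not in the file name
        create_table :people do
          column :id,   Fixnum, :serial => true
          column :name, String
        end
      end
      down do
        drop_table :people
      end
    end

    migration 7, :add_bio do            # same number is fine if they're independent
      up do
        modify_table :people do
          add_column :bio, Text
        end
      end
    end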
So pretty much everything I had in this presentation, with the exception of the YAML-over-SSH thing, will be in 1.0. 0.9 should be released this week-ish. The main things missing in 0.9 that will be in 1.0 are some association things -- that syntax I showed earlier in TextMate -- and has_many :through and belongs_to :through, which are not yet in 0.9 but will be in 1.0. Pretty much everything I showed on the slides will be in 0.9. Actually, in doing these slides I spoke to Sam last night -- he's writing a lot of the code these days -- and asked, "can I put this on a slide? Can you commit that this will be in 0.9?" And he said yes. So pretty much everything in these slides will be in 0.9, rather than 1.0.

There's a git repository you can clone, which is the datamapper repo, and you can install it with rake install. At the moment it's not considered stable enough to supplant the very stable DataMapper 0.3, so if you do gem install datamapper, it'll still install 0.3. If you want 0.9 for now, you have to rake install it from git, but there will probably be a gem install datamapper --source that works soon, and eventually it'll get pushed out to the regular gem source.

Other questions? Yes, sir? Right now -- 0.9 will have MySQL, SQLite, Postgres, and YAML. I think we'll support some combination of what people ask for between 0.9 and 1.0 and what's easy to implement in that window. But the API, like I said, is very simple, so the anticipation is that weird back ends we don't want to support ourselves will still exist and be supported by the people who want them. The DataObjects project will probably separately add new database back ends, like Oracle. DataObjects is a C-level API for databases, and that's a whole different animal from the storage API. The storage API is really simple: it just needs you to implement how to find things by conditions, which is trivial. But once SQL is in the mix, you have to figure out how to open readers in general, send SQL, and get record sets back. That's different.

Yes? [Question about placeholders, partly inaudible.] I would say there are some where we support it and some where we fake it at the moment. The 1.0 release that comes with the DataObjects library will support placeholders. Let me reiterate the question.
He asked whether there's support for placeholders -- basically, passing in a bunch of parameters with the query. There are some databases for which this is extremely efficient and some for which it's a wash. Basically, right now the API lets you pass in parameters when you execute a query; if we support placeholders for that driver, it passes them through the placeholder API, and if we don't, it just generates the query with interpolation. The goal, to the extent such things are supported, is to support them natively, and it will.

Yes? [Question about cursors.] So, to the extent that a C API actually supports a forward-only cursor that's more efficient than the alternative, we support it in the driver. Postgres has a weird C API where it supports a cursor but then makes you load the whole thing into memory anyway, which is weird, so we don't use the cursor API there, because it's silly. But where there's a cursor -- the Oracle driver will probably have this stuff in it -- and to the extent there's a simple way of doing cursors, the DataObjects API is cursor-oriented, so it's trivial to pass through if it's supported. Anybody else?

Yes, sir? [Question about the auto-migration inference idea.] Oh -- a lot of people have done it; Hobo does it and Django does it. I don't know, it seems to be an obvious idea once you're really thinking about the problem space, and a lot of people have come up with it. But the plans for the one we're doing are pretty wicked: it does more things than some of the other ones, and it does better inference about what you mean.

And obviously -- I haven't really talked about this, but implicit in this whole conversation is the fact that DataMapper makes you say what the fields are, and I guess to ActiveRecord people that may seem weird. But you get a lot of leverage out of it: the fact that you can specify validations inline, that you can do auto-migrations, that things which have to be separate lines in Rails can be specified inline. It gives you one definitive place for "here's the schema, here's what it means in the database, and here's what it means to my model," which ends up being really powerful, even if it makes you queasy at first because you're used to ActiveRecord not making you do it.

Anybody else? I guess that's it -- there's nobody else.