All right, we'll go ahead and get started. The name of my talk is MongoDB Rules, which has kind of a nice double ring to it. MongoDB is an open source, document-oriented database written in C++. I'll get into the details quickly. How many people have heard of MongoDB? Can you just raise your hands? Okay, cool. How many people have used MongoDB? All right, so I'm hoping to speak to the middle in some sense during this talk. It's not gonna be entirely introductory. I want to give a set of rules for using MongoDB well, and hopefully in the process, if you haven't heard of it, you'll get a sense for what it's like. First, I just want to take care of some really important business before I get into the meat of the talk. So there's a happy hippo. Okay, the actual important business: my name's Kyle Banker. I work for a company called 10gen, and all 10gen is, is the company that sponsors MongoDB's development. I'm @hwaet on Twitter, and I'm kyle@10gen.com. The project is at mongodb.org. There are very good docs there, and you can download the binaries, so you don't have to compile anything and can get it running pretty quickly. And if you ever happen to have a question on MongoDB, the Google group is a great place; you can usually get an answer in about 20 seconds. So, check that out. MongoDB rules. I want to ask one more question, and that's: how many people here are using some type of NoSQL database? Okay, so maybe 30 to 40%. And of those of you who are using a NoSQL database, raise your hand if you are using it for some kind of operational reason. In other words, you need extra performance, or you need scalability. So maybe half. I feel like when the discussion turns to NoSQL databases, most of us are concerned about performance and scalability. And that's really important, and I think MongoDB addresses that.
But there's another side to it: you might not have a scaling problem, and there may still be a reason for you to use a NoSQL database like MongoDB. And that's that it makes your data really easy to understand. That was my initial reason for being attracted to it, and I think you'll see why, if you haven't tried it already. So rule number one is to get to know the real MongoDB. What does that mean? There are some pretty good object mappers out there; they're kind of like ORMs. We have MongoMapper, which is pretty well known, and Mongoid, and a whole bunch of others in the Ruby community. Those give you a very ActiveRecord-like interface to Mongo, but they hide some of the ease of just working with the database directly. When you're working with a relational database, your object-relational mapper is absolutely necessary, right? Unless you want to write a lot of SQL that's really complicated. In the case of MongoDB, it's not that difficult, because we deal with something called a document, and a document is essentially JSON. So when I say get to know the real MongoDB, I mean start playing with it. We have a shell, and our shell isn't SQL, it's JavaScript. I was actually able to build a little Try MongoDB in the spirit of Try Ruby: you can go to try.mongodb.org, work through a little tutorial, and play with a real MongoDB instance in the background through a JavaScript shell. It gives you a sense for how the thing works. And when you're done with that, you can download the actual shell and do a lot of the same things. But the other part of the real MongoDB is using the Ruby driver. It's actually a very natural way of working with your data, and you can see an example right here. We're just requiring RubyGems and requiring the Mongo driver. On line five, we build a connection.
Line six, we choose a database, in the very same way that we'd choose a database in a relational system. And on line seven, we choose a collection; in this case, we're working with a page views collection because we're tracking page views. So here's a sample document, our view document on line 97 there. All it is is a Ruby hash, right? It points to a couple of different types: we've got a date in there, we have some strings, we have an integer. And we just pass that to our collection object and call save. Now, what happens when you call save on a document like that? Well, first of all, the driver is responsible for creating the primary key. Everyone sees that _id field on line 36. It's an object id, and I'll explain what that consists of. You can use anything as an _id, but MongoDB's object id is a 12-byte value that works like this: the first four bytes are a timestamp, the next three bytes are a machine id, the next two bytes are a process id, and the last three bytes are a counter. One of the nice things about using MongoDB's object ids is that timestamp. If you do a query and sort by object id, you basically get things sorted by creation time, and you can also extract the creation time from the object id itself. So there's some value in actually using it. The second thing the driver does is serialize the Ruby hash into a binary dictionary-like format called BSON, and you can read all about it at bsonspec.org. It's not difficult to understand, and all the drivers, regardless of which language you use, serialize to BSON. So when we call save, we create the _id if you haven't done so yourself, we serialize to BSON, and then we send it along the socket. And because we already have the id, we don't have to wait for any kind of response from the database.
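Since the timestamp lives in the first four bytes of the object id, pulling the creation time back out is easy. A minimal sketch in plain Ruby, assuming the usual 24-character hex form of an object id (`objectid_created_at` is a hypothetical helper, not the driver's API):

```ruby
# The first 4 bytes of a 12-byte ObjectId are a big-endian timestamp,
# i.e. the first 8 hex digits of its 24-char hex form are seconds
# since the epoch.
def objectid_created_at(hex_id)
  seconds = hex_id[0, 8].to_i(16)  # first 4 bytes as an integer
  Time.at(seconds).utc
end

# Usage: any 24-char hex object id works.
t = objectid_created_at("4c2209f9aabbccddeeff0011")  # a Time in UTC
```

The real drivers expose the same idea as a method on their ObjectId class; the point is just that the creation time is free, so a separate created_at field is often redundant.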
And so the philosophy is basically that you can always add machines and you can always add clients. If you're gonna pound the database, you shouldn't necessarily have to wait for a response, and so the drivers end up doing a little bit more of the work. So the second rule is to use object ids. One of the things we've seen in the Ruby community is that a lot of people decide to use a string version of the id instead. But ObjectId is a proper BSON type: it's a little more efficient than the equivalent string, it has that timestamp built into it, and it's the standard. So if you're working with object ids, that's just one gotcha to keep in mind. The next rule is to use rich documents, and this is probably one of the really important reasons why MongoDB is great: you can represent rich data structures within a single document. Now I'll give you an example. This is a single document right here, representing a cart, an order. You can see a lot of data in there, and I'm gonna explain it piece by piece. But in the interest of talking about what's so great about rich documents, I want to go on a really quick tangent, because I always like to show this in my presentations. I think it shows an anti-pattern that is often solved well by document databases, and particularly by schema-less document databases. And that's this. So I like to show this database diagram. This is a relational schema diagram from Magento; it would look much the same for any other huge e-commerce app. But you can notice some patterns. One of the things you notice just looking at the overall view of this diagram is that you see a lot of little tiny tables. Can everyone see those? If you look at the different entities, you'll see all these tables. And the question is: what are these tables doing?
And a lot of you know exactly what they're doing, because a lot of you have been in this exact same situation: you're simulating a flexible schema, right? So we have all these tables right here, these five here, and the only way in which they differ is that each one has a field representing a different type. So if I need to add a dynamic integer attribute to a product, that table is used; if I need to add a dynamic string attribute, et cetera, et cetera, you understand. And the really important question is: what is the join like? You cannot sit down, open up a relational database console, and, as a sane person, say, I want to see what this product looks like. You sit down and you can't do it, right? Unless you're a robot. So basically, you can't really reason about this style of data. But I think that you can reason really well about document-style data. And this is totally oversimplified, but if I can represent all those relations in a document, and if I'm allowed to modify that document as needed, adding attributes as needed without any kind of special migration or extra tables, there's a win there, right? I can sit down at the JavaScript shell in MongoDB and say, give me this product, and I get a complete representation with all the data right there in JSON, which I can read as a human being. So what I like to say about MongoDB is that it's human-oriented, and I compare it to Ruby in this way. With a document model, we're able to look at objects as holistic entities, as opposed to completely normalizing and decomposing those objects. Not that normalizing isn't necessary in some cases; it definitely is. But I see this as a huge advantage of a document-oriented database and a good reason to use it.
So if we go back to our order here, our cart: you can see that I have a couple of line items, and one is a lumberjack laptop case. I bought this laptop case online because I thought it looked kind of cool, and when I got it, I realized it looks like a lumberjack's laptop case. It's kind of flannel and it's not very cool at all. But anyway, that's my lumberjack laptop case, and you can see I have the SKU and the list price. You can see I can also store the shipping address in the same document, and different kinds of totals at the bottom. If I go back, though, to the line items: that key points to an array of other objects, and all of that is represented pretty economically, I feel, in this one document. But this wouldn't be of any value to us if we couldn't actually query it or manipulate it in the database, and that's the nice thing about MongoDB: you can actually query on these items and manipulate them. We say that MongoDB has dynamic queries, just like a relational database; I don't have to define everything up front. So you see a super simple query right here: I'm just looking something up by user id, and that's totally straightforward. But look at the second query here. The first thing is I'm creating an index, and you'll notice that the index on this collection is on an inner object: I'm creating it on line_items.sku. So if you look at the line_items objects in here, each has a sku field. When I do that query on that attribute, on line 49, I actually use an index, so it can actually be efficient. I don't have to iterate over all these documents; I can do efficient queries over a rich structure. Here's a slightly more complicated query, but I just wanted to give an example of what's possible. Let's say I wanted to find the most expensive product purchased in the last week.
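From Ruby, a rich order document like the one on the slide is nothing more than nested hashes and arrays. A minimal sketch, with field names and values invented for illustration (with the driver, you'd just pass this hash to the orders collection's save):

```ruby
# A rich order document as plain Ruby data. Field names here are
# illustrative, not taken from the slide. Line items are an array of
# embedded documents; the shipping address is an embedded document too.
order = {
  "user_id"    => "u123",
  "created_at" => Time.utc(2010, 4, 1),
  "line_items" => [
    { "sku" => "FLANNEL-1", "name" => "Lumberjack laptop case", "list_price" => 3500 },
    { "sku" => "VINYL-9",   "name" => "Some record",            "list_price" => 2400 }
  ],
  "shipping_address" => {
    "street" => "123 Main St", "city" => "Boulder", "zip" => "80305"
  },
  "sub_total" => 5900  # totals cached at the bottom, in cents
}
```

The whole cart round-trips as one document, which is exactly why the line_items.sku index in the talk can work: the server can reach inside the array with dot notation.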
So you can see on lines 52 and 53 there, I'm creating a date range, and then I just pass those in as a document. My query is a document: my data is a document, and the query is also a document. So I'm saying: where created_at is greater than last week and created_at is less than today, sort by the total, and limit to one. And that can use an index, so it can be an extremely efficient query and still allow us to use that document style really easily. These are some other query operators, like $in, for example, where we can pass in an array, plus $all, $size, $exists. So there are a lot of special query operators that allow you to do a lot of what you can do in a relational database. Okay, so rule number four is that array keys rule, and let me just give you an example. You can really simplify tiny relationships; tags are a really good example. So you see this: I have a document here on line 68, and it's a link. Say I'm building a social news site and I want it to be taggable. So I have a link, a title, a URL, and then an array of tags: tech, startups, and time-waster. And then I call links.save and save the link. Now I can have an index on that array field. So I'm creating an index on tags, so that when I do this query, when I say find me all the links that are time-wasters, that's actually efficient; it's actually not wasting any time. So this uses an index, and you see the query is very straightforward: I don't have to do any special iterating over the document. That's just a nice thing, whereas in a relational database, we'd have this really small table with some kind of user id, a tag id, and a piece of text for the actual tag, something like that. Here we can just totally eliminate that level of normalization and throw the tags right into our document. So that's one reason that array keys rule.
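Because queries are themselves documents, all of the queries above can be sketched as plain Ruby hashes; with the driver, each one would be handed to a collection's find. Field names are assumptions for illustration:

```ruby
# Queries are documents too. These are plain Ruby hashes; field names
# are illustrative, not taken from the slides.

# Exact match against an array field: matches any link whose tags
# array contains "time-waster". This is what the tags index serves.
by_tag = { "tags" => "time-waster" }

# Range query with $gt/$lt, as in the "last week" example. The driver
# would also take :sort => ["total", :desc] and :limit => 1 options.
a_week = 7 * 24 * 60 * 60
range  = { "created_at" => { "$gt" => Time.now - a_week, "$lt" => Time.now } }

# A few of the other operators mentioned.
ops = {
  "tags"   => { "$in" => ["tech", "startups"] },  # any of these values
  "rating" => { "$exists" => true }               # the field is present
}
```

The symmetry is the point: data in, queries in, results out, all the same hash-shaped structure.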
The second reason they rule is that you can represent more complex things, like many-to-many relationships. Say I have products and categories. You can see the category here, just a title: vinyl. The product is some record, and its category ids are just an array of object ids. Now I can have an index on that, so that any query on it is efficient. If I want to find all the products in the vinyl category, the query is super simple: I just pass this document, products.find where the category ids array contains that object id, and MongoDB knows how to use that array efficiently and do an efficient query in that case. If I want to find all the categories for a product, it's the same thing, just a reverse process. It's a slightly different query, but the important thing is: no join table. I can represent this really simply. So number five: use atomic operators. A lot of you are probably familiar with this; Redis is pretty famous because it uses atomic operators. MongoDB has a whole bunch of atomic operators as well, allowing you to modify a single key in place, which can be really efficient: you don't have to pull the document down, modify it, and send it back up to the database. So for example, say you're building Hacker News and you want to implement that upvote functionality right here. The query looks like this: we want to find a post with a given id, some id x, where the voters array does not contain this user's id. So that query only succeeds if that voter hasn't voted on that particular item yet. And then the update looks like this: if the query matches, we're gonna $push, which is an atomic operation, the voter's id onto the voters array, and then we're gonna $inc the votes by one. So we're caching that count so we don't have to dynamically count the votes every single time.
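The upvote boils down to two documents, a selector and an update. A hedged sketch as plain Ruby hashes, with ids invented for illustration; with the driver you'd pass both to something like posts.update(selector, update), and the server applies the modifiers atomically on the matched document:

```ruby
# Upvote as selector + update documents. Plain Ruby hashes; the ids
# and field names are illustrative.
user_id = "u123"

# Matches only if this user hasn't already voted on this post:
# $ne on the array means "the array does not contain this value".
selector = { "_id" => "post42", "voters" => { "$ne" => user_id } }

update = {
  "$push" => { "voters" => user_id },  # record who voted
  "$inc"  => { "votes"  => 1 }         # bump the cached vote count
}
```

If the user has already voted, the selector matches nothing and the update is a no-op, so the guard and the modification happen in one server-side operation with no read-modify-write race.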
And somehow I put that slide in twice. But what I meant to show is that you just send those up to the database: when you do an update operation, you have the query for the particular document and then the update, and I use those atomic update operators right there to handle all of it in place. I don't have to bring anything down from the database, and it can succeed all in one operation, without going to multiple tables or anything like that. So another kind of cool example is concert seats. We have a special command called findAndModify. Usually, when you do write operations to MongoDB, as I was saying earlier, you don't wait for a response from the database, so when you do an update, you're not gonna wait for any kind of response. But if you do need a response, we have this special command, findAndModify. Commands are special entities in MongoDB, and you basically create them with a hash. So I'm saying: look at the seats collection here. In the query on line 186, I'm saying find a particular seat for a particular concert with no expiration date, and then I'm doing an atomic update operation right here, setting an expiration date, say 15 minutes out: they have to make the purchase by then or they can't have the seat, right? So I send that command and I get the document back, and I get the document back only if the update succeeds, so I can ensure that level of consistency for that kind of item, if there's some kind of inventory or something. So even though we don't have full transactions, you can still use these atomic modifiers in a lot of situations. And here are some more of the update operators: incrementing values, setting given values, and then a lot of different operators for pushing and popping things from arrays. And like I said, array keys are just something that people find really useful. So, we also have something called MapReduce.
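Since commands are created with a hash, the seat hold can be sketched the same way. A minimal sketch assuming the findandmodify command shape of that era, with collection and field names invented for illustration:

```ruby
# findAndModify issued as a command document (plain Ruby hash; the
# collection, seat, and field names are illustrative). The server
# matches the query, applies the update atomically, and returns the
# document, so getting a document back means you hold the seat.
hold = {
  "findandmodify" => "seats",
  "query"  => { "concert_id" => "c1", "seat" => "A12", "expires_at" => nil },
  "update" => { "$set" => { "expires_at" => Time.now + 15 * 60 } }
}
```

The nil check in the query is the guard: two clients racing for seat A12 can't both match, because whichever update lands first sets expires_at and breaks the match for the other.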
Most people have heard of MapReduce and think of Google's MapReduce, the large, distributed-computing style. That's definitely one kind of MapReduce, but originally it's just a functional paradigm, right? What you use it for in this case is aggregation, and the results always come out in a new collection. So here's an example, using the order document you saw. Let's say we wanted to find the total of a bunch of orders from a given zip code. We emit a value, and you can see in our map step here, I'm writing a JavaScript function, because the database understands JavaScript; even if I'm driving this from Ruby, I'm still sending a JavaScript function to the database. So in this case I'm emitting the zip code as my key, so things will basically be grouped by zip code, and then I'm emitting a document with the total as the value. And then, and I always explain it this way, imagine your reduce function receiving a unique key and an array of values. (There's a typo inside that array of values on the slide.) Our reduce function just grabs those values and sums them all up, and what you get when the MapReduce job returns is a sum. So you can do all kinds of aggregations, averages, counts, things like that, with MapReduce. It's typically not used in a live application, but in a more extract-transform-load style situation. Okay, seven: indexes are indexes. In MongoDB, collections support up to 40 indexes, and the indexes are the same data structure as is used in a relational database, which means a couple of things: they're gonna behave very similarly. So you have to use all the knowledge you have about building indexes in a relational database to build indexes in MongoDB.
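On the server, the map and reduce steps are JavaScript, but the paradigm itself is easy to simulate in plain Ruby. A sketch of the zip-code aggregation with made-up sample data: map emits key/value pairs, and reduce receives each unique key with the array of values emitted for it.

```ruby
# Pure-Ruby simulation of the zip-code MapReduce. Against MongoDB the
# map and reduce would be JavaScript functions; the data here is made up.
orders = [
  { "zip" => "10011", "total" => 25.0 },
  { "zip" => "10011", "total" => 10.5 },
  { "zip" => "94103", "total" => 99.0 }
]

# map step: emit [zip, { "total" => ... }] for each order
emitted = orders.map { |o| [o["zip"], { "total" => o["total"] }] }

# reduce step: for each unique key, sum the array of emitted values
results = emitted.group_by(&:first).map do |zip, pairs|
  [zip, { "total" => pairs.sum { |_, value| value["total"] } }]
end.to_h

# results["10011"] => { "total" => 35.5 }
```

In MongoDB the results hash would land in a new collection rather than in memory, but the grouping-then-folding shape is the same.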
So for example, we've seen people with 80-gigabyte databases who create indexes and their database freezes up on them, even though indexing now happens in the background. It's still a huge I/O operation. So it's really important to keep in mind that you've still got to be smart about indexes: a build can take some time if you have gigs of data, not if you have megs of data. Really be smart about compound indexes, and make sure that your queries match your indexes, or rather, your indexes match your queries. That's incredibly important in any database, and it's important in MongoDB. You see the line of code at the bottom of the screen here; you see this a lot in some of the ORMs, and I think it's a remnant of older ORMs. Basically we're saying: define a key, username, and then index is true. And some people are really cavalier about this: they'll be like, oh yeah, index is true, everywhere, and then their database pays for it. So you really have to be careful about indexes. Okay, so GridFS is the specification for storing binary data in MongoDB. You can store large files in MongoDB, and it basically works at the driver level; it's not technically a feature of the database itself. So suppose you have a picture of a lumberjack, and you want to store that in MongoDB. Here's the API; it's pretty simple. You create a Grid object on line 230 there, you open up this lumberjack file, and then you just call grid.put, giving it the file descriptor and a file name. The way this works is that it writes to two collections. Everything in MongoDB works at the collection level; collections are analogous to tables, and I'm not sure I explained that earlier. So we have a files collection for the metadata and a chunks collection for the data itself, and that data gets automatically chunked into chunk documents like this one, one for each chunk of the data.
And MongoDB is pretty fast, and I'll explain that in just a second, so GridFS can be efficient for storing data like this, and it can be really nice for the organizational win you get by doing it. So, a little bit about MongoDB's speed/durability trade-off. One of the techniques used is memory-mapped files, and probably a lot of you know what those are: a kernel facility for mapping files into virtual memory. So when we're writing information to the disk, it's almost as if we're writing to memory, and the kernel manages which of that data is actually resident in memory and which of it is still only on disk. One important consequence is that the kernel handles syncing to disk in a lot of cases, and the database enforces an fsync to disk every minute. But if you just shut down your master node hard, it's possible that you're gonna have some corruption. And so what we say is that you need to replicate: you need a master-slave style setup if you're gonna be serious about durability. You can adjust the fsync interval if you want to fsync every 30 seconds. But MongoDB's trade-off is this: we think that the speed gained by using memory-mapped files, that is, by allowing a lot of data to be served from memory, is more important than being super durable on a single node, and that durability is better achieved across multiple nodes. So replicating and backing up are important. It supports the same kind of replication that you'd be used to in, for example, MySQL. A couple of performance notes about MongoDB. The drivers do quite a bit of work; I showed you a second ago that we generate the object id and serialize to BSON. BSON is the format in which the data is stored in the database itself, and it's the format of our queries. So the drivers actually do quite a bit of work, and the Ruby driver, because it's Ruby, can be a little bit slow, which is okay.
But if you're doing benchmarks with Ruby, the one thing I'd say is: don't benchmark from a single node or a single process. Make sure you benchmark across multiple processes, because MongoDB isn't gonna be the bottleneck, right? If you start pumping data into MongoDB via the Ruby driver and you look at your CPU consumption, almost everything will be in Ruby and pretty much nothing in MongoDB. Some of the guys I was talking to here, Alan, I can't remember the company, are doing a lot of analytics-style data gathering with MongoDB, and for the analytics side of the application, they've actually decided to use Clojure on the JVM with the Java driver, which gives them a huge performance win: they can use multiple cores, and they get a huge speed improvement. So in the worst-case scenario, you can use a different language's driver, but in a lot of cases, Ruby is very good. Definitely use embedded documents. I didn't talk about this: there's the mental win you get from embedded documents, but there's also a computational win, because we don't do any joins. That's part of the scaling strategy; the database doesn't do joins at all. But if you can store a rich relation inside a single embedded document, which you often can, then the server doesn't have to do any work when you retrieve it, so you get a big performance win in that case. Queries should definitely use indexes. I think a lot of you know that, but we've seen people who don't necessarily know that, so it's just good to be reminded. Keeping indexes in RAM and keeping your working set in RAM are both really important. Those are important with relational databases, and they're important with MongoDB as well. We've seen people with 80-gigabyte databases who have four gigabytes of RAM and four gigabytes of indexes, and they're wondering why performance is hurting. You really need more RAM, or you need to scale out. So: auto-shard to keep it in RAM.
So the idea is that if you can't keep your indexes in RAM, if you can't keep your working set in RAM, then you need to shard, and we have a feature called auto-sharding. I'm not gonna explain it in too much detail; I'm just gonna give you the basics here. Data is automatically chunked across the shards in the cluster, and you route all of your operations through a mongos process, which automatically distributes your queries, updates, et cetera, to each of the individual shards. All the shards are completely autonomous; they don't communicate with one another, which is the reason we don't support multi-object transactions or joins. But this gives you the ability to really scale writes and reads, and it's pretty cool. You can set it up right now: you can stand up all the shard nodes on your Mac and get a sense for how it works, and I'd be happy to help anybody out with that. But it doesn't apply to everybody, right? Like I said, scalability isn't the only reason; it's also about the mental win you get from using a document database. So, just two production cases, and then hopefully I'll have some time for questions. I wanted to go through this quickly to leave room for more questions. Sorry? Oh, five minutes. I don't have much time, actually. So: SourceForge.net stores all their project pages in MongoDB. They can store all the information for a project page in a single document, so it gives them a huge win in terms of queryability: they don't have to do any joins on the server, and if they need to add attributes, that happens completely dynamically. They've been running it since last June, and you can see presentations on it on the web from various Python conferences; they've talked quite a bit about it. Oh, shoot. I meant for that slide to say GitHub. GitHub's actually using it for back-end analytics, and I had to mention GitHub because I know we're not really SourceForgers here.
And then I want to talk about Harmony, which is John Nunemaker's pretty cool CMS. They were originally on MySQL, but they switched to Mongo and it completely simplified their schema. Especially when you're dealing with something like a CMS, it makes sense to have something really dynamic, where you can store a lot of information in a single document. And there's a whole bunch of other production deployments; you can check our page to see details on those. And I'm happy to take any questions. Yes? Are you familiar with Resque, the queue that's built on Redis? Is that anything you could build on Mongo? I think so. Now that you have that findAndModify command I showed you, you can easily go in, set the status of something, and bring it right back in the same operation. So yeah, I definitely think you could, and I think somebody is actually working on it. Yes? Can you describe a little more what you mean by an embedded document? Okay, well, you don't have to call it an embedded document. All it really refers to is the idea of a document containing documents within documents. You saw, for example, in that order example, that the line items were represented as an array of other documents inside it, right? So that's one example. Another example might be: if you had a simple blog and you wanted to put all of your comments inside the post object, you could do that. That's the idea of embedding one structure inside the same structure. Basically, any structure you can represent with a Ruby hash, you can send to MongoDB. Sure. What are the differences between MongoDB and CouchDB? Yeah, I can't totally cover all of them; there are a lot of differences, actually. The biggest difference, I think, is dynamic queryability. So CouchDB, well, there are a lot of differences, really.
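That blog example can be sketched as plain Ruby data; the field names and comment text here are made up for illustration:

```ruby
# A post with its comments embedded, as described in the answer above.
# Plain Ruby hash; field names are illustrative. The whole structure is
# saved and fetched as one document, so no join is ever needed.
post = {
  "title"    => "Why array keys rule",
  "body"     => "Tags, voters, categories...",
  "comments" => [
    { "author" => "kyle", "text" => "Nice post!" },
    { "author" => "ann",  "text" => "Agreed."    }
  ]
}

# Reading the comments back is just hash access:
authors = post["comments"].map { |c| c["author"] }  # => ["kyle", "ann"]
```

Whether comments belong inside the post or in their own collection is exactly the judgment call discussed a bit later in the Q&A: embed the trivial relationships, normalize the full-fledged entities.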
CouchDB uses HTTP; MongoDB uses a binary protocol over a socket. CouchDB focuses on a multi-master replication scheme, whereas MongoDB's path to scalability is its auto-sharding mechanism. In CouchDB, you query by building indexes with map/reduce: you write map/reduce functions that build a B-tree index in the background, and then you're able to query against that. Whereas in MongoDB, you're able to do ad hoc queries as you will and build indexes more in the way that you would in MySQL. Also, MongoDB supports atomic in-place updates, which I don't think CouchDB supports. So those are a few of the differences. Yes? Kyle, a lot of us come from an SQL background; when we started out with Ruby, we were writing Java-style Ruby, and we don't want to make the same kind of mistake moving to this model. How can we break out of our SQL background? What's the high-level design approach? Is there good documentation on that? I've written some documents on data modeling, and we have a couple of documents on our website about it. In general, I would say that trivial relationships that exist in a relational database should be contained in a single document. And then there are more complicated relationships, where we have two full-fledged entities. That's sort of a judgment call: you need to decide what your use cases are, and whether that's gonna fit in a single document or whether you should normalize. And there's nothing wrong with normalizing in MongoDB a little bit. Plenty of people do it; it's designed for that as well. Yes? I've heard people say maybe you should use short key names, because the keys are stored over and over. Is that a problem? I don't think that's a problem in practice, though people have worried about it. I haven't really seen it matter in any cases. Yeah? Are there any things you have to worry about with character sets and encoding? Can you store any given encoding?
Yeah, so for the moment, everything that goes in needs to be UTF-8. We're not supporting arbitrary character sets, so yeah, there may be some way to go in terms of that. Yeah? There seem to be a number of different binary serialization standards out there: BSON, Thrift, Avro, Protocol Buffers, and so on. What do you see as the benefit of BSON versus some of these others? Or do you see them converging? I'm not an expert on that topic. I guess the reason I see BSON as a good standard is that it maps very nicely onto data structures we all know and work with: JSON, Python dictionaries, Ruby hashes. Yes? What's the potential for serving GridFS files directly from a web server like nginx or Apache? People have actually built those, so it's totally possible, yeah. In fact, one of my coworkers, Mike, has built the beginnings of an nginx module that goes right in and grabs the files directly. What's on the roadmap for the future? So the main thing right now, well, let me mention one cool thing first: we just added geospatial indexing, which is really good for a lot of use cases, like find me the 100 objects nearest a point. One of our customers needed that, so we added it. The big thing is getting sharding to beta, and the thing that has to happen for that is 100% automatic failover. Right now, for the individual shards, you'd probably have to manage failover manually. There's a concept called the replica set, for automatic failover; I can tell you all about it after this. But obviously, the sharding work is really, really important, and that's over the next few months. Over the next month or so, we should hit beta, and within the next six months, definitely production. Yes?
So, speaking of sharding: if you go back a few slides to the diagram of the sharded architecture, you've still got the mongos, and isn't the mongos a single point of failure? Kind of no. The mongos is a super lightweight process that would live on the same machine as your application, so you could have multiple mongos processes across your application. There's no persistence within that process; it gets all of its data from the config servers, and those are all redundant. The idea is that everything is redundant, so there really isn't a single point of failure. So yeah, if your app server dies, then that mongos fails, but then your app has failed too. [An audience member points out that the diagram is missing arrows to one of the mongos processes.] Yeah. Yeah, sorry, I forgot an arrow there. All right. I can talk to you more about it afterwards. Maybe one more question? One more question, Nate. Or maybe not. Okay, yeah. Do you have any tools for finding out, after the fact, which of your queries are slow in production? Yeah, we have explain. You can run explain on any query and see the query plan and how the query optimizer is working. All right. Great. Thank you very much.