Others can come in and pick up in progress. Speaking of which, good afternoon, and thanks for coming. I'm Matt Ingenthron, and I'm in charge of a group at Couchbase (slides are a little slow there) called Developer Solutions. What my group does is build all the client interfaces into Couchbase as a cluster, including all the SDKs: Java, Ruby, et cetera. This afternoon I want to talk a little bit about polyglot application development and what that looks like in this world of NoSQL in general. Of course, I'm going to look at it from a document database background, but I did include some other information here as well, so hopefully this is what you're hoping to learn. Just really quickly, is there anything anyone was hoping to hear in particular out of this session? No? Most of you are just trying to get a definition for polyglot? OK. I was actually asked in the hallway, "What is a polyglot?" The quick answer is that a polyglot, in the language sense, is somebody who speaks multiple languages. There's also a term that's become common in our industry recently, polyglot persistence, where people talk about storing the same data in multiple persistence systems in order to access it in different ways, because each system has certain capabilities we can exploit. In this particular case, though, I'm referring mostly to the application side. I would assert that modern applications really must manage any type of data these days, and that modern systems must be ready to scale, must be to some degree fault tolerant, and must to some degree be able to adapt to change. A lot of what we do at Couchbase, and a lot of what's happening in the NoSQL industry in general, follows along those lines, and I'll give a couple of examples later. The reality these days is that you don't necessarily highly structure the data.
I mean, there are certain situations where you have machine-generated data with a particular structure you're able to work with. But for the most part, when we're dealing with these languages and persistence systems, we expect them to be flexible with respect to data. We certainly expect them to scale. I remember doing web stuff back when it was Web 1.0; I don't know if anyone else here was doing Web 1.0. Back then, scale was a very different kind of animal, and user expectations are actually quite a bit different now. Until recently, and maybe it's still this way, on a Facebook page you could hover over the copyright and it would show you how quickly that page had loaded, in milliseconds. They stuck that in there as a way for their developers to see how quickly the page had loaded. And if you looked at the complexity of the page, that's a lot of data getting rendered in a very short period of time. So scale actually means a couple of different things. One is that, as developers, we always want to add features, so we want the people running the systems behind them to be able to scale so we can still meet the service level for our end users. And there's an expectation with a lot of these modern applications that we can just do a lot more. Personally, I hesitate to use the term real time, but I will in this context; I'm sure you know what I mean, because real time in computing actually means something else, and real time to business users sometimes means a different thing. And then with modern applications, people expect them to be fault tolerant. It's very rare anymore that somebody deploys an application and you're allowed to have a service outage. I remember working with one utility years and years ago where Saturday was the service outage window, so you couldn't work on a Saturday.
But these days, especially if you're running a large, consumer-facing web application, it's pretty much expected that it's going to be fault tolerant. It's going to be available, and even be available in the face of other kinds of change. For example, suppose I want to adopt a new version with new features, either on the application side, which is typically easier because it has less state associated with it, or on the persistence side, which is harder because the state lives there, so updating it takes a little more effort. We want to be able to adapt to those changes. So, looking back, and you may have heard some of these terms before, but hopefully it's new to at least a couple of people: historically, what we did in the 90s and early 2000s really was a standard architecture, a regular approach. At the time, we would talk about three-tier architectures, and I'm sure all of you have drawn the diagram on the right-hand side a bunch of times, saying this is how it works. Realistically, what we frequently did at the time was use the same language across that middle tier, with that same language touching everything. It was HTML out the front end, the same language in the middle tier, and typically a single database, or maybe, if you got fancy, a clustered database of some sort, typically an HA cluster. But let's break that down a little. For modern approaches, is that really what we need? The reality these days is that we have lots of different data sources. The data sources are less likely to be just a user in front of a browser. We may need to ingest data sources that are really metrics from the system or from our mobile devices, and we may have a lot of other data sources feeding in.
At Couchbase, we see that a lot in places like ad targeting and recommendation engines, because there are situations where people are now taking their log data as a feedback loop into deciding what information to render for a particular person at a particular point in time. Those are actually a lot of the more polyglot kinds of situations. So there are lots of different data sources, XML, et cetera, that we need to work with now; it's not just user input. If we look, for example, at something like Hibernate or JPA, is that really a good fit for NoSQL systems? Hibernate, in its current state of the art, was built with the idea that there's a substrate under it, and that substrate is SQL. The reality is that the substrate is, in some cases, fairly limiting, certainly around scale and high availability, at least within certain bounds. So if you want to break free of that limitation, you can look at some of the NoSQL-type systems, whether it's Couchbase or others. And then you get to, well, now I've got this, or even in the SQL case... sorry, I went down a different path there, didn't I? So you've got this impedance mismatch, right? If you're object-oriented and you're using traditional Hibernate, which expects SQL out the back end, you have this impedance mismatch. As application developers, sometimes you see that and end up having to work through it. Or sometimes you toss it over the wall and the DBA has to work through the SQL statements that come out of it. Not that any of you would ever do that; I'm sure you always tune things very well. But then you need to scale. And this one is an actual example, which I'll talk more about in a bit: going from zero to 50 million users in about six weeks. There are real, live situations where people have needed to do that, and they have actually been successful doing it.
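To make that impedance mismatch concrete, here's a small illustrative sketch in plain Python (the names and data are hypothetical): the same application object mapped once into normalized relational rows, the way an ORM has to shred it, and once into a single JSON document.

```python
import json

# A typical application object: a user with several shipping addresses.
user = {
    "id": 1001,
    "name": "Ada",
    "addresses": [
        {"street": "1 Main St", "city": "Springfield"},
        {"street": "9 Elm Ave", "city": "Shelbyville"},
    ],
}

# Relational mapping: the ORM must shred the object across two tables
# and reassemble it later with a JOIN. That's the impedance mismatch.
users_table = [(user["id"], user["name"])]
addresses_table = [
    (user["id"], a["street"], a["city"]) for a in user["addresses"]
]

# Document mapping: the whole object round-trips as one JSON value
# under one key, so there's no JOIN and no object/row translation layer.
store = {"user::1001": json.dumps(user)}
restored = json.loads(store["user::1001"])
```

The point isn't that one side is free of work, it's where the work lives: the relational side pushes mapping complexity into the ORM and the database, while the document side keeps the object whole.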
Can you do that with the previous generation of architecture? Realistically, in most cases, you can't. Plus there's the whole question of how to add features easily: code changes, schema changes, et cetera. This is usually where I like to relay a story from the MySQL conference, where one of the Facebook guys was talking about a schema change they needed at Facebook. A product manager said, hey, I need this new feature on this particular thing, whatever it was. The developers said, OK, no problem, we can do that; we've just got to get a schema change. They go to the DBAs and say, we just need a new column on this table. The DBAs say, OK, let's see, we've got this many systems; we can have that for you in about eight months. That was what the Facebook guy was talking about in that particular environment, and so they had gone and invented another set of tools. That's obviously an extreme. But for them, adding a column isn't adding a column to one table in one database; it's adding a column to one table in a massively sharded environment where a lot of other things are going on. It's a very disruptive process, certainly disruptive from a performance standpoint. And it's a little hard to go back to the product manager and say, yeah, we'll get that for you in about ten months, especially in an environment that's very, very competitive. Frequently people need it yesterday. And then finally, from an app developer perspective, the way we create code has changed. Sometimes it's style, or fashion almost, but sometimes you find that we have genuinely evolved the state of the art beyond the OO development we were doing with things like Java in the 90s; lots of new frameworks have come along that make our lives easier. So generally speaking, we would argue that one size doesn't fit all in this environment. Just to look at it a different way, let's look at it from a data perspective. From a data perspective, we certainly need a more flexible data model as well.
So, this is a little hard to read here. I think it's an IDC study talking about the amount of data that's required, and what you may notice from that curve is that it's more exponential than linear or logarithmic. We have a lot more data coming online on a regular basis. Now, the interesting part about this study was that the amount of structured data is relatively linear. According to this IDC study, structured data is growing along normal, regular growth patterns. But the amount of unstructured and semi-structured data, text, log files, click streams, blogs, et cetera, is growing very quickly. I have a colleague in the ad targeting space in Los Angeles who had a way of putting it. He said they started just keeping everything. Maybe it's because they're in the ad targeting space, but the way they came to that conclusion was that they calculated what it costs, in cents, to keep a byte of storage, and then asked, OK, what are the chances we can use that byte to make a penny? If they could at least do that, then maybe it made sense to just keep everything. The ad targeting space has gotten more and more sophisticated over the years, so for them it made sense. But that's not exclusive to that domain anymore. Orbitz is one of our deployments. I've done a lot of work with social gaming companies, and those guys talk about things like compulsion loops: how do you keep people coming back? They need to understand user behavior well enough to bother you just enough to get you back into the game, but not so much that you'll uninstall the app. You're talking about apps that make tens of millions of dollars a month, so it's worthwhile to do that kind of thing. This applies to all businesses these days.
So it kind of started off with text, log files, click streams. Those are valuable things in our world these days, and lots of companies are trying to make sense of how to use them. The structured data is relatively linear; the unstructured and semi-structured data is certainly growing quite a bit. I know we're on a bit of a tangent, but it all leads to the question: from a polyglot perspective, how do we really address this, on both the polyglot persistence side and the polyglot application development side? So, I mentioned earlier a real-world deployment. There's a game, or was a game, I'm sure it still exists, but there was a very popular game for a period of time called Draw Something. Anyone in here ever play Draw Something? OK, you can admit it; a few people. OK, great, you've all used Couchbase. Thank you. Draw Something came from a company named OMGPop, a smaller game company that had launched it. As you can sort of see from the chart, and I'm going to step out here so I can see it too, they measured daily active users in millions; that's how game companies tend to measure things. Daily active users is their metric for whether something's working. And they'd done pretty well with Draw Something. Here's time across the bottom, and here's daily active users in millions. Actually, I thought that number was wrong; I think they were in the tens or hundreds of thousands at this point. So they were doing OK. And then, these are the calendar dates across the bottom: around March 1, one of the stars of Jersey Shore tweeted something about Draw Something, and that led to their first hockey stick in growth. At that point they had a Couchbase Server cluster with six nodes. By the way, we had never talked with them and they had never talked with us; Couchbase is open source.
They downloaded it and deployed it; they'd heard about some of the things we'd done. Anyway, so that's growing, and it got them up to, the number's sort of obscured, but I think about 4 million daily active users, and it leveled out there for a little bit. And then Miley Cyrus tweets: "It's official. I'm addicted to Draw Something. I need an intervention." That put them on their next hockey stick of growth, which went up for quite a while. By the way, if anyone here is in the social gaming industry, work out your celebrity marketing strategy up front for getting the game out there; that's apparently what you learn from reading this chart. So that led to their next hockey stick in growth, and at one point they hit a very large number. For quite a while they were the number one game, and somewhere around here they were acquired by Zynga. If I remember the number correctly, it was a $250 million acquisition. The interesting thing here is that this is February 6th and this is March 21st: 45 days. In 45 days they went from zero to 50 million users and got acquired. While we'd all love to have that problem, and we don't all necessarily have it, the good news was that because of some of the technologies they'd picked, both from a polyglot persistence perspective and from an application development perspective, they were able to scale through that growth. It doesn't mean it was trivial, but they were able to do it without taking the game offline, without app downtime. There was downtime on a per-user basis, and there were certainly some interesting things that occurred, but they were able to grow very quickly. So we're pretty proud of that particular case, where things worked out quite well. So how does this apply in your environment? How do you take on this kind of growth?
Historically, going back to that previous diagram, the Web 1.0 three-tier architecture, if you were to try to scale that, you would tend to get one of these curves that doesn't look so good here in the lower right, versus what we really desire: the ability to have system cost and application performance scale linearly as we add things. So how would we historically do that, scaling out the RDBMS? Anyone here ever do MySQL sharding before, at the application layer? Oh, wow, OK, I'm the only one. It's not as fun as it sounded back in those days. The goal, sort of, is: hey, this app server layer scales out nicely, mostly because it's stateless. So first we'll stick a caching tier in here, a memcached tier, and that'll help us scale a bit. Then after we run out of steam on that: that's OK, we'll just take the databases and break them down by username or something like that. It's OK, we can just go change the application. And then you find that a lot of the queries have to change, because now I may actually have to query multiple systems and do range-partitioning-type queries. Before long, my application is doing a lot of this work, and it ends up being a lot more challenging. It is an option, though. Although, personally speaking, I would say that by the time you start doing this at any reasonably significant scale, you really don't have a relational database anymore. You can't do a relational query across the system unless you've sharded along that one dimension, and I've seen that one a couple of times. Versus what we try to do with NoSQL data stores, and this is not exclusive to Couchbase, but not all of them do this either, which is a different question. It's that funny sort of thing: well, what is a NoSQL system?
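The application-level sharding I just described can be sketched in a few lines (a minimal, hypothetical router; real deployments also have to deal with resharding, replicas, and cross-shard queries):

```python
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]  # say, four MySQL instances

def shard_for(username: str) -> str:
    """Pick a shard by hashing the username, so the same user
    always lands on the same database."""
    h = int(hashlib.md5(username.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

# Single-user operations are easy: route to exactly one shard.
target = shard_for("alice")

# But a query like "all users in city X" now has to fan out to
# every shard and merge results in the application. This is the
# point where it stops feeling like one relational database.
fan_out = {shard_for(u) for u in ["alice", "bob", "carol", "dave"]}
```

Notice that the routing function itself is trivial; the pain is everything around it, which is exactly the "application doing a lot of this work" problem.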
I don't know; I guess anything that doesn't speak SQL and stores data, sort of. There are lots of things that could fall into that definition. But many of the NoSQL systems... in my previous presentation I referenced Martin Fowler, who talks about how, when you start trying to scale these systems, one really simple thing that works well from an application developer perspective is to store things as an aggregate. To us, that aggregate means a document. To others, it may mean a column store, a sparse column store with different records filled in, et cetera. But in general, we're all trying to do the same thing: make that layer scale out. By making it scale out, we can flatten the cost and performance curves as we grow the application. So let me pause for a moment and see if there are any questions. Any questions? Or is it just lots of "yep, I agree with all of that, interesting story so far"? Obviously, you're at this conference for a reason. So, a couple of other things about this space, going a little bit further on the polyglot side. You could argue, and this is even true with relational databases... I remember my first data warehousing project; that's when I learned, oh, you do star schemas and things like that. You're laying your data out in different ways so that the RDBMS can optimize the way you access it. And there was a whole era of roll-up tools, where people were trying to say, oh, you don't really need to do that; just use the same relational structure and we'll come up with better ways of indexing. But in the end, the way the industry evolved was really down two paths.
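An aggregate in Fowler's sense is just the whole unit the application reads and writes together, stored under one key. A minimal sketch (the order data and the toy store are made up for illustration):

```python
import json

# The aggregate: an order plus its line items, one document, one key.
order = {
    "type": "order",
    "order_id": 7,
    "customer": "carol",
    "items": [
        {"sku": "A-1", "qty": 2},
        {"sku": "B-9", "qty": 1},
    ],
}

# A toy key-value store standing in for any aggregate-oriented system.
store = {}
store["order::7"] = json.dumps(order)

# Because the aggregate lives on a single key, distributing keys
# across nodes distributes the data, which is what lets this layer
# scale out horizontally without cross-node joins.
fetched = json.loads(store["order::7"])
```

The design trade is explicit: you give up ad hoc joins across aggregates in exchange for a unit of data that is trivially easy to partition.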
One was operational databases, and the other was decision-support stores, DSS-type database systems. If we look at the NoSQL space these days, that's generally true as well. We have a set of NoSQL systems, things like Couchbase and MongoDB, on the left-hand side as you see here. These, and especially us at Couchbase, really aim to be the database behind interactive web applications: always really fast, always very responsive. Even to the degree that sometimes we'll very quickly tell you, I can't get you the data right now, because we're overloaded or something like that; at least we're telling you that very fast, and I can talk more about that if anyone's interested. Then as you move right a little, you'll see things like Cassandra and HBase. In part not out of trying to be slow, but because of architectural choices that were made, and because they were built to work on top of certain existing technologies, they don't have the same capabilities with respect to flexibility or response time, so they live in the middle. They let you do lots of analytics-type things that are very hard to do with an operational database, and I can tell you where we are on that, and where Mongo is, and so forth, but they don't quite give you the kind of response time and throughput you'd expect from an operational data store.
Then on the far right, a friend of mine, Dan Templeton at Cloudera, had a good way of putting it with Hadoop. He likes to call Hadoop the big-data Petri dish: you just throw a bunch of stuff in it, and whatever grows and crawls out, that's sort of interesting. It was a really interesting way to put it. These are more analytic databases. That's not to say you couldn't use them for other things, but the design choices made in those systems tend to make them better for analytics. This would be Cloudera, Hortonworks, MapR, companies like that, mostly in the Hadoop space, though there are some others in that area. So, relating that back to polyglot: this is the polyglot persistence side of things, big data analysis. These systems are very good for things like log capture, and very good for running different algorithms over the data to generate recommendations, run ad campaigns, and understand user behavior. If you look at the Hadoop project, there are things like Mahout, which is a machine learning library, or I'm sorry, a set of machine learning algorithms, to be used with that large amount of data. Then, in most environments, we would argue, you'll also have something like a document or key-value system. This is where you put things like products and user profiles, things like game actions if that's your industry. Sessions are a really popular one; shopping carts are a really popular one. Maybe I have a long-running workflow and I need to keep the state of that workflow as the user moves through it. That's a really good use for these document and key-value systems.
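The session and shopping-cart use case is worth sketching, because it shows why key-value fits: one key per session, the whole workflow state as the value, and an expiry so abandoned sessions clean themselves up. This is a toy in-memory stand-in, with made-up names, not any real store's API:

```python
import json
import time

store = {}  # toy key-value store: key -> (expiry_timestamp, json_value)

def save_session(session_id, data, ttl_seconds=1800):
    """Keep workflow state (cart contents, wizard step) under one key,
    with a time-to-live so stale sessions expire on their own."""
    store["session::" + session_id] = (time.time() + ttl_seconds,
                                       json.dumps(data))

def load_session(session_id):
    """Return the session state, or None if missing or expired."""
    entry = store.get("session::" + session_id)
    if entry is None or entry[0] < time.time():
        return None
    return json.loads(entry[1])

save_session("abc123", {"cart": [{"sku": "A-1", "qty": 2}],
                        "step": "shipping"})
restored = load_session("abc123")
```

A single get and a single set per request is the whole access pattern, which is why this workload maps so naturally onto an operational key-value store.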
On the products one, as I think I mentioned earlier, companies like Orbitz move their content over there. They know the content's not going to be changing all the time, but they need very quick access to lots of it, and when I say content, I mean things like what they say about a particular hotel or a particular vacation package. And even if you're an Orbitz, or in many of these other environments, you're still likely going to have the RDBMS: you're going to have financial data, different reporting, you're most likely going to run an HR system of some sort, all those other systems that exist in that environment. And you'll probably want to move data across the three different environments as appropriate. Sorry, is this a question? Mm-hmm. Yeah, I'll repeat the question as best I can and answer it as best I can in a short period of time, which probably isn't good enough. I've had that problem before, and I've also seen other people implement solutions to it. So, to repeat the question: OK, great, I've got a big data system; I may have some sort of key-value system; I may also have content, more like videos and pictures, that the operational system is referencing; and then I have the RDBMS, and maybe the application is using a little bit of the key-value store and a little bit of the RDBMS. How do I actually back that whole thing up and recover it in a consistent way? So, a couple of answers. One is that in the app dev world, and I think there was a keynote on this the other day talking about Storm, I've seen this pattern in a couple of different places: people just go write-only, meaning they're only ever appending data. If you do that, you're always able to get back to a point in time using application-level views.
The other is that, honestly, in many cases the application has to learn enough to do repairs on the fly. I have seen that pattern a number of times, and I know it's a difficult one. You come up with the right backup cadence for each system, but your application has to be relatively resilient to the fact that things may go slightly off. It gets harder, for sure, when you start talking about, not that you would necessarily do this, but what if you were trying to do a transaction between this and this, something atomic between the two? Typically, the first thing is to try to avoid that as much as possible. If you can't, it's certainly possible, but you're going to have to pull a lot of that logic into your app. Sure, you can certainly do a file system snapshot, yeah. Source of record, yeah. And yeah, that's a good point. I have seen that pattern many times as well: flow the writes to something that is a well-known environment, but add these other layers on top. The only comment I'd make, and it's a challenge for that approach, is that sometimes you get into consistency issues. You're opening yourself up to either the write throughput of that system slowing you down, or you allow writes here and there and potentially have to deal with the resulting consistency issues. But anyway, we can certainly talk more about that at the end if it's interesting. The other thing is that if you have software developers, and I'm sure you do, software developers are always wanting to pick up new skills. Some of that is fashion, like we talked about, and some of it is that there really are new tools and new ways to fix the problem; we're always looking for better tools. So in that case, you want to understand the pros and cons of each solution.
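The write-only, append-everything pattern I mentioned can be sketched as an event log you replay to any point in time. This is a minimal, hypothetical version; real systems add snapshots and compaction so replay stays cheap:

```python
# Append-only event log: never update in place, only append.
events = []

def append(entity, field, value, ts):
    """Record that an entity's field took a value at time ts."""
    events.append({"ts": ts, "entity": entity,
                   "field": field, "value": value})

def state_at(entity, ts):
    """Rebuild an entity's state as of time ts by replaying events.
    This is the 'application-level view' that gives you point-in-time
    recovery for free, since old events are never destroyed."""
    state = {}
    for e in events:
        if e["entity"] == entity and e["ts"] <= ts:
            state[e["field"]] = e["value"]
    return state

append("user::1", "email", "old@example.com", ts=10)
append("user::1", "email", "new@example.com", ts=20)
```

The consistency story across multiple stores also gets simpler this way: since nothing is ever overwritten, each system can be rolled to the same timestamp independently.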
In, I guess, the mid to late 90s, there were maybe one or two games in town; for the most part it was what was happening with Java and Java EE in the late 90s and early 2000s. Since then there's been such a growth in open source and new languages, et cetera, and each of these tends to find a different way to solve things. They all have different APIs for accessing the data, different query languages, and so on. And denormalizing and duplicating data, in our current context, is not as much of a problem as it had been historically. That's especially true of the duplicate data, but in many ways also of the denormalization; it's not as much of a problem for the application to take on more of that work. There was a guy whose presentation I attended who had a really good way of putting it; he related it to his own background. When you first learn about certain kinds of systems, you say, oh, I can be more efficient if I use a trigger for this and a stored procedure for that, and so forth. Then you get to building your application on top of it, because you built this structured data model, and you find that it's really hard to test the thing. So you end up taking those back out, because you can put it all in an application layer that you can test. Even though it may not be the most efficient way, it's a way to do it. Yeah, sure. Actually, let me defer that one if I could. Yeah. So, I can certainly talk about denormalizing in the context of application updates; the question was about denormalizing and why it's not a problem. If you're developing new skills around new languages and new frameworks, you can also apply those to the particular problem you're trying to solve. I would argue these days, and I don't know if any of you agree, that if I were to build a CRUD app, it would be a Ruby on Rails app. It just makes it so simple.
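Here's what I mean by the application taking on the denormalization work, in a toy sketch (the blog-post data is made up): the author's name is duplicated into every post so a post renders with a single read, and the application owns the fix-up when that duplicated value changes.

```python
# Denormalized documents: author_name is copied into each post so
# rendering a post takes one read, with no JOIN against an authors table.
posts = {
    "post::1": {"title": "Hello", "author_id": "u1", "author_name": "Ada"},
    "post::2": {"title": "Again", "author_id": "u1", "author_name": "Ada"},
}

def rename_author(author_id, new_name):
    """The price of denormalizing: when the duplicated value changes,
    the application sweeps through and repairs every copy."""
    for doc in posts.values():
        if doc["author_id"] == author_id:
            doc["author_name"] = new_name

rename_author("u1", "Ada L.")
```

Renames are rare and reads are constant, which is why this trade usually pays off; and because the repair lives in application code, it's testable in a way a trigger buried in the database is not.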
And it may not actually be Rails; maybe it's a Spring Data app, which has the advantage of NoSQL support. Actually, Rails has something called Active Model, and at Couchbase we even have a couchbase-model library that lets you just define objects, and it automatically handles getting those to and from the data source for you. You can build that around whatever problem you're trying to solve. Note that this is, by the way, something you're probably already doing, going back to the operational RDBMS and the data warehouse, which was not something intended when the RDBMS was deployed, but grew out of need more than design. The other thing that has certainly been talked about over the years is the concept of building a data service layer; this is where service-oriented architectures come into play, et cetera. Frequently, once you have this polyglot persistence layer, you can build out different ways to access the data, and that really helps you. This diagram isn't very good, in that it doesn't even show a mobile device up there, which is one of the biggest growth areas in our industry these days and, in part, why at Couchbase we're investing quite a bit in Couchbase Lite and other things. But if we take this persistence data store and build out the services that are required, then we can expose those as needed to the user. So, to look at one example: this example is a content-driven site. If I'm running a content-driven site, I want to keep up with the changing needs of the users, so as users hit the site, I want to be able to make recommendations back to them.
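The Active Model / couchbase-model style of mapping can be sketched like this. To be clear, this is a toy Python version of the pattern, not any real library's API: you declare fields on a class, and save/find to and from the backing store come for free.

```python
import json

store = {}  # toy backing store standing in for the database

class Model:
    """Minimal sketch of the model-mapper pattern: subclasses
    declare their fields and inherit persistence."""
    fields = ()

    def __init__(self, key, **attrs):
        self.key = key
        for f in self.fields:
            setattr(self, f, attrs.get(f))

    def save(self):
        # Serialize the declared fields as one JSON document.
        store[self.key] = json.dumps(
            {f: getattr(self, f) for f in self.fields})

    @classmethod
    def find(cls, key):
        # Deserialize back into a typed object.
        return cls(key, **json.loads(store[key]))

class User(Model):
    fields = ("name", "email")

User("user::1", name="Ada", email="ada@example.com").save()
u = User.find("user::1")
```

Because the object and the document have the same shape, the mapping layer stays this thin; that's the contrast with an ORM shredding objects into rows.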
If you're ever on Amazon, Wayfair, any of those sites, that's a very common approach: they have to build that content for you very quickly, so they typically need some sort of operational store. That content-driven site is also generating a lot of data, and right now, in many environments, that data tends to go into log files, get compressed, and then sit there waiting for something more interesting to happen. But in some environments, and this is just talking about it in the context of Hadoop, those log files may be imported through something like a Flume flow; Flume is a tool used with Hadoop to bring data into the cluster. At the same time that data is going into those log files, there may also be data going into the operational data store. This is more or less a pattern, because there are a couple of deployments I know about that work exactly like this. They have information that's being constantly updated in Couchbase, or whatever the NoSQL store is, as well. In order to really generate user profile information, they need information from this, and from this, and, because it's an older application, there's still one of these. They need to pull all of that data together and process it, and Hadoop excels at that kind of complex analytics, which may require multiple steps where you put together chains of MapReduce jobs to really build a profile around a user. So I'm taking profile data from user behavior here, from user behavior there, and maybe user profile information there, for example. After all that's put together, I've got to get it back to a place where I can make use of it.
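That whole feedback loop, raw behavior logs crunched in batch and pushed back to the operational store, can be sketched end to end. This is a toy stand-in for the Flume-to-Hadoop-to-operational-store flow, with invented log lines and key names:

```python
import json
from collections import Counter

# 1. Raw click-stream lines, as they'd land in the log files.
log_lines = [
    "u1 view sku:A-1",
    "u1 view sku:B-9",
    "u2 view sku:A-1",
    "u1 buy sku:A-1",
]

# 2. The batch analysis (what the MapReduce chain would do at scale):
# count, per user, which products they interacted with most.
profiles = {}
for line in log_lines:
    user, action, sku = line.split()
    profiles.setdefault(user, Counter())[sku] += 1

# 3. Push the result back into the operational store so the
# content-driven site can use it on the very next page request.
operational_store = {}
for user, counts in profiles.items():
    top = counts.most_common(1)[0][0]
    operational_store["profile::" + user] = json.dumps({"top_sku": top})
```

The shape is the important part: logs flow one way into the analytic system, and only a small, pre-digested result flows back into the fast path.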
Getting it back to a place where I can make use of it is really a matter of pushing it back over to something like my operational store, represented by a Couchbase cluster in this particular case. So that's a really common use case. Another use case — this is a real, live mobile services deployment in Europe. What they have is a lot of legacy around user registration: setting up user profiles for information you want to be able to access from your mobile device. In this particular case, it's almost like they're selling different kinds of application services that are then accessed through the mobile device. As a subscriber, you set that up through whatever the web interface is — or maybe you're actually talking to an operator who's adding a service to your account. After that service gets added to your account, it goes through a process to get to the PIM database — that's personal information management. This is mostly a legacy application; it has things like product information, the different things you can subscribe to. The actual delivery of that application's product data — so you've now got an app on the device that you provisioned through the telco — comes through this other path over here. The product data, and the additional metadata needed to deliver it, comes from here, but the provisioning side starts at that layer. The thing they have in common is some sort of web app server tier: in one case it's really just doing services, in the other it's doing services and presentation. But again, it's another scenario where we've seen people extend their existing assets to the next level. So, now what? There's lots of head nodding — you're all in agreement.
What happens on the app side? How do we handle this from the app side? This talk is also about polyglot on the app side. Sometimes we have no choice, and we're forced into adopting new technologies — we may not be able to keep that existing Java app dev shop as-is, and new languages get forced on us. (And I see I'm that close to the end, so I've got to move faster.) But we also have lots and lots of options. I referred to this earlier: it's interesting that Java EE 5 is still mainstream — that came out in 2006, so the platform hasn't evolved very quickly. But we do have things like iOS and Objective-C, and lots of new languages like Groovy that many app developers want to adopt. Some very quick examples — and it looks like I'm not going to have time for a demo. One is that people want things to be more, again, real time — so, WebSockets. We have a few options, right? We could wait for Java EE 7 to come out and then do WebSockets — who's going to do that? We could hack our app server and add support with continuations, or we could go use Socket.io and Node.js, which is available pretty much right now — "available" being relative, I guess. Then on the data collection side, how do we handle that? Well, we could grab a library, we could create a library, or we could wait for Java 8 and lambdas to handle that data collection. Or we can use Scala, which is available now, runs on the JVM, and does quite well for us. One other super quick example: a CRUD application. We could use the Java EE stack, we could use Spring — that might be one place I would go these days — or we could use things like Play, Grails, and Rails.
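On that data-collection point: this is roughly what the "wait for Java 8 and lambdas" option eventually looked like once lambdas shipped — at the time of the talk, Scala offered the same style on the JVM. A small sketch with made-up event strings and method names:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// The kind of collection processing the talk describes: filter raw events
// and group them by key in one pass. Before lambdas this meant hand-written
// loops or a third-party library; Scala had this style years earlier.
public class EventRollup {
    public static Map<String, Long> countBySource(List<String> events) {
        return events.stream()
                .filter(e -> !e.startsWith("heartbeat"))   // drop noise events
                .map(e -> e.split(":")[0])                 // keep the source name
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()));
    }
}
```

The equivalent Scala one-liner was already available when this talk was given, which is exactly the speaker's point about not waiting for the platform.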
So how do you choose? Maturity, supportability, feature set, learning curve — how quickly can I be productive with it? That's really where I was trying to drive in general: as a polyglot, both on the application side and on the persistence side, you should consider these things as you decide how to build. (And I see I have about twelve minutes, so I'll wrap up fairly quickly — apologies for the confusion about the end time.) So, polyglot programming in action — a few things you'll have to look at: managing distributed processes, and data and caching are something you'll likely have to look into. I won't spend a lot of time on architecture, but if you're interested in hearing how we do some of this, I'd be happy to walk through it. Another use case was second-screen gaming; we talked a little bit about that. In conclusion, before we get into Q&A: do try to use the best tool for the job. Don't try to hack something in when you may have a solution that's more productive for you. Just because you have an investment in a particular persistence store, don't necessarily build everything on that store — and the same thing on the language side. This is something we should be doing as professionals: learning should be part of our job, and sometimes we don't take that into account. The world changes and new technologies come up, so this will help you in your project.
And sometimes you may want to compare what it would take to maintain a hacked-in solution versus building on technologies that are designed for the job. So with that — Q&A. Any questions? I know we took several during the presentation. Certainly — or I can give you my email and I'll send it to you directly. The slides have already gone to the conference organizers, so I think they'll be posting them. If not, my contact information is here somewhere — matt at couchbase.com, fairly easy to remember. And I'd be happy to stick around and answer any additional questions, like that really complicated one: how do I back all this stuff up consistently? Thank you very much.