We're going to talk very briefly about how we got here. Just as a reminder, I assume most people in the room are familiar with NoSQL and how it came about, but we'll touch on that briefly. I'm going to talk a little bit about where I think we are today in terms of the state of the technology. Then I'm going to talk about what I think is important when evaluating different technologies in the NoSQL space, and how the Oracle NoSQL Database tries to respond to those requirements. And I'll talk about some specific use cases for the Oracle NoSQL Database as well. So without further ado, let's talk a little bit about how we got here. Relational database technology has been the linchpin, the cornerstone, of data management and data storage over the last 20 or 30 years. But about 10 or 15 years ago, a bunch of companies said, you know, there are certain problems that relational databases don't do terribly well. In particular, if I have applications that need very flexible schemas, or need to be highly available as opposed to highly consistent, or need access to very simple data really, really quickly, these are things a relational database can do but really wasn't designed to do. Those companies went out and started a bunch of projects, both open source and internal, and gave us what falls under the umbrella of big data today: a combination of Hadoop as a framework for parallel processing, and NoSQL technology, which is essentially distributed, replicated databases for simple data types. But then the argument started: well, what kinds of applications are these good for? And I think some of the original authors of the technology had a really good response. They said, we're not trying to replace relational databases, and we're not trying to move those applications away from relational, because those applications work really well within a relational database context. What we're really trying to address is applications with requirements for very simple data management, which we can do more efficiently with this technology, that need availability more than they need consistency. The data always has to be available, I always have to be able to access it, and I need it very quickly. The types of applications we're building this technology for are things like distributed web-scale transaction processing, real-time event processing, managing millions of cellular phones, understanding what their content is, where their location is, what their authentication parameters are, as well as things like sensor data capture for manufacturing, utilities, and other areas. These are the applications that NoSQL was built for. I talked to a customer earlier this week who said, well, I have this relational application, and I want to move it over to NoSQL, because I've heard NoSQL is really cool. And the question I asked the customer was, what's your application trying to do? It turns out their application is a star schema, it makes sense that it's using SQL, and it has to be accessed from multiple applications. And the response to the customer was, well, you could move it over there, but you'd essentially be abandoning all the features that you actually need and that are good in the relational database.
What you need to be thinking about is, what kinds of applications do you have in this class of things that could benefit from NoSQL technology? So, a little bit about where I think we are today. The pioneers in NoSQL who experimented with the technology and came out with the first set of NoSQL databases paved the way for understanding what NoSQL does and what problems it tries to solve: here's a particular problem that we solved in our company using that technology. So I would say in the mid to late 2000s, it was all about figuring out what this technology is good for. And from there we've moved, as evidenced by this conference, from experimentation to deployment. We have enterprises today looking at NoSQL technology and saying, OK, I get it. There's value here. It's not going away. But how do I use it to deploy real-world applications? Where does it fit? Where does it not fit? Where we are today is real-world deployment of specific applications that take advantage of NoSQL technology. Now, Oracle looks at this landscape and says, we're about data management. We're not a relational database company, we're a data management company. So as part of our strategy, Oracle says, we've got the premier relational database as the cornerstone of our relational data management systems, but we also need to invest in and provide customer solutions in the non-relational space. Our data management capabilities span the spectrum of both relational and non-relational technologies, which brings us to where we are today. The Oracle NoSQL Database product was launched two years ago, in 2011. It's been steadily gaining customers. In a nutshell, it's a non-relational database system designed for very low latency, very high volume access to simple data. At a very high level, we look like the other 15 or 20 vendors in the room across the hall; we're all providing products that essentially do this. Peeling back the onion and looking at the next layer of what's in the Oracle NoSQL Database: I could tell you it's a key-value database, I could tell you it's client-server, I could tell you it's designed with these particular capabilities and facilities built into it. Essentially, it has an intelligent driver that understands how to do query distribution, data distribution, and query load balancing, and understands what the state of the storage topology looks like. That driver links into your application. It's a JAR file. And as the topology changes, it gets notified of those changes. So the driver knows how to send requests for a particular key-value pair to the right storage node based on the information it has about the topology. The product is smart enough that if you tell it you have multiple data centers, it will automatically make sure it allocates a full set of replicas, a full set of data, to each one of those data centers. Now, which data center your application request gets sent to is determined by the driver and the application. So what we've tried to do is simplify the data management and data storage aspects of NoSQL and put smarts in the driver so that the application doesn't have to be topology, latency, or data center aware. It simply goes to the right place. We've also tried to make sure that administration is very simple, especially as you scale up the number of nodes in your system.
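To make that concrete, here's a minimal sketch of what working through that driver looks like from application code. It assumes the standard oracle.kv Java API (KVStoreFactory, KVStoreConfig, Key, Value); the store name, host:port values, and key paths are placeholders for illustration, not details from the talk.

```java
import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class DriverSketch {
    public static void main(String[] args) {
        // The driver (a JAR linked into the application) is handed one or more
        // helper hosts; from there it learns and tracks the store topology.
        KVStoreConfig config =
            new KVStoreConfig("kvstore", "node01:5000", "node02:5000");
        KVStore store = KVStoreFactory.getStore(config);

        // Write a key-value pair; the driver routes it to the right storage node.
        Key key = Key.createKey(Arrays.asList("phone", "device-42"));
        store.put(key, Value.createValue("{\"status\":\"active\"}".getBytes()));

        // Read it back; again the driver picks the node, the app stays topology-unaware.
        ValueVersion vv = store.get(key);
        if (vv != null) {
            System.out.println(new String(vv.getValue().getValue()));
        }
        store.close();
    }
}
```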
The Oracle NoSQL Database supports a very flexible data format. We support key-value pairs, but those values can be opaque objects, JSON objects, or RDF triples. One of the things that's very important, which we'll talk about in a few slides, is not just performance but predictable performance, and I'll talk in a few minutes about why I think that's important. And it supports ACID transactions as well as BASE-style transactions at scale. So these are the general capabilities in the product. At a high level it looks pretty similar to some of the other products in the other room. But what I want to talk about more is what is different about the Oracle NoSQL Database, and why an enterprise would be interested in this technology in particular as it considers moving applications into production that provide mission-critical, real-world solutions, as opposed to experimenting with the technology to see what it can do. And that brings me to the question of how you evaluate a NoSQL product. We're a bunch of techies, and so naturally our inclination is to say, oh, this is a key-value database versus a document database versus a graph database versus a columnar database. These are convenient buckets we can put products in. And I guess my first point is that really doesn't matter a whole lot, because over time that's all going to merge. For any of you in the room who might be old enough to remember, the argument back in the early 80s was, are you storing things in raw files or on the operating system's file system? Are you using QUEL or are you using SQL? There were all kinds of discussions about how you implemented your storage engine. Ten or fifteen years later, none of it mattered. Fifteen years ago, we were arguing about whether native XML databases were better than XML stored inside a relational database, for example. How clean is your XML implementation? Didn't matter. That issue's been settled. So I would say if you're evaluating products and making your decisions based on what kind of storage model the NoSQL product has, I can tell you that's not going to be relevant two or three years from now. It's important to look at the features and the performance of the products, but those are going to change. There are 20 of us in the room across the hall, and there are 20 other NoSQL providers that are not in that room. Over the next two or three years, we're all going to leapfrog each other. We're going to publish a YCSB number that blows everybody away, and then somebody else is going to publish an even better YCSB number. Somebody else is going to come out with the cool feature that does X, Y, Z, and that'll be cool for this conference, and then at the next conference the next vendor will come out with something. All of these feature and performance nits are interesting, and if they're important to your application, you should certainly consider them. But I don't think they're the long-term reason why you pick a product, because they change. They will absolutely change over the next two to three years, and we'll leapfrog each other. What I think is enduring and important to consider when picking a NoSQL product are the bottom three. And the reason is that enterprises have a class of problems and challenges that aren't about features and performance. They're about how the solution integrates into the rest of my IT infrastructure.
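As an aside on the ACID versus BASE point: in the oracle.kv API that trade-off is expressed per store, or per operation, through Durability and Consistency policies. Here's a hedged sketch; the specific policy choices and store settings are illustrative assumptions, not a recommendation from the talk.

```java
import java.util.concurrent.TimeUnit;
import oracle.kv.Consistency;
import oracle.kv.Durability;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;

public class ConsistencySketch {
    public static void main(String[] args) {
        KVStoreConfig config = new KVStoreConfig("kvstore", "node01:5000");

        // ACID-leaning defaults: synchronous commit on the master, acknowledged by a
        // simple majority of replicas; reads always reflect the master's state.
        config.setDurability(new Durability(
                Durability.SyncPolicy.SYNC,            // master sync policy
                Durability.SyncPolicy.WRITE_NO_SYNC,   // replica sync policy
                Durability.ReplicaAckPolicy.SIMPLE_MAJORITY));
        config.setConsistency(Consistency.ABSOLUTE);

        KVStore store = KVStoreFactory.getStore(config);

        // BASE-style, per operation: relax durability and consistency for speed.
        Key key = Key.createKey("sensor-17");
        store.put(key, Value.createValue("42.1".getBytes()),
                  null, Durability.COMMIT_NO_SYNC, 500, TimeUnit.MILLISECONDS);
        store.get(key, Consistency.NONE_REQUIRED, 500, TimeUnit.MILLISECONDS);

        store.close();
    }
}
```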
And the thing is, NoSQL solutions are not being deployed as silos. You don't want a set of data that none of your other applications can access. You want a set of data that is useful for your NoSQL application and that the rest of your IT infrastructure can also access, and we'll talk more about that in detail. But this is a long-term, repetitive cost. If I choose a technology that doesn't consider integration as paramount, then I'm going to struggle with it every single time I try to integrate that data into the rest of my IT infrastructure, and I'm going to do it over and over and over again. Reliability and support: hey, I'm a techie. I love grabbing the latest version of product X from the open-source repository, and I love playing with it. But those are experiments. That's not the same thing as my IT department saying, I need to deploy this by September 1st, and I have half a million customers that depend on the reliability of that product. Enterprises will tell you experimentation is great, but that's not what we're about. Experimentation may be fine for companies that have large IT organizations and research departments that do proofs of concept to see what the technology can do. But if you're talking to an enterprise, they've got deadlines and they've got reliability requirements. Once it's launched, it has to be reliable. That's the difference between experimental products, early products that haven't seen production-level requirements or deployments, and products that have a clear product direction and a clear product focus. The third area is predictability. It's interesting, looking at some of the NoSQL products, that a lot of the time they'll run really, really fast, but not terribly predictably. There's a real problem in production for enterprises that have to say, I have to get a response time within half a second, and I need 96% to 99% compliance on the SLA. If I choose a technology that's really, really fast but not predictable, then every time I change what that application or deployment does, I'm putting my production-level requirements and SLAs at risk. These are the three things that are long-term factors and have a much higher value in my mind than simply performance and features. You can do feeds and speeds, but as soon as you make a decision based only on feeds and speeds, the next vendor is going to come out with better speeds, better feeds, better features. So when you're looking, as an enterprise, at which product to pick, and there are at least 40 NoSQL technology vendors, how do you pick one? Think about how the feeds and speeds affect your application, but moreover, think about how the choice affects the long-term investment in your IT infrastructure and your production environment. So if I believe that, then our product should reflect it. Let's talk for a few minutes about how Oracle NoSQL tries to address these issues. The first is that we're part of a large infrastructure, part of an ecosystem of technology. So the Oracle NoSQL Database, out of the box, needs to integrate. It needs to integrate with things like Hadoop, because that's part of the big data processing environment we live in, and you want MapReduce jobs to be able to transparently access data that's in the NoSQL database.
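For the Hadoop side of that integration, releases of the product have shipped a Hadoop InputFormat that exposes store records directly to MapReduce jobs. The sketch below assumes the oracle.kv.hadoop.KVInputFormat class, its static store/helper-host setters, and a Text key/value presentation; treat those details, along with the store name and hosts, as assumptions to verify against your release rather than a definitive recipe.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import oracle.kv.hadoop.KVInputFormat;

public class CountRecords {

    // Each map input record is one key-value pair read straight from the store.
    public static class KVMapper extends Mapper<Text, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(Text key, Text value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text("records"), ONE);   // trivial count, just to show access
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "count-nosql-records");
        job.setJarByClass(CountRecords.class);
        job.setMapperClass(KVMapper.class);
        job.setInputFormatClass(KVInputFormat.class);
        // Hypothetical store name and helper hosts.
        KVInputFormat.setKVStoreName("kvstore");
        KVInputFormat.setKVHelperHosts(new String[] {"node01:5000"});
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        job.waitForCompletion(true);
    }
}
```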
You want to be able to use the NoSQL database, for example, for large RDF, spatial, and graph triple stores, and be able to issue SPARQL queries against it. And you absolutely want to be able to integrate the data you've got in your NoSQL database with the data you've got in your Oracle Database. Out of the box, we have three different integration points with the Oracle Database. You can load data via the Oracle Loader for Hadoop, so you can bulk load data from the NoSQL database into the Oracle Database if you need to. You can use Oracle external tables to issue SQL queries in the Oracle Database that access data stored in NoSQL. And in 12c, you can write Java stored procedures in the Oracle Database that directly access data stored in NoSQL, so you can dynamically pull it back for a given query as well. That's our integration story at the back end: how we access and move data around, and what we do at the storage level. But more important, when I talk to customers, is what we do at the top end, at the application integration level. I've had a lot of customers come up to us and say, I'm a Fusion Middleware customer. I use Oracle Coherence. For those in the room who don't know what Oracle Coherence is, it's an in-memory data grid. Very popular, very fast. You talk to application developers in many corporations, and they say, oh yeah, we use Coherence, we're really comfortable with Coherence, we manage all of our application objects in Coherence. It does a great job at managing that cache. Wouldn't it be nice if I could use Coherence to manage NoSQL data as well? So one of the things we support is integration with Coherence out of the box. You can manage both objects in the NoSQL database and objects in the Oracle Database from a common interface, using a tool set and a package that the application developers are familiar and comfortable with. We had the same kind of conversation with customers who were building event processing systems: look, I've built all this event processing logic. The logic is, here's an event, here's a set of rules that have to get executed, and those rules today access the Oracle Database. Well, what if not all of my data is in the Oracle Database? What if some of my data is out in NoSQL? I don't want to change my application, but I do want to extend it to access data that's not in the Oracle Database. One of the integration points we have is with the event processing module, so that the same rule that accesses data in Oracle can access data in NoSQL. In terms of reliability, enterprise applications and enterprise-class data storage are what Oracle is about. We don't necessarily release the newest, latest, greatest, most at-risk software, but you can be confident that what we do release works. And we've been doing that for decades. So if you wanted to talk to a vendor about what kind of experience they have building and deploying enterprise-class software and solutions, you probably couldn't come to a vendor with broader and deeper experience than Oracle. And the technology we're using inside the NoSQL database is based on Berkeley DB, which has been running production systems for over 15 years. Amazon uses it, LinkedIn uses it, Cisco uses it. So we're using a storage technology that is fundamentally used by lots of large, mission-critical production applications.
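The product ships its own Coherence connector, so the following is only a conceptual sketch of what sitting the NoSQL store behind a Coherence cache looks like: a custom CacheStore backed by the oracle.kv API. The KVCacheStore class name and the store/host settings are hypothetical; the com.tangosol.net.cache.CacheStore interface and the oracle.kv calls are the standard published ones.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import com.tangosol.net.cache.CacheStore;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

/** Illustrative CacheStore that reads through / writes through to Oracle NoSQL Database. */
public class KVCacheStore implements CacheStore {

    private final KVStore store =
        KVStoreFactory.getStore(new KVStoreConfig("kvstore", "node01:5000"));

    @Override
    public Object load(Object key) {
        // Cache miss in Coherence: fetch the value from the NoSQL store.
        ValueVersion vv = store.get(Key.createKey((String) key));
        return vv == null ? null : new String(vv.getValue().getValue());
    }

    @Override
    public void store(Object key, Object value) {
        // Write-through: persist the cached object into the NoSQL store.
        store.put(Key.createKey((String) key),
                  Value.createValue(((String) value).getBytes()));
    }

    @Override
    public void erase(Object key) {
        store.delete(Key.createKey((String) key));
    }

    // The bulk variants simply delegate to the single-entry methods in this sketch.
    @Override
    public Map loadAll(Collection keys) {
        Map<Object, Object> result = new HashMap<>();
        for (Object k : keys) {
            Object v = load(k);
            if (v != null) result.put(k, v);
        }
        return result;
    }
    @Override public void storeAll(Map entries) { entries.forEach(this::store); }
    @Override public void eraseAll(Collection keys) { keys.forEach(this::erase); }
}
```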
And then finally, everybody knows that we have Oracle support for the Enterprise Edition of Oracle NoSQL Database. What Andy Mendelson announced yesterday is that we now offer Oracle support for the Community Edition as well, which allows enterprises to say, I'm going to start with the Community Edition, the free version of the product, and I can get support on it on a per-server, per-year basis. Then if I need the features that are in Enterprise Edition, I can upgrade. But I don't have to buy Enterprise Edition up front. And the last topic I wanted to talk about, so we've covered reliability and integration, is predictability. This was an interesting experiment we did. We took our NoSQL database and another NoSQL product, put them on the same hardware, ran a YCSB load, and basically said, let's do a bulk load. Let's load a bunch of data into the database and see what happens. Now, we were pretty confident we'd be faster, and the good news is we were. But more importantly, and this was the thing that got me excited, for the first 20 or 30 minutes of the test the other product looked like this. It was constantly doing whatever it was doing in the background in order to handle that ingestion of data. Now, imagine you're an IT director, and you're told you have to load a new set of customers. I've got 50,000 new customers coming online, and you've got to add them to the database by 5 o'clock today. If you're running a production system and you try to load that data and it produces this kind of performance disruption, that's a huge problem. What you want is a product where you can say, I can go ahead and load that, and the impact on my production queries is minimal. I'm not going to see this variability. That's part of what we do; that's part of how the product is engineered. Let's look at another scenario. In this case, the question was: take a fairly large cluster of 144 nodes. Christmas is coming up, so I want to add 50% more capacity to my system. I want to go from 144 nodes to 216 nodes, adding 50% more disks, and I'm just going to tell the system, go do this. Expand yourself. Which means that in the background we're going to do two things: add the hardware to the system, and move data from the old nodes and distribute it evenly across the complete cluster of 216 servers. What's the impact on a running production system? If I've got YCSB running in the background, how big an impact is that to the queries running while I do this? And the answer, after some investigation, was that the impact during the first 20% of the migration was less than 10%. We saw less than a 10% decrease in throughput and less than a 10% increase in latency. By the time we got to 40% of the migration complete, we actually started to see more throughput and lower latency than when we started out. And by the time we got to the full system, we were seeing significantly lower latency with significantly higher throughput, which is exactly what you want to see.
If you're an IT manager and you've got a cluster, and somebody says, look, we've got a big event coming up, Christmas, or a big data load, and you need to increase the size of the cluster, you don't want to do that with the fear that your existing throughput, your existing customer performance, is going to be impacted. And what we did in this last bar here was say, OK, now we have 50% more hardware but the same number of client threads; what happens if we increase the amount of client processing by 50%? When we increased the number of client threads accessing the database, it did exactly what we expected, which was to give us 50% more throughput at the same level of latency. That's the production graph an IT department wants to see. They want to know: I can add hardware, it's not going to affect my existing system, and at the end of the process of adding this hardware and redistributing the data, I actually get the performance boost and decreased latency I expected. To spell out the difference: we started out with 360 client threads. Here we have 50% more hardware but the same 360 client threads accessing the system, and for that same client load we saw decreased latency and increased throughput. But the question is, I've got 50% more hardware, how much more can I push through the system? So we increased the number of client threads from 360 to 540, increasing it by 50%, and lo and behold, we got what we expected: 50% more throughput at the same latency we started out with. And that's what predictability means. If I do this, it's not an experiment to find out how the system is going to behave. This is not the time to experiment. This is: I'm going to do this, and the result of this change in topology and capacity is going to deliver exactly what I expected. Because tomorrow morning, when those customers come online, I don't want to find out that, in fact, it didn't deliver what I expected. Yes. So the question is, why didn't we increase the number of client threads to begin with? We could have done it either way. It was a question of how much time we had to work on the test, and of wanting to separate the two effects. If anybody wants to try this at home, you're welcome to; I'd be interested in the results. So, think about Oracle NoSQL Database. I'm going to be mindful of the time; I'll take questions at the end. If you're interested in details about the product itself, we can certainly talk about those at the booth. We're in booth number eight. But think about Oracle NoSQL Database: I'm happy to have you compare features and performance. Stacking us up against product X or Y in terms of features and performance is an exercise we welcome. Talk to us about what you need, what we might not have, what we do have. Talk about what performance you see or don't see. We're happy to talk about that. But I encourage people to think about the long-term costs of picking a technology partner in terms of integration, reliability and support, and predictability. One of the things I said at the beginning of the talk was, we've moved out of the experimentation phase. We're not trying to experiment with NoSQL to figure out what it does. What we're actually starting to see is an understanding of the types of applications that are most commonly associated with NoSQL. So a year or two ago, I would have stood up here in front of you and said, NoSQL could do this, and NoSQL could do that.
And NoSQL could do the other thing. They were all potentials. It was a question mark in most of our minds what kinds of applications would actually benefit from NoSQL. I think what's different this year is that after talking to customers and working with customers to go into production, we have a much better grasp on the specific types of applications that benefit from NoSQL data management technology. In particular, what we've seen, and we've tried to bucket them together, is web-scale transaction processing, which, when you think about it, is high volume writes. High volume writes with guaranteed durability. And these are some of the applications that do that. The second area where we've seen a lot of growth in customer applications is what we're calling web-scale personalization. Think of it as large volume, large scale reads: essentially putting different types of data together in a single repository and providing that information either to in-house or external customers and users. The types of applications in this space are often things like product recommendations and profile management. We have several customers working on 360-degree profile management, customer-view kinds of applications. The third area where we've gotten a lot of traction is real-time event processing. Think of that as: I have a rule, and I need to read some data. It's processing incoming events and doing low latency reads of historic context in order to process each event. The types of applications we've seen are medical monitoring, and we just finished a POC with a very large manufacturer who gets lots of sensor data and wants to look at that sensor data as it comes in, plus utilities, and so on. Real briefly, here are some specific use cases of how we're seeing NoSQL get used in the field. And again, these aren't "it could be used for this." These are real applications, not potentialities; these are what customers are finding. So here's a good example of an application that was focused on product recommendations. They wanted upsell and cross-sell. The most important thing for them was that when a customer called their customer service center, they had a set of recommendations they could make to that customer with high attach rates. And the way they did this was they had the customer profiles in a NoSQL database for scalability and distribution, and they had a Hadoop process that would crank out the analysis and the product recommendations and then push those into NoSQL. We had a customer doing something very similar, in the sense that they already had a bunch of installed systems with specific information about customers, but they didn't have anything with a consolidated view of all of the activity for a given customer. I've talked to customers in the entertainment business as well as the insurance and financial business. Insurance companies are a great example of this. They have 50 different products to sell you: home insurance, car insurance, fire insurance. You name it, they have it, and they can sell you each type of policy. And those policies are typically managed by different specific systems. What they don't have is a 360-degree view of that customer. So what they're trying to do is understand what the customer is doing with the product, and they're using NoSQL to drive that.
And we're seeing an emerging trend of customers who are combining both use cases: how do I bring in, from my legacy systems, a consolidated view of the customer, and then how do I build recommendations that get pushed into that consolidated view? This is the problem we're talking about with real customers. This isn't a potentiality. This is: OK, we figured it out. This is a consolidated view of the customer. The legacy systems stay in place because they work just fine, but we're combining what we know about the customer with what we want to tell them. How do you build something like that in the Oracle NoSQL Database? Essentially, you think of a customer profile as an evolving set of data characteristics. Your primary identifier for the customer is clearly the customer ID. Today you might have the profile information about that customer, the web traffic, the purchase history. But what you have in the future might be completely different; you might have their interests, if they provide them. The basic idea is that you want a system that allows you to identify what the major key is, the major identifier or access point for the data, and then a flexible set of characteristics around that major key. You want a system that provides fast access to all of that, which is what NoSQL does. And you want it to evolve. You don't know what kind of data you'll be capturing in the future, so you need the system to evolve as your application and business evolve. Sorry, five minutes? Good. The third use case I was going to talk about is real-time event processing. If any of you have built these systems, you'll understand that as events come in, you probably have an event buffer of recent history that you can access information from. But what a lot of these systems are doing is adding more and more context to the set of rules and information they use to evaluate each event. And they need low latency access to a large, ever-growing, and changing repository of contextual data. So we have a customer doing credit card authorizations. Essentially, you run your credit card through the machine, and they have to look up that particular credit card, the history on that card, your history, the history for that store, the history for the region. Based on that set of rules and that set of information, they're going to say this is either fraudulent or not fraudulent. It's a point-of-sale transaction; they have to authorize it within half a second. The amount of information they want to access is growing exponentially and getting richer; the more information they have, the smarter they can be about it. So they're using a combination of Oracle Coherence for some of the fast profile lookups and rules management, as well as the Oracle NoSQL Database for historic and contextual information. Why is this important? Well, you might have a credit card where a $1,000 or $2,000 purchase is normal. But you're buying $2,000 worth of products at a 7-Eleven? Probably not a good thing. So understanding the context not only of the credit card, but the context of the vendor, the location, the time of day, a series of background information, provides a much richer fraud detection system. But it requires low latency access to a growing repository of historical data.
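To illustrate the major key / flexible characteristics idea described above for the customer profile use case, here's a hedged sketch using the oracle.kv key-path API. The key paths, record contents, and store settings are made up for illustration; the Key.createKey major/minor path form and the multiGet call are the standard published API.

```java
import java.util.Arrays;
import java.util.SortedMap;
import oracle.kv.Depth;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class CustomerProfileSketch {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "node01:5000"));

        // Major key: the customer ID. Minor keys: whatever characteristics you
        // happen to track today; new ones can be added as the business evolves.
        String custId = "cust-1001";
        store.put(Key.createKey(Arrays.asList("customer", custId),
                                Arrays.asList("profile")),
                  Value.createValue("{\"name\":\"Pat\",\"tier\":\"gold\"}".getBytes()));
        store.put(Key.createKey(Arrays.asList("customer", custId),
                                Arrays.asList("purchases", "2013-09")),
                  Value.createValue("{\"orders\":3}".getBytes()));
        store.put(Key.createKey(Arrays.asList("customer", custId),
                                Arrays.asList("recommendations")),
                  Value.createValue("[\"sku-17\",\"sku-42\"]".getBytes()));

        // One call brings back everything known about this customer: records that
        // share the major key live on the same shard, so this is a fast, local read.
        SortedMap<Key, ValueVersion> all = store.multiGet(
            Key.createKey(Arrays.asList("customer", custId)),
            null, Depth.PARENT_AND_DESCENDANTS);
        all.forEach((k, vv) ->
            System.out.println(k.getMinorPath() + " = "
                               + new String(vv.getValue().getValue())));
        store.close();
    }
}
```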
This is my hardware plug slide. When you're looking at Oracle NoSQL Database, you have a choice. You can deploy it on your own do-it-yourself commodity cluster, or you can look to Oracle to provide a pre-engineered system. This was one of the things Andy Mendelson announced yesterday: our Big Data Appliance, our pre-engineered system for big data, has been configured so that you can make it a NoSQL appliance. The Big Data Appliance starter rack is a six-server, dual-socket system in a box with high-speed InfiniBand connectivity between it and other devices and clusters, and it provides a huge boost and a great platform for NoSQL. What Andy announced yesterday is that it's the first system announced that can be used as a NoSQL appliance. So, in summary, if you're thinking about choosing NoSQL technology, think about the long-term costs, and think about Oracle NoSQL Database in the context of: what are my integration needs, what are my reliability and support needs, and what are my predictability needs. Don't think about it in terms of, hmm, this is an interesting experiment. Think about it in terms of the applications that can take advantage of this technology, because we're done experimenting. Customers are now looking at how to deploy this in applications that benefit their business. So think about it in terms of: I'm building an application that fits this web-scale transaction processing, web-scale personalization, or real-time event processing paradigm, and I'm looking for use cases that look very much like what you saw in the preceding slides. And I guarantee you'll find both the short-term value in terms of performance and features, and the long-term value of integration, reliability, and predictability. And that's it. Do we have time for questions, or should we follow up outside? Dave.Seglo at oracle.com. And if anybody wants to send me the million-dollar lottery winnings, I get enough of those already, but I could always use more. Oh, by the way, thank you. I appreciate everybody coming in this morning and sitting through the talk. Hopefully it was useful. In the back we have some Ironman posters. You're welcome to pick them up. If we run out, there are certainly more at the booth, especially for those of you with kids. Show of hands, who has kids? Big hit with the kids, right? Going home with the Ironman poster, you scored. Thanks. Any other questions? Yes. So, it is a Java-based technology. It does run inside the JVM. Yes, we have garbage collection issues. We spend an inordinate amount of engineering resources balancing garbage collection requirements, the frequency of garbage collection, how big our objects are, whether our objects are in the JVM cache or in a separate cache, and how often we age them out. So one of the things you get is that we understand the problem, and we dedicate the engineering resources to make sure the problem is manageable.
And that's why you see that predictability graph where we load huge amounts of data and you see very little change in throughput or response time: because we've spent that time with the JVM and tuning the database to make sure we're doing small garbage collection operations frequently, rather than a huge one every five minutes, because the huge one immediately grinds that particular storage node to a halt. That's the aspect of predictability I think is important: understanding that, yes, we're Java based, these are the predictability elements we need to work with, and we do spend a lot of time on it. We just spent a huge amount of time looking at the size of the objects we manage in memory, because anything we can do to decrease the size of an object increases the efficiency of the JVM and of garbage collection, and allows us to keep more objects in memory that don't need to get flushed. It's a challenging problem, but one that I think we care about deeply. So the question is, can you deploy the database and the application in a single JVM? And the answer is, you could. Most customers aren't interested in that particular model because it affects predictability of performance. You probably want these things to be in separate JVMs on separate machines, so that you're in more control of what the behavior is going to be and you don't have one affecting the other. We get the same question about whether you can deploy the NoSQL database on a VM, and we say sure, there's nothing in the product that prevents you from using VMs as opposed to real hardware. The challenge with a VM is that response times can be less predictable, given what the different processes inside the VM are doing. So it's a question not of whether the technology can do it; it's what the requirements of the application in production are, and whether it makes sense. It may make financial sense. Does it make SLA and production sense? Yes. So in 12c, the JVM is in the database, so you can make Java calls from inside the Oracle Database, and there is an API, a get/put API, that you can use to access the data in NoSQL. There are also iterators that allow you to bring back multiple records or scan the entire store. Yes. No, you can; we have customers who are very interested in using Fusion Middleware because they have existing apps that do, but we have other customers that are just writing applications in Java or in C, outside of Fusion Middleware. For customers who have an existing investment in Fusion Middleware, though, this is a great combination: I can use the same application and the same platform to access data in both the relational and non-relational systems. Yes, on our website. It's a JAR file linked to the application, so it's running with the application wherever that might be; it might be in an app server, it might be on a client. Basically the JAR file, the driver, has a copy of the topology and knows what the state of the different nodes is. So from the driver's perspective: the application just asked to look at this data, and the best place for me to get that data is over there. It makes that decision based on latency and based on the number of requests it currently has open for that node. Yes, yes. On most platforms; I don't know for sure, sorry. Yeah, unfortunately I don't manage that product, so I don't know for sure. Any other questions? OK, thank you very much again.
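For the get/put API and the iterators mentioned in that answer, here's a hedged sketch of a full-store scan through the standard oracle.kv client API; the same calls would apply whether the code runs in an application JVM or, with 12c, from Java inside the database. The batch size, store name, and hosts are illustrative assumptions.

```java
import java.util.Iterator;
import oracle.kv.Direction;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.KeyValueVersion;

public class StoreScanSketch {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "node01:5000"));

        // storeIterator scans the whole store; UNORDERED lets the driver stream
        // results from all shards in parallel, fetching 100 records per batch.
        Iterator<KeyValueVersion> it =
            store.storeIterator(Direction.UNORDERED, 100);
        while (it.hasNext()) {
            KeyValueVersion kvv = it.next();
            System.out.println(kvv.getKey().toString() + " = "
                               + new String(kvv.getValue().getValue()));
        }
        store.close();
    }
}
```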