Hello, I'm the executive editor of DataVersity. We would like to thank you for joining today's DataVersity webinar, NoSQL: Growing Up at Oracle, sponsored today by Oracle. Just a couple of points to get us started: due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A section in the bottom right-hand corner of your screen. Or, if you like to tweet, we encourage you to share highlights or questions via Twitter using hashtag DataVersity. As always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and any additional information requested throughout the webinar. Joining us today is Robert Greene, the NoSQL Product Manager at Oracle. Robert Greene is a Principal Product Manager Strategist for Oracle's NoSQL Database Technology. Prior to Oracle, he was the VP of Technology for a NoSQL database company, Versant Corporation, where he set the strategy for alignment with big data technology trends, resulting in the acquisition of the company by Actian Corp in 2012. Robert has been an active member of both commercial and open source initiatives in the NoSQL and object-relational mapping spaces for the past 18 years, developing software, leading project teams, authoring articles, and presenting at major conferences on these topics. In a previous life, Robert was an electronics engineer developing first-generation wireless spread spectrum security systems. And with that, I will give the floor to Robert to start the presentation. Hello, and welcome. Thank you, and thank you, everyone, for joining us today. We'll talk about NoSQL today, and in particular, what's driving NoSQL data management needs.
What are some of the lessons that we've been learning here over the last couple of years at Oracle as we've been putting the technology out into our customer base? And we'll talk a little bit about what we see in terms of the features and the architecture that are becoming really important, and that we think are going to have lasting value to data management professionals. We're going to start here by talking about these modern workloads. So there's a new workload which is hitting us, which is different than it used to be in the past, and that's driving a lot of the NoSQL activity. And really, what it is, is a more write-intensive workload. If you look back at the more traditional business systems, it was a lot of reading that was going on; it was pretty much read-dominated. You had loads, oftentimes, putting data into these systems, and then there was a lot of read activity going on, utilizing those systems. Whereas today, you've shifted to a much more write-intensive type of workload, which needs to be highly concurrent, and at the same time there's a new push for highly available systems. So let's talk about these workloads. The classic case that I think everyone is familiar with is Amazon, which is in the retail sector. There's a very, very read-intensive activity: going to a website, browsing around, and eventually buying something. But as it turns out, because of the way that these retail sites are trying to understand their consumer base, it's also a very, very write-intensive type of workload. As you're clicking around from page to page, they're capturing what's going on with that activity and capturing all that data; from those captured activities, they're eventually picking suggested baskets for you, and even doing things like trying to figure out whether or not there's fraud activity going on.
So they're taking a look at what you're doing in your activity and comparing that to things that have gone on in the past. So you get this very mixed workload between reads and writes, and at the same time it's driving a change in the concurrency requirements of the systems that are responding to it. And when we look at the implications of that on the database implementation, what we find is that it's much more difficult to deal with that kind of massive concurrency when you've got tons of people who are both writing and reading your database in the back end, because the implementation in the older relational space does runtime calculation of the relationships. And because it's doing this runtime calculation of the relationships, it has to deal with much more complex internal structures in order to materialize that set of relationships. Then, when you're trying to write at the same time, you typically end up having to do more than one IO operation in order to get that write to take effect; there are probably indexes and other things being maintained. And because you're drawing the relationship between two pieces of data by value, you're often updating more than one table, tables which contain the data that you join on later in read access. So there's a lot more going on there in terms of code path. So what people are doing, in order to deal with this mixed 50-50 kind of workload under really concurrent conditions, is moving to a much simpler data model for the applications where this makes sense. And so it looks much more like what we see on the right: just a simple set of keys which are pointing to a set of values. That data is aggregated together in such a way that you can use a single IO operation to get the information out. It's more navigational for one type of operation.
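The aggregate model just described can be sketched in a few lines. This is a minimal illustration using plain Python dicts as a stand-in for a key-value store, not any vendor's actual API; the key format and order data are invented.

```python
# Instead of spreading an order across normalized tables joined at read
# time, all related data is aggregated under a single key, so one IO
# operation writes it and one IO operation reads it back.

store = {}

def put(key, value):
    store[key] = value

def get(key):
    return store[key]

# One write places the whole aggregate.
put("user/1001/order/37", {
    "placed": "2013-05-01",
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-250", "qty": 1},
    ],
    "shipping": {"city": "San Jose", "method": "ground"},
})

# One read gets everything back: no joins, a single navigational lookup.
order = get("user/1001/order/37")
```

The trade-off, which comes up again later in the talk, is that the aggregate is shaped for exactly this one access path.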
And these workloads also need to be available, and what we mean by that is built-in availability. People are expecting this kind of built-in high availability capability in their data management platforms. This stuff is clearly available in relational technology as well, but it's usually an option, something which you add on. It gives you some additional capability which allows you to improve, or I should say retain, the performance of your operational systems by offloading things for reporting. And if you go further, you can get things like active-active relational setups as well. But again, you have to set this stuff up in addition to the original base deployment and enable it. Whereas with the newer systems in the NoSQL space, people just expect this stuff to be always on, right out of the box, everything built in: all the administration online, all the evolution of the schema online, any kind of upgrades, any kind of patching, all while the system stays operational and running. To deliver these characteristics, the system architecture becomes really the key, and so you've seen that shift in NoSQL systems. You start looking at characteristics like the ability to scale linearly, replicating data for reliability and for higher concurrency on resources, changing the way that your transaction semantics are done by using an asynchronous type of distribution model, and ultimately even doing things where, even though you're distributing things across lots of systems, you look for ways to get data localization. So, looking at this set of characteristics: linear scaling, where your data is split out across multiple processes that reside on multiple machines, which gives you some sort of isolation so that if you have any kind of failure, the system keeps running.
And the way that it does that is it not only spreads the data out in an automatic way across these multiple processes, but it also replicates that data. By replicating the data, you have multiple copies, and you can implement the software in such a way that if a particular process goes down, other processes can take over the operations that were being handled by the failed process. And sometimes, at least in the Oracle NoSQL Database architecture, you're seeing the introduction of one node which is elected to deal with the transactionality of the system. So write operations are typically directed at and driven through that node, and then replicated to the other nodes. When it comes to reading, a client can read from any node in the system, because the data is being replicated to lots of spots, and you can choose how you want to access that data in terms of its consistency, which we'll talk about further on in the presentation. The point is that if you give these NoSQL systems more hardware, they will really expand. They will grow the amount of data that's being managed by the system, and they will in essence improve your SLAs: improve your write throughput by giving you a broader number of processes that are consuming your data, and give you a larger number of processes that you can read from, spreading your reads out as well. And it's all done online. Now, the other important thing about the architecture is this notion of performing operations in an asynchronous manner. Very commonly across many of the vendors, you get to choose things in terms of durability, so that you don't always have to do fully synchronous operations and sync your data out to disk.
We see that you can get a BASE type of operation, basically available, soft state, eventually consistent, by just writing things into memory and asynchronously returning, expecting the system to take care of the rest of the durability. And at the other end, you can specify a full ACID transaction, in which case you'll do things like blocking and pushing things all the way out into the disk subsystem. There's a spectrum where you can choose the kind of durability that you want. And on the read side, it's the same thing. You can say, I just want whatever the latest state of the database is, and I don't care that it's 100% consistent, so just give it to me from whichever replica is responding the most rapidly. Or you can say, no, I absolutely need to make sure that I'm getting a consistent read, in which case things get flushed and your operations get directed in such a way that you know you're going to get a consistent result on read. Finally, data locality. We talked about these architectures trying to optimize IO in such a way that, while you're sharding things, you're trying to take a naturally related set of data and localize it onto the same physical nodes. That way, when you get a request for a root-level piece along with a bunch of related items, you don't have to do a relational join across the system; the related data may even be embedded into the same data structure, and so you have a very efficient IO operation to read things out.
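The durability/consistency spectrum just described can be simulated with a toy single-master, single-replica store. This is a sketch of the concept only, not any vendor's API; the class, policy names, and lag behavior are all invented for illustration.

```python
from enum import Enum

class Durability(Enum):
    MEMORY_ONLY = 1   # BASE-style: ack once the master has it in memory
    SYNC_ALL = 2      # ACID-style: block until replicas are synced too

class Consistency(Enum):
    ANY_REPLICA = 1   # fastest: read whatever the nearest replica has
    ABSOLUTE = 2      # route to the master for a guaranteed-current read

class ToyStore:
    def __init__(self):
        self.master = {}
        self.replica = {}          # replication lags behind the master

    def write(self, key, value, durability=Durability.MEMORY_ONLY):
        self.master[key] = value
        if durability is Durability.SYNC_ALL:
            self.replicate()       # block until the replica has it too

    def replicate(self):
        self.replica.update(self.master)

    def read(self, key, consistency=Consistency.ANY_REPLICA):
        if consistency is Consistency.ABSOLUTE:
            return self.master.get(key)
        return self.replica.get(key)   # may be stale

store = ToyStore()
store.write("k", "v1")                          # fast, memory-only ack
stale = store.read("k")                         # replica lagging: None
fresh = store.read("k", Consistency.ABSOLUTE)   # always current: "v1"
store.write("k", "v2", Durability.SYNC_ALL)     # blocks until replicated
```

The point of the spectrum is that both knobs are per-operation choices: one application can mix fire-and-forget writes with occasional absolute reads.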
What's also important, though it's not necessarily the case in every system, is, if you're adding things like secondary indexing, to be able to also localize your indexes to where your data is at. Because if you can do that, you optimize on network IO as well as disk IO, and at the same time you get some ancillary benefits. You avoid, for example, the possibility of index divergence, where something goes wrong while you're updating your data and your index in separate parts of the system: a machine fails or something goes wrong, and because the systems are meant to be resilient, they'll switch over and other processes will continue to handle requests, but you can get this divergence between the index structures and your data. You avoid those things when index and data are localized into the same shards. At the same time, you can get much more effective indexing. You can have low cardinality index results; you're not representing things as, for example, just another distributed reference table inside your underlying distributed system, but as a real index that's localized to the data that you're looking for. So you can find the three low cardinality matches out of a 400 million record/value data set easily and effectively. And you can still do things like system-wide ordering, because if you keep the index local, you can keep the ordering local, and all you do is a quick sort-merge at the client in order to get system-wide ordered result sets, which are typically very useful for programming a lot of applications. So having the distributed architecture, but still getting data-local storage, becomes really, really important. It also gives you a way to get transactionality at some level across values by having this kind of data localization. So that's a brief overview of what we see has been driving things. There's a real range in workloads, but things are much more concurrent.
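The client-side sort-merge mentioned above is a standard k-way merge. A minimal sketch, with invented shard contents: each shard returns its index results already ordered, and the client merges the streams into one globally ordered result set.

```python
import heapq

# Each list arrives sorted from its own shard, because the secondary
# index (and therefore the ordering) was kept local to that shard.
shard_results = [
    [("adams", 3), ("jones", 9)],
    [("baker", 1), ("smith", 4)],
    [("chen", 7)],
]

# O(n log k) merge at the client yields a system-wide ordered result
# set without any shard having to see another shard's data.
merged = list(heapq.merge(*shard_results))
names = [name for name, _ in merged]   # adams, baker, chen, jones, smith
```

Because each input stream is already sorted, the client never buffers more than one head element per shard, which is why this stays cheap even for large result sets.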
There's this expectation about systems being always on, and your architecture plays a significant role in delivering that kind of a system. So as we've built that kind of system, and others have built that kind of system, and we've been developing it now for a couple of years, we've been seeing some things, learning some things, and applying what we learned into future design work, and we'll talk about that a little bit in terms of lessons learned. First of all, we gave the Amazon case, which is the web retail case. But in fact, this is going on way beyond web retail, and we'll talk about the reasons why. There's also the fact that these systems don't stand alone; they often involve a lot of other types of data management technology, so they need to be integrated. And much is made of the fact that NoSQL is easy to get started with, and you can achieve some amazing things in first-level designs, but one of the issues that comes up is that as you go to extend these systems later on, and you go to version 2, version 3, version 4 and you're trying to add use cases in, you can hit some challenges, because it's very difficult to foresee all of the use cases up front when you do your data modeling. So, we see the manufacturing automation industry using the technology a lot: basically, there are sensors all over the place that are capturing staging data, at what stage various products are in the manufacturing cycle, and what the health is of the systems doing the manufacturing automation. We see things in terms of real-time dashboarding, whether it's business operations dashboarding or online trading, where you're capturing all the writes that are coming from trade activity, but at the same time you're reading that activity back over a period of time in order to understand trends. You've got, again, that mixed workload of both writes and reads.
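A common key design for the sensor-capture and dashboarding workloads just described puts the sensor id and a zero-padded timestamp in the key, so each write is a cheap append-style put while time-range reads become ordered prefix scans. This is a sketch under invented key formats and data, using a plain dict as a stand-in for an ordered key-value store.

```python
readings = {}   # stand-in for a sharded, ordered key-value store

def put_reading(sensor_id, epoch_seconds, value):
    # Zero-padding keeps lexicographic key order equal to time order.
    key = f"sensor/{sensor_id}/{epoch_seconds:012d}"
    readings[key] = value

# Write-intensive capture: each reading is one small put.
put_reading("drill-7", 1370000000, {"torque": 81.2})
put_reading("drill-7", 1370000060, {"torque": 83.9})
put_reading("drill-9", 1370000030, {"torque": 40.1})

# Read for the dashboard: all drill-7 readings, in time order,
# recovered as a key-prefix scan.
prefix = "sensor/drill-7/"
series = [v for k, v in sorted(readings.items()) if k.startswith(prefix)]
```

In a real sharded store the sensor id part of the key would typically also serve as the shard key, keeping each sensor's time series co-located for the scan.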
We see it in logistics, logistics management, where you've got real assets that are moving around and you're needing to match up assets with demand, doing this matching basically to put those things together. A lot of times there, again, you need to have a real-time visualization of what's going on with your system, and NoSQL tends to be playing a pretty strong role in that area. We see a role in oil and gas. This is another case of sensor analysis. We're seeing it, for example, in the drills, where they're trying to understand what's going on with the horizontal drilling, thinking about what's going on with the drills and how aggressively they can push to drill further and drill deeper. We're also seeing it on the discovery side: the ships that are moving through the oceans, dragging these big long lines behind them with sensors that are bouncing wavelengths off the ocean floor to figure out where the ideal drilling spots are. So there's this sort of sensor data analysis that has to go on. That's much more write-intensive; they're actually doing a lot of the analysis after the fact. But it suits NoSQL-based systems well, because it's a very high-intensity data capture which has a time series analysis component to it, and these systems tend to do very well for that kind of a use case. And then we see it in communications, in mobile personalization. This is somewhat related to retail these days, but it's most of these telecom providers, for example, that are exposing capabilities up into the consumer-facing applications that are on the mobile devices.
So they're doing analysis of what's going on with the overall mobile population and building profiles, and then allowing retailers to work through those in order to push customer content up into the apps that are being developed with the telecom providers. We see multi-channel retailing: we see people who are getting data in from various retailers and doing market segmentation on that, and then using that information for mobile personalization as well as web-based personalization in the ad space, and even some print types of media. So the point is that, although NoSQL started in the online retail space, we're really seeing a push well beyond that, into pretty much every vertical market: finance and banking and various other areas. Which is what I want to talk about here in order to help articulate the integration story. What we're seeing with these NoSQL systems is that they just don't stand alone. I don't know that that should be unexpected, but it's turned out to be the case that many systems are being fronted by NoSQL because you get this real low-latency characteristic, which is important to people, especially in consumer-facing applications. But behind these systems sits the emergence of big data, and however you define big data, it's giving you data for a much more organizationally real-time purpose. You're getting data from a lot of different sources now: public data repositories, streaming data from websites, data you just didn't use before, like the web logs and all of that. You're doing analysis on it, capturing patterns, capturing rules, capturing the different segmentation that's important, and you end up pushing all of that up into the NoSQL database systems where the real-time response happens. But it's an integrated application. It's not like these things are sitting alone.
The back-end systems are gathering data from lots of different sources. In this case it's a fraud system, and they're bringing a set of rules up into a NoSQL database. So as transactions are coming in, the system is pulling up the history of activity for that card, looking at the rules that were produced by the external systems, and making a decision, right? It's capturing that new activity, so it's having to do a write, and it's doing a pretty intensive read and analysis to figure out how to respond. And at the same time, the new activity going on in that system is also written into the data warehouse, which oftentimes is a relational-based data warehouse. It also ends up in Hadoop-based systems, because often you're looking for longer trends which involve a lot of data, especially in things like fraud, where to see the important patterns that emerge over time, you have to look at larger and larger data sets. Having Hadoop-based systems which can store very large data sets and analyze them as a whole works very well for that. So all these technologies start to work together, and you really need your NoSQL system to be integrated in such a way that it can work well with these technologies. Then, as we talked about, there's the fact that you just don't know all your use cases up front, and that starts to become a bit of an issue; it's impacting the overall designs of the technologies. An easy example is a system like a classic IMDB, where you go in and you want to look at everything related to some particular season or episode of a series, because it's your favorite.
You can see how that fits really, really well into a NoSQL database, because you've got this hierarchy of data from the seasons to all the episodes, the actors, and all the other data that's relevant to that episode. So you can store all of it as some sort of nested data; it's a low-impact IO operation to get all of it and pull it out, and then, boom, you can materialize it in various ways for the end user of the system. But what happens later on? You come back and some business user requests that you add a new capability, and they say, you know, people really liked this series and this movie, and it was all because they thought this particular actor did a really good job. So now they want to know: what are all the other movies that this actor is in? If you hadn't anticipated that, you probably would not have modeled your data in such a way that you could easily do that in a NoSQL system. And it's very burdensome to go back after the fact in a use case like that, because you don't normalize your actors; your actors are sitting in all the various movies that they're in, but there's no way to get that cross-system look at all the movies an actor belongs to, because it's the other direction in the data model. In relational systems, it's pretty easy to add that capability with another SQL query, because you're runtime-binding these relationships together, and it's fairly trivial. But with a NoSQL database, it's not so easy to do. So those are some of the things that we're realizing, and that's where you see features evolving in the technology which are helping it handle some of these add-on use cases in a less burdensome way. But the burden is there if you step too far outside of the more simplistic types of use cases.
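The modeling problem above can be made concrete. In this sketch, movies are denormalized aggregates keyed by movie, so the "everything about this movie" read is one cheap lookup, but the actor-to-movies question requires a separate reverse index that has to be anticipated and maintained by the application. All keys and data here are invented.

```python
# Denormalized aggregates: great for "show me everything about tt001".
movies = {
    "movie/tt001": {"title": "Example One", "actors": ["Ann Lee", "Bo Ray"]},
    "movie/tt002": {"title": "Example Two", "actors": ["Ann Lee"]},
}

# Without a reverse index, "all movies Ann Lee is in" means scanning
# every value in the store. This index is the extra structure the
# application must build and keep consistent on every movie write.
actor_index = {}
for key, doc in movies.items():
    for actor in doc["actors"]:
        actor_index.setdefault(actor, []).append(key)

ann_movies = [movies[k]["title"] for k in actor_index["Ann Lee"]]
```

In a relational system this lookup is just another join; here, the reverse access path only exists if someone modeled it up front, which is exactly the version-2 problem described above.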
So, those are the kinds of things in NoSQL technology that are helping deal with, again, the fact that the architecture is being built in such a way that it can support these workloads, and the fact that that has some particular implications for the types of use cases these things are really well suited for. There are some things that have been changing over the last, I'd say, 18 months or so in particular, happening across the spectrum of vendors in the NoSQL space, and we think you'll see more of it. And it's important, because the ones that are moving in these directions will, we think, be the lasting NoSQL technologies. One is transactions. Transactions have been very important in the past, and they continue to be important, and there's evidence of that we'll talk about. There's more of a move toward standards. Surprisingly, there's a lot of embracing of SQL, sort of coming full circle, and leveraging of secondary indexes. Security, again, has become very important, even in NoSQL technologies, as industries like banking start adopting the technology; they're demanding more and more security capabilities in these systems. And then there's built-in high availability. High availability is not only important for keeping the system up; there's also a notion of being highly available for disaster recovery, and for globalization of access, since so many corporations are global in nature and their consumers, their users, tend not to be localized to one continent. On the transaction side, it's being evidenced by what people are doing. If you look at Spanner, and if you look at F1, and those papers, one of the major things they're doing is taking Bigtable and reintroducing transactionality into it, so you can get a proper transaction even in a globally distributed database environment.
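Why the "5% case" discussed next is painful without transactions can be shown with a two-key update: if it fails halfway, the store is left inconsistent, and application code (or a human process) must detect and compensate. Here the transaction is simulated with a snapshot and rollback; the store, account keys, and amounts are invented for illustration.

```python
import copy

store = {"acct/a": 100, "acct/b": 0}

def transfer_with_txn(src, dst, amount):
    snapshot = copy.deepcopy(store)   # stand-in for a real transaction begin
    try:
        store[src] -= amount
        if store[src] < 0:
            raise ValueError("insufficient funds")
        store[dst] += amount          # both keys change, or neither does
    except Exception:
        store.clear()
        store.update(snapshot)        # rollback restores consistency
        raise

transfer_with_txn("acct/a", "acct/b", 30)    # succeeds: both keys updated
try:
    transfer_with_txn("acct/a", "acct/b", 500)
except ValueError:
    pass                                      # failed midway, but rolled back
```

Without the rollback, the failed transfer would leave `acct/a` debited with nothing credited, which is exactly the cross-checking and manual reconciliation burden the talk describes next.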
They're doing that because they found it was very difficult, for some corner set of use cases, not to have transactions; we're calling this the 5% case. Having a BASE system, basically available, soft state, eventually consistent, is really good for a large number of operations, but when you need a transaction, it's very difficult to deal with; the kind of coding that you have to put in place becomes awfully complex. So by having transactions, you get a simplification of the development process, but also a simplification of business processes, because oftentimes your software can't completely compensate for the fact that what should have been a transaction went bad, and so you have real physical processes: you've got people who have to go out and check things and cross-check and make sure that things are hooked up, and if they're not, somebody has to take an action, get in somewhere, and decide what is right and what is wrong. So you get a lot of process improvement by having the ability, when you really need them, to do transactions. There's also index divergence: if you have a system where you really need your search, you can't afford some of your data not being under that index and not materializing itself in results, and you can't afford the overhead, because you're dealing with very, very large data sets, of rebuilding those indexes all the time. Transactions become important in that overall data consistency. You see it happening at Google, and you see a number of very, very brand-new NoSQL entrants into the market which are also introducing transactions, and you'll see more of that; they're a very important part of lasting NoSQL solutions. The other change is a convergence back to tables. Even in the key-value space, it's sometimes difficult conceptually to do your data modeling, and so in order to help people with that, there's been this sort of
abstraction away from the storage model, a sort of user-level metamodel overlay that shows itself in the APIs. You can do all kinds of interesting storage implementations, right, and get very column-oriented types of storage compression, serialized storage capability; you can change the underlying retrieval mechanism to improve how you get at the data; you may even build an implementation dedicated to SSDs, for example, completely bypassing memory operations. Lots of choices. But for how you materialize that to the end user, how they interact with the system, how they conceptually data model, people are really, really comfortable with this notion of tables. And so you've seen that come out over the last 18 months or so: you see more of the implementations using a table metaphor. So now you get the improvement in data modeling, but you also start to get improvements in integration capabilities. When these systems need to interact with relational systems, for example, market segmentation data has to move from one system to the next, and updates that are happening in a NoSQL system need to move back into one of these more structured systems; when you're talking table metaphors on both sides, it's much easier to implement the data integration and transformation layers. We're also moving back to a SQL type of query; you see many NoSQL vendors out there moving back to a query syntax, because ultimately, as you add more use cases, you end up needing a way to declaratively say what you want out of your database, separate from its storage implementation. And if you don't use SQL, you end up inventing something that looks very much like SQL, and everybody already knows SQL. So it just seems to be coming back around to that way of accessing your data: even though it's fundamentally a very different architecture supporting very different workloads, still, oftentimes the access
methodology is moving its way back to SQL. And as you see these technologies working together to deliver a solution, you get to the point where you want to do real-time queries across the systems; you want to run SQL statements that span both your relational technology and your NoSQL database at the same time. If you find a way to create a unification of these different computational and database paradigms, then you're going to be able to turn out solutions much more effectively. But even though we have tables here, the vendors don't want to lose the value that NoSQL brought in terms of late binding of the schema, and having the application decide what it is that I actually read out of the database under my key or under my index. So it's about not getting caught up in the fact that you're missing some features that exist elsewhere, and moving in a more effective way toward integration, modeling, and access. You're introducing structure now. If we forget about document stores for a minute, in a pure key-value store it's very difficult to add indexes if you don't have structure, right? So once you introduce tables, now you have structure to your data, even if it's a very simple structure; you have indexes, and you can leverage those indexes for access paths that weren't modeled through the key space at the beginning, or weren't modeled through what ends up being table hierarchies. And when you've got that type of model representation, documents map onto these things as well, because if you have hierarchically related table structures, then you can represent documents. So it's really easy to take JSON, which is the data that's flying back and forth across people's JavaScript clients these days, and in many systems, quite frankly, easier than XML, and have that stuff just come back and easily morph into a table representation. So again, it's about not getting caught up in what the model is that the end users are using versus the implementation. And, frankly, we talked about things needing to be more and more
secure, and here you see all of the NoSQL vendors pushing further and further into security features: things like secure wire protocols and SSL, and pushing into authorization. And again, here is where you need the structure of a table, or some kind of structure, so that you can say, ah, I want to give this user access to this set of tables, this set of structures, and other users access to that, and then being able to control that on a more and more granular basis; and beyond that, auditing. I think you'll see more and more of the technologies that remain around investing in these areas. So, talking about what we see as a lasting NoSQL technology: it's a technology which is very simple in terms of its key access, with the architecture that we talked about, which is going to handle that write/read workload under high concurrency, while not getting caught up in the storage implementation versus what the end user sees; basic key-value, just highly concurrent and capable. And the sharding that comes with it, the replication that comes with it, the ability to expand into larger clusters as you need to, in order to maintain SLAs: as more data goes into the system, or more concurrent users, the system lets you throw more hardware at it and keep those SLAs even under higher concurrency. It integrates, as we talked about, with these other technologies like Hadoop and your relational databases. We've found very few of these environments that are completely isolated. There have been some: we've seen, in the high-speed sensor data capture in logistics and some of the manufacturing, completely dedicated NoSQL systems. But even in some of those places where it looks like it's just going to be high-speed sensor data capture, if it's managing processes, then typically when the monitoring completes, it ties into inventory and billing systems, and so you need a NoSQL technology which is integrated and working well
with all of the data management technologies. And manageability: these systems being complex and highly distributed, the more automation, the more manageable they are across the board. So not so many pieces, not technologies which are glued together on different versions, hard to upgrade one without knowing the dependencies on the other, et cetera. You see these issues when you want to do an upgrade between versions; it should be the click of a button, and you shouldn't have to think at all about how to manage that process, which services go first or second or third; the system should take care of that for you. Towards the end here, I'd like to open it up for questions, but since I'm here on behalf of Oracle and have been talking about the NoSQL space in general, I'd like to give a quick plug for Oracle NoSQL Database. It has many of the things that we just talked about. It's an advanced key-value database: at its core it's a key-value database, but it holds the notion of tables in the implementation that's exposed to an end user, so you get to think about things like primary keys and some other concepts that help you with the automation of distribution, like shard keys that become a part of the primary key. It deals with things like data locality, so things are sharded into a particular partition, and data that's related hierarchically gets brought along with it, and you get the choice of fully embedding data or keeping it localized but still linked, because maybe you have some more interesting access patterns, like the use case we talked about where the actor needs to look up and find all the movies that he's in. It does other kinds of operations on indexes, and it supports both eventual consistency and full ACID transactions. We call it an advanced key-value database because we don't really care what the value is: it can be binary, it can be typed in nature; we abstract that away, and what we focus on is the architecture, which gets you the
availability and the scale-out and the data center support and all of the stuff that's embedded in these technologies, and allows you to do pretty much anything you need for the kind of workloads that NoSQL technology is suited for. So I'd encourage you to get involved, and then I'd like to open it up and take some questions. Robert, thank you so much for this fantastic presentation. We have a lot of questions coming in already, which is great; I'll give everyone a couple minutes here. One of the most common questions that comes in is whether people are going to get copies of the slides and the recording, and again, I'll send those out within two business days, so by the end of Thursday I will get that out, with the nice list of information that Robert's showing right now. And also, just to let everyone know: as a registrant, you're entered to win a pass to our NoSQL Now conference, which will be held in San Jose in August, starting August 20th. Oracle of course is a platinum sponsor there, and you can meet Robert in person. So to get the questions started, Robert, let me start at the top here: are auto-sharding algorithms built into these systems, and if so, are there advantages of one algorithm over another? So yes, they are built into these systems. You can't really say that one particular algorithm is better than another; I think most people are using a regular MD5 hash. What becomes important is how your system rebalances the data underneath, because you're going to end up with hotspots in the data, and so that part of the implementation seems to be more important than the hashing algorithm. But I know that there are implementations out there which are also doing things like prepending prefixes, as a technique to not just auto-shard but to get a little bit of localization of the data: for example, prepending the key space with U.S., or D.E. for Germany, or C.A.
for Canada, so that data in these different regions ends up localized regionally for local access. I think that's the answer, but again, the most important thing is the system implementation, so that you can avoid hotspots by rebalancing the system when it needs to scale. Thanks, and the next one is a bit more of a comment than a question, but maybe you have something you'd like to add to it. The comment is: I always consider the database technology itself to be a soft schema; it is the policies of the organization, and a few bad practices, that make a database rigid. What do you think of that? So, there is an element of truth to that statement, in that you can add a column to a relational database and eventually expose that up into the application. In fact, you can implement things in such a way that you're using name-value structures that mask some of the underlying structure, so that you can add new keys and values into maps; from the end-user perspective it's just a map, and now there are more things that they get to deal with, and you could be storing that in relational structures. But in general you have to go out and change the application one way or the other: on the relational side, all of your SQL statements have to have the right ordering and be consistent with the columns, and in the NoSQL space you have to go out and do something to recognize the fact that there's more data there that you might want to use, so there's more application logic that gets written. So nothing is free here; there's definitely an element of truth to that statement. But I can tell you that in the NoSQL space you can simply change the schema just like that, and you don't break any applications: the applications which are running against the older schema aren't going to start having errors, they're not going to start dying. And that's really the goal: if the implementation is done well, you won't see any ill effect because you changed the schema, whereas in
relational space it's much harder to do that; you have to be really careful because of the order dependencies in the SQL statements, and depending on your implementation you can get into more or less trouble. So there is better flexibility in the NoSQL systems from that perspective, but it's not for free, as I said. Then, along those lines: are visualization tools typically built into NoSQL technologies, or are additional visualization tools required? For application visualization, things like real-time dashboarding, that's typically not built into the products, and there are other technologies which come to bear to do that. But from an operational perspective and a management perspective, I think all of the vendors have some sort of visualization capabilities which allow you to look at the statistics, and if they don't have their own console, then like Oracle they probably have standards-based statistics capabilities, producing JMX or SNMP stats that can get wired into other tools very easily. So there are two elements to that: what is special for the applications, and then the admin tooling, which most of the vendors have. Perfect. The next question is not necessarily Oracle-related, but an interesting question: have you had any experience with a property graph database, and do you see it as a lasting NoSQL technology?
I do, and I know I didn't talk about that very much in this presentation. Oracle has a graph capability, Oracle Spatial and Graph, which handles RDF tuples, and property graph support is in the process of being implemented as well, so it's in the plan, it's part of the product planning, though I can't say exactly what dates that will ship. I think it's a representation of data access which is non-traditional, important, and emerging: when it's the relationships between the data that become the focal point for the kinds of questions that you want to ask, and not so much the data itself, the graph technologies are very interesting, and they work really, really well to solve that class of problems. So I do think that they are lasting technologies that will be around. And Oracle Spatial and Graph actually works on Oracle NoSQL Database, so although I didn't talk about it in the presentation here, that's one more example of how we abstract away the storage from the API that the end user is experiencing; it's very easy to represent graph models on a key-value type of underlying storage structure. That's great, some great information, and I'll include the information on that particular database as well. And what is the downside of implementing a key-value schema in an RDBMS?
You know, I think that the only downside is probably cost, and that's part of how this segment of technology came about, because there is nothing that prevents you from implementing a single-table sharded implementation in Oracle, and for huge Oracle users, relational database users, they can do that. And the downsides are the same kind of downsides that you have in the NoSQL technology, which is: if it's a key-value blob and you want to access something in a different way later on, how do you do that? Now you need to start introducing structure, so you end up with data overlays that help you deal with that, and you can do it, but then you have to ask yourself why. When you have such a simple access pattern, and a simple data model suffices, why pay for this awesome data management computational platform that can do amazing things? What you can do with relational databases is just phenomenal, but you don't necessarily need all that power for the kind of workload and the kind of access pattern that people are deploying these NoSQL technologies for. So it really ends up being cost more than anything else. Thanks, and the next one is: do you think, since NoSQL seems to be adding regular RDBMS functionality, that eventually they will both be merged? I'm sorry, I missed the question, can you repeat it? Do you think, since NoSQL seems to be adding regular RDBMS functionality, that eventually they will both be merged? That's a complicated question; yes and no, and that's part of why we're having this webcast, to give people a feel for it. Quite frankly, I think it will go the other way around: the relational database will take on a number of capabilities that will allow it to subsume what goes on in some of the NoSQL technologies these days. In particular, the relational database can take on XML capabilities and bring XML management into it, and that keeps that part of the market from
getting interesting. I think the same thing will happen, quite honestly, in the pure document space: dedicated document-management types of databases will still be interesting when they're specifically addressing that issue, but in the cases where people are using those technologies and a relational database would do just fine, where the workloads aren't really demanding the 50-50 read-write operations, high concurrency, and the other things that really drive NoSQL workloads, I see the relational database taking on some of the things that people are putting into NoSQL now, for very practical reasons, as people become more in tune with what each is really good for. But then there's this class of NoSQL which we're talking about here today, which is really these specialized, simplified access patterns, not much more than basic where clauses and index access, but in distributed, scale-out types of systems, where you have this 50-50 kind of read-write workload under high concurrency. On the availability side, everybody's moving toward that; NoSQL and SQL systems are both pushing the availability envelope. So I see them not becoming the same thing: these two systems will remain separate yet integrated, each handling the kind of workload that it's most suited to handle. The key-value side, and even the graph workloads, will remain important and relevant, and those systems will remain identifiable as separate systems, while some of the other stuff that's going on out there, I think, will get consumed by the relational capabilities. And with that, we are just running out of time, so wrapping it up, and of course we get another question: a reminder that I will be sending a follow-up email within two business days with the slides from this presentation, as well as the recording and all the additional information provided. Any final words you want to wrap up with? I'd like to thank the audience for their time; everybody, I know, is very busy,
and hopefully you've gotten some value out of this. Please feel free to join our community and reach out, and I'm happy to answer other questions if you have them. So again, I appreciate your time; thank you. And let me reiterate: Robert, thank you, and thanks to everyone who attended and for all the fantastic questions that came through today. Just a reminder that you can meet Robert in person at NoSQL Now, happening August 21st; see everybody there. I'll get that email out to everyone by the end of Thursday. Hope everybody has a great day, and Robert, thanks for a fantastic presentation.
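
[Editor's note] The auto-sharding answer above (MD5 hashing of keys, plus prepending region prefixes like U.S. or D.E. for a degree of data locality) can be sketched in a few lines. This is a minimal illustration of the general technique, not Oracle NoSQL's actual implementation; the shard count, function names, and prefix scheme are assumptions made up for the example.

```python
import hashlib

NUM_SHARDS = 8  # illustrative cluster size, not a product default


def shard_for(key: str, region: str = "") -> int:
    """Hash a (possibly region-prefixed) key onto a shard, MD5-style."""
    full_key = f"{region}/{key}" if region else key
    digest = hashlib.md5(full_key.encode("utf-8")).digest()
    # Use the first 4 bytes of the digest as an integer, mod shard count.
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS


def shard_for_localized(key: str, region: str) -> int:
    """Place data by region prefix only, so one region maps to one shard.

    This trades even key distribution for locality: all keys in a region
    co-locate, which is the localization idea mentioned in the Q&A, but it
    also concentrates load and can create the hotspots Robert warns about.
    """
    digest = hashlib.md5(region.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS


# Plain hashing spreads keys evenly; region-only hashing co-locates them.
print(shard_for("user:1234", region="US"))
print(shard_for_localized("user:1234", "US") == shard_for_localized("user:9999", "US"))
```

Note that the second print is always True by construction: both calls hash only the `"US"` prefix. Real systems balance these two extremes, which is why the rebalancing machinery matters more than the hash itself, as the answer points out.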
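
[Editor's note] The "soft schema" exchange above — values stored as maps so that new fields can appear without breaking readers built against the older schema — can also be sketched briefly. The in-memory dict standing in for a key-value store, and all record and field names, are hypothetical, chosen only to illustrate the point.

```python
import json

# Stand-in for a key-value store: key -> JSON-encoded value.
store: dict[str, str] = {}


def put(key: str, value: dict) -> None:
    """Write a record; the store imposes no schema on the value."""
    store[key] = json.dumps(value)


def get(key: str) -> dict:
    """Read a record back as a plain map."""
    return json.loads(store[key])


# A "v2" writer adds a field ("tier") that older readers never knew about.
put("user:42", {"name": "Ada", "email": "ada@example.com", "tier": "gold"})

# A "v1" reader asks only for the fields it knows; the extra field is
# simply ignored, so the schema change breaks nothing -- but, as the
# answer notes, nothing is free: an application that *wants* the new
# field must be updated to look for it.
record = get("user:42")
print(record["name"], record.get("email"))
```

This mirrors the trade-off in the answer: the flexibility is real, but the extra application logic to recognize and use new data still has to be written.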