Ladies and gentlemen, please welcome Oracle's Vice President of Global Business Development and Product Marketing, Harish Venkat.

We plan to go over the details of several engineered systems: what they are and how customers are realizing value from them today. I have a panel of experts with me. Each of these distinguished individuals has been integral to the development of these products, and on top of that they've been working directly with customers to ensure they're maximizing the value of each of them. It is my privilege to introduce Wim Coekaerts, Ashish Ray, Bob Thome, and Mike Workman. Thank you, gentlemen, for joining me; it's a pleasure to have you here.

Let's start off talking about the Oracle Database Appliance. ODA is a great appliance: customers can deploy all of their applications and databases in one single box. Bob, can you talk a little more about ODA?

Sure. We're now on the fourth generation of the Database Appliance, I believe, and it provides, in a single package, everything you need to deploy a high-availability database or application. We have two servers, we have storage, we have networking; everything you need is in a single package. We talk about it as complete, simple, reliable, and affordable. Complete means we have everything you need for both database and application. Simple means it's easy to deploy: wizardry, automation, and best practices are built right into it, so you don't need specialized skills and expertise in order to deploy a high-availability database. It's reliable: we have done lots of testing to make sure the whole package works together before you deploy it. Juan talked about wanting to do what thousands of other people have done, and that's what you get with the Database Appliance, a configuration that thousands of other people have tested. Of course it runs the Oracle high-availability software, Real Application Clusters. And lastly, it's affordable in a couple of different ways: we have capacity-on-demand licensing to make it less expensive to deploy, and we have a lot of automation that leads to simplicity and less time spent managing the system. So: simple, reliable, affordable, and complete.

Thank you, Bob. The Virtual Compute Appliance is part of a growing converged infrastructure market. Wim, talk to us about VCA. How is it different from the rest of the products in the integrated infrastructure market?
When we started working on VCA, what really explains what it is and why we built it is that we wanted to make sure that, from a customer point of view, you get a complete appliance that provides the networking, the storage, the compute power, the virtualization layer, and the operating system all together. The reason we started building it is that we have experience doing Linux support, operating system support, and virtualization support, and most of the issues we saw were customers purchasing storage from one vendor and servers from another vendor, then getting the software, trying to install it, hitting the wrong device drivers, and so on. All the work of getting these things installed was very complex and error prone, and we spent most of the problem-solving effort getting different pieces of the stack to work together. So the Virtual Compute Appliance is in many ways a best-practices implementation of that entire stack. The networking is configured the right way, the compute nodes are configured the right way, and we bring everything up from our side. We have all the patches installed, and all the orchestration when a new node comes in is done automatically by us. From a customer point of view, you buy a big rack, you power it on, and we do everything: the hardware gets configured, the software gets configured, and all you care about is deploying virtual machines with any x86 operating system inside.

Thank you, Wim. Flash storage is everywhere. Mike, talk to us about FS1. What is it, and why did we build it?

Well, it's a flash storage array. It's highly scalable, from two up to sixteen high-availability nodes, it scales up to petabytes of flash, and it delivers on the order of 80 gigabytes per second in a sixteen-node configuration. And it's very flexible. We call it an all-flash array because it was designed for flash; it's different from the old-style arrays in that it was designed for flash as opposed to disk, so the topology and the architecture are meant to express that performance. The reason people are interested in flash, obviously, is performance: replacing electromechanical structures, disks, with solid state. The idea of the system, including quality of service, storage domains, and a lot of other features, is to be a primary workhorse for the data center. You can deploy all-flash configurations at the same time as using disk, so you get the economics of disk and the performance of flash. Because of that flexibility and dynamic range of scalability, we call it the industry's first mainstream flash array.

Excellent, thank you Mike. Ashish, what are some of the conventional issues with backup and recovery that led Oracle to design and create the Recovery Appliance, the Zero Data Loss Recovery Appliance?

It's a good question, Harish. Backup and recovery as an industry has existed for several decades, and the problems have existed just as long. Fundamentally, it's a problem of data loss: your data is only as good as the last backup, whenever that was; it could be hours, it could be days. Larry alluded to that. The problem is that if you take your backups and then want to restore later, you lose all the intervening transactions.
So that's the area of data loss. Lately we have seen some innovations in the space of deduplication, but deduplication solutions work on a stream of bytes, and that's all it is to them: a stream of bytes. At their level they do not recognize that the stream of bytes comes from a busy transactional database. Net-net, this means deduplication ratios are often unpredictable and poor where database backups and database blocks are concerned. And finally, we have the nagging issue of backup windows. Today a business aspires to be 24x7; despite that, when a backup job kicks in, no critical business activity can run during that window, which means productivity suffers.

Thank you, Ashish. Let's switch gears and talk about how customers are using these products. Wim, when you're out there talking to customers all around the world, tell us how VCA is helping them solve real-world business problems.

Sure. There are a number of different scenarios. The first one is service providers: ISPs that primarily host Oracle software, or host a multitude of applications including Oracle products, and want to make it very easy and quick to get going when deploying new customers and adding new infrastructure to their environment. One of the advantages of the Virtual Compute Appliance is that you get a big rack, you power it on, and about two hours later you're up and running and can start provisioning software. In the end, that's the business these service providers are in: deploying new application stacks, not spending more money figuring out which components work together or keeping sysadmins and hardware admins on site to maintain all these things. So that's one example, the service provider community that hosts different customers. The second is that the Virtual Compute Appliance is sort of the general-purpose appliance alongside all the other engineered systems, which are purpose-built. We use the same principles and the same software stack as the rest of the engineered systems, but it's general purpose: run Windows, run Linux, run Solaris, run your own app, run any application inside that environment. What we typically see is customers virtualizing databases in a more generic way, with third-party applications or Windows workloads around them, taking a big rack, putting all these things together, and deploying that really efficiently.

Thank you, Wim. Mike, talk to us about where FS1 is being used, flash in particular, and what sort of problems FS1 is solving today.

Well, obviously flash is about speed and performance, but also agility and flexibility of deployment: a single platform that does the job competitors take up to ten products to cover. There are applications where people are deploying the FS1 as an all-flash array, and those are really interesting: speed, performance, scalability. The total capacity of even a two-node array is up to 175 terabytes, which is an enormous amount of flash when most people's two-node arrays top out at 10 or 20. That's an interesting application playing to the flash. At the same time, people are using it for archive, and we have customers buying it as an all-disk solution, and that's fine, because we can meet the economics people expect from disk.

That's terrific.
Perhaps the most interesting to me are the cases where people are using the storage domains feature of the product. The FS1 is essentially a virtualized storage array, but we step back one step from that and allow people to define physical domains within the storage pool that act as sort of virtual FS1s. You can have two, five, or 25 domains, and each domain has its own resources, its own auto-tiering, its own quality of service; they look like little FS1s. A domain can be all flash or all disk, and I think the most interesting configurations are the ones doing disparate jobs: OLTP on all flash or two tiers, while with all four tiers you can build a great general-purpose machine, or build archival solutions using flash for index and metadata and capacity disk or performance disk for the middle ground, the bulk of the storage. We also have people using it as a backup target with just capacity disk. I think those are the most interesting because they illustrate the flexibility and the power of the FS1 in the data center.

Indeed. Thank you, Mike. Ashish, talk to us about some of the top features of the ZDLRA and how it differentiates itself from similar products in the marketplace.

Sure, Harish. One of the most innovative features of the Recovery Appliance is how we have integrated with real-time redo transport. The redo block is the fundamental unit of change within the Oracle database. With the Recovery Appliance, as changes occur within a protected database, the redo blocks are shipped directly from memory, from the SGA, to the Recovery Appliance; it's like an extended memory copy over the network. Thereby these protected databases are really protected down to the last sub-second, which is huge from a recovery point objective perspective. The second thing is that we minimize any production server impact when backup jobs kick in, because all the backup operations are now consolidated on the Recovery Appliance. The protected databases are enabled with an incremental-forever strategy, which means only the changes are shipped: no more full backups after the first initial full backup. And finally, through the integration with Enterprise Manager Cloud Control, IT executives and administrators can have a real-time recovery view of all the databases across the enterprise: how they are doing from a recovery window perspective and from a data loss threshold perspective. It doesn't matter whether a database is tier zero, tier one, or tier two; all of them are now protected in a very streamlined manner with the Recovery Appliance.

Thank you, Ashish.
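To make the incremental-forever idea concrete, here is a minimal sketch of the concept Ashish describes: after one initial full backup, only changed blocks are ever shipped, and a complete restore image can be synthesized by layering those incrementals onto the base. This is purely illustrative Python with hypothetical names; it is not how the Recovery Appliance is implemented internally.

```python
# Toy illustration of an "incremental forever" strategy: after one initial
# full backup, only changed blocks are shipped, and a full restore image is
# synthesized by layering incrementals onto the base. Conceptual sketch only.

def take_full_backup(database_blocks):
    """Copy every block once; done only the first time."""
    return dict(database_blocks)

def take_incremental(database_blocks, last_seen_version):
    """Ship only blocks whose version changed since the previous backup."""
    return {blk: (ver, data)
            for blk, (ver, data) in database_blocks.items()
            if ver > last_seen_version}

def synthesize_full(base, incrementals):
    """Build a 'virtual full' by applying incrementals, newest last."""
    image = dict(base)
    for inc in incrementals:
        image.update(inc)
    return image

# Example: block id -> (version, payload)
db = {1: (1, "a"), 2: (1, "b"), 3: (1, "c")}
base = take_full_backup(db)

db[2] = (2, "b'")                      # one block changes after the full
inc1 = take_incremental(db, last_seen_version=1)

restore_image = synthesize_full(base, [inc1])
assert restore_image[2] == (2, "b'")   # the restore reflects the change
```

The point of the sketch is that the production database only ever pays the cost of shipping changes, while the appliance carries the work of assembling full restore images.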
Bob, we know ODA is a great platform for databases. From your customer interactions, tell us a little about how customers are using ODA.

Okay. As you said, the name Database Appliance gives away part of the story: it makes a great database platform for running your OLTP databases, your data warehouses, or even your mixed workloads, especially when you want to deploy high-availability databases. Remember, we have everything there to run our high-availability Real Application Clusters stack, so if you need a high-availability database, there's no easier way to get it. However, there's more to it than that. We have a lot of customers who are looking at it as a consolidation platform. The X5-2 Database Appliance has three times the number of cores of the original Database Appliance; we used to think of it as a little baby box, and it's no longer a little baby box. It's a pretty capable system, and a lot of customers can consolidate lots and lots of databases inside a single Database Appliance, which makes it very cost-effective for consolidation. The other thing we see people doing with it is test and development. It has integrated storage snapshot capabilities, so you can quickly take a snapshot of a database and deploy it to a test or development use case. That's built right into the system; there's nothing else to buy: a single command and you've got yourself a database. And remember, the Database Appliance runs the same stack as Exadata, so it's great for backing up or testing either an Exadata or another Database Appliance. Lastly, it's more than a database appliance: we have the ability to run applications. Virtualization is integrated into the system, and because of that we can run applications and the database inside the same box. We have a lot of ISVs embracing this; they're able to deploy solutions in a box, and they can save a lot of money that way by deploying a single solution to their customers. We've also worked with Oracle applications, so all of our applications run there as well.

Thanks, Bob. We spoke a lot about acquisition costs and performance today; let's talk a little about the ongoing costs associated with management. We know that's the bulk of the cost. So how does VCA help alleviate or reduce some of these ongoing management costs, Wim?

It's certainly one of the reasons we think it's a great solution for customers. There are probably three aspects to it: the hardware, the software stack, and the combination of hardware and software. One of the advantages of VCA is that when you buy the base rack, it's actually completely cabled top to bottom. You can go up to 25 compute nodes in the rack, but you can purchase it with as few as two. So you buy the rack with two, and when you need to add nodes, all the cables to add a node are already in there and they're labeled. From a hardware admin point of view, if you need to add more capacity, or from a maintenance point of view, you buy an extra node and you plug it in.
The cables are already there, numbered and labeled, so it's very easy to plug the node in; you power it on and you're done. You don't have to go and install the operating system or the virtualization software; we already do that for you. The VCA orchestration software will detect that a new node has been added, install Oracle VM on it, install all the right networking components, discover it in the same server pool, and bring it up. So you really don't have to do anything other than physically plugging in extra compute when it's needed.

Then on the software side, we provide bundled patches that take everything into account: operating system updates, firmware updates, updates for the storage appliance, updates for the networking, updates for the virtualization layer, and all the device drivers that need to be updated. They're all bundled together, very similar to what we do with all the other engineered systems. You basically get one patch bundle, you apply it, and we automatically start installing it. In fact, we do it in a rolling fashion. One of the things we mentioned earlier is that every component is HA: we will bring down one management node, switch to the other one, apply the updates to the first, and once it's updated, switch back and apply the updates to the other one. Once we start updating compute nodes, we'll migrate the VMs to another node, update the compute node, and migrate the VMs back. So it's a completely hands-off mechanism where you don't have to worry about whether a new version of the software or firmware will work on the particular server you purchased, or which version of the virtualization software will work with your storage or the storage plugins. We take all of that out of the picture and make it really easy for people to focus on what they need: they want to bring up new VMs really quickly, and they don't want to maintain the box themselves. It just happens automatically.

Thank you, Wim.
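As an aside, the rolling, hands-off update flow Wim describes (drain a node of its VMs, patch it, move the workloads back, then repeat on the next node) can be sketched in a few lines. The node names and helper functions below are hypothetical; this is a conceptual illustration, not the VCA orchestration software.

```python
# Conceptual sketch of a rolling compute-node update: evacuate VMs from one
# node, patch it, move the workloads back, then continue with the next node,
# so applications stay available throughout. Not the real VCA software.

def live_migrate(vm, source, target):
    source.vms.remove(vm)
    target.vms.append(vm)

def rolling_update(nodes, patch_bundle):
    for node in nodes:
        spare = next(n for n in nodes if n is not node)   # any other node
        evacuated = list(node.vms)
        for vm in evacuated:                              # drain the node
            live_migrate(vm, node, spare)
        node.apply(patch_bundle)                          # firmware, OS, OVM, drivers
        for vm in evacuated:                              # move workloads back
            live_migrate(vm, spare, node)

class Node:
    def __init__(self, name, vms):
        self.name, self.vms, self.version = name, vms, "3.2.1"
    def apply(self, patch_bundle):
        self.version = patch_bundle                       # stand-in for real patching

nodes = [Node("cn01", ["vm1", "vm2"]), Node("cn02", ["vm3"])]
rolling_update(nodes, "3.2.2")
assert all(n.version == "3.2.2" for n in nodes)           # every node patched
```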
Mike, we know FS1 is Oracle's flagship SAN offering, and SAN is all about sharing storage resources across application workloads. What do we have in FS1 that delivers great application performance as well as ease of manageability?

Well, there are quite a few things, but I'll focus on three. First and foremost, the FS1 is built around a framework called quality of service; actually, we've since renamed it Quality of Service Plus. At the core of quality of service is the fact that there's an enormous amount of control over RAID levels, sequentiality, caching, network, compute, and so on in the machine. That sounds a little daunting, but I'll get to why you don't have to worry about that so much. Above that control over the hardware is a prioritized queue. Storage since the RAMAC has essentially used first-in, first-out queues, and the FS1 has replaced that with a prioritized queue. Simply put, that allows you to align business priorities with the execution of the storage system, so that it does first things first. If you walk out of your house in the morning and your plan was to rake some leaves, even if that doesn't sound like a good plan, and you notice that there's a broken pipe, you probably ought to drop the rake and go fix the pipe. That's what we all do in life: first things first. When there's OLTP running your web store, you'd probably better do those IOs before you do test-and-dev IOs, and Quality of Service Plus in the FS1 allows you to align those business priorities with the resources and the execution of storage transactions. The second thing is storage domains, which I talked about already; those let you go one step further than aligning priorities and physically isolate resources. And the third really important one is this: given the capability of the FS1 to do so many different things across four tiers of storage, auto-tiering and all of that, how can I make it simple for the administrator? The answer is application profiles. When you deploy an Oracle database or a Windows Exchange server on the FS1, there's a drop-down box that sets all those configurations to pre-tested configurations, optimized to match the way the storage handles that workload, which differs between applications, and the way the FS1 is configured to give you the best dollars per IOPS and the best dollars per terabyte. That's what the thing does. So those three things in concert make a very powerful core storage system for your data center.

Very, very compelling. Thanks, Mike.
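The difference between the first-in, first-out queues Mike mentions and a prioritized queue is easy to see in a small sketch. The request names and priorities below are made up; this only illustrates the queuing idea, not the FS1's actual scheduler.

```python
# A FIFO queue services I/O in arrival order; a prioritized queue does
# "first things first." Illustration of the queuing idea only.
import heapq
from collections import deque

requests = [("test/dev report", 3), ("web store OLTP write", 1),
            ("archive copy", 4), ("web store OLTP read", 1)]

fifo = deque(requests)                      # classic arrival-order servicing
fifo_order = [fifo.popleft()[0] for _ in range(len(fifo))]

pq = [(priority, seq, name) for seq, (name, priority) in enumerate(requests)]
heapq.heapify(pq)                           # business priority drives ordering
priority_order = [heapq.heappop(pq)[2] for _ in range(len(pq))]

print(fifo_order)       # test/dev report is serviced first, just because it arrived first
print(priority_order)   # OLTP requests come first, the archive copy last
```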
Bob, ODA is known for reducing database administration and lowering costs. How do we actually make this magic happen?

Well, we already talked about some of it: simplicity. Remember, I said it's complete, simple, reliable, and affordable, and the key here is simple. We've baked in a lot of best practices and a lot of automation. It was funny, the description Wim gave of turning on the VCA is actually very similar to many of the things we've done with the Database Appliance. We've made it very simple, we've automated things, we've built patch bundles and tested everything to make sure it works together, so that you don't have to spend a lot of time researching best practices, researching patch compatibility, researching this and that. We control the stack, and we made sure it works together.

So imagine you're an ISV and you have an application you want to ship out to a customer. Rather than sending somebody on site for weeks at a time to try to build up a hardware stack or get it running on the customer's equipment, you can almost drop-ship something in there, and the less time somebody spends on site trying to work on things, the lower the opportunity cost, not to mention the reduction in time spent in hotels and other such things. So there are big savings there. In fact, solution-in-a-box is one of the use cases I didn't get to earlier, and it's something ISVs especially are really excited about. There are other ways you can save money, too, beyond the pure cost of somebody doing something. You have administrators; they're expensive skills, sometimes hard to find, and you're paying them a lot of money. Do you want them sitting there doing mundane deployments and patching exercises over and over, or do you want to spend those resources on higher-value tasks for the organization? You probably want to spend them on the more valuable things. So time that's not spent deploying, patching, scratching your head and figuring out why things don't work or why there are incompatibilities: that leads to real savings.

Thank you, Bob. And finally, Ashish, talk to us about the overall value of the Recovery Appliance for any mission-critical business.

Sure. As I mentioned before, there are a lot of critical gaps in this area of data protection, and the Recovery Appliance fills those gaps. I talked about how it eliminates data loss exposure within the enterprise, so any costs associated with data loss are now eliminated with the Recovery Appliance. I also talked about how, by offloading backup operations to the Recovery Appliance and enabling an incremental-forever strategy, we can eliminate this whole concept of backup windows, which improves productivity throughout the enterprise. I also talked about how IT managers, administrators, and executives can now have an end-to-end, real-time view of the state of data protection and data recovery across the entire enterprise, which improves manageability costs. And finally, as data explosion happens, as your business grows and the number of databases grows, the Recovery Appliance can scale out very easily and dynamically, and not only by adding capacity; we can scale out with increased compute and increased networking bandwidth without any impact on existing production servers. That really is huge. See, Harish, at the end of the day we can talk about a lot of innovation, a lot of new features and the benefits they provide, but if you look at the state of data protection, backup, and recovery and ask any IT administrator what their number one pain point is, chances are he or she will say backup and recovery management. What we have done with the Recovery Appliance is eliminate the costs associated with fragmented backup and recovery management. We have made it vastly streamlined and much more standardized, kind of like the global standard that both Larry and Juan alluded to.

Thank you, Ashish, and I want to thank the rest of the panelists as well.
So I hope you found this informational and very insightful. I want to thank you for your participation, both online and in the auditorium, and enjoy the rest of the day. Thank you.

Ladies and gentlemen, please welcome Oracle's Vice President of Big Data and Advanced Analytics, Neil Mendelson.

So thank you for hanging in there. I love talking about this topic; it's my favorite topic, big data. And as you heard Larry talk this morning, I'm actually going to focus mostly on the gravy rather than on the meat. I love the meat, I love the gravy; after I got married I learned to love tofu, and I actually enjoy it now, believe it or not. So let's get started. Could you advance the slide, please?

Okay. If we back up just a moment and ask ourselves what this big data stuff is all about: big data and digital transformation really go hand in hand. These two things are part of a much larger economic story, where businesses, organizations, and governments are looking to leverage information, data about people and places and things, to drive new businesses, to increase efficiency, and to focus on customer experience. In fact, these results came from a Sloan survey a number of years back that asked people where they expected to focus their big data efforts, and the number one area that came out was customer experience. We're all customers ourselves; we experience that both in shops and online, and we're always endeavoring to get a better experience. Why is that more important today than ever before? Because competition is greater today than perhaps it ever was, and not just like-for-like competition, not just banks competing against banks, but telcos now offering banking services, and banking companies willing to essentially pay your telco bill in order to do business with you. We're getting unlike competitors that are really mixing it up. In the operational efficiency area, we're seeing big data playing a role as well. Let's take an example from the financial services industry: increasingly, regulators are asking banks and financial services companies to keep more and more information online and available for stress tests and for audits. From the financial institutions' perspective, this is just an added burden and cost, so what they're trying to do is adhere to the regulatory requirements without breaking the bank, and big data, the usage of these commodity, industry-standard technologies, is helping people drop the cost while still keeping more data online than they would otherwise be able to. And finally, we come to net-new business models. We see these cropping up all over, where businesses that didn't exist before are now making businesses out of selling information and selling data.
In fact, we're increasingly seeing jockeying over who's got more data about you or me or others. We're seeing examples, again in the financial services industry, where the banks themselves are looking to offer services to their clients, the credit card companies want to offer services to those banks, and everybody is looking to disintermediate everyone when it comes to the information, because the information really is that capital, that new asset they're looking to take advantage of. So we started off looking at customer experience, and we have a short video to look at one such customer that used a Big Data Appliance to approach their customer experience data with a big data project. Why don't we play the video?

De Persgroep is a multimedia publishing company active in the Netherlands and Belgium, both in Europe. We publish eight newspapers in both countries and more than 25 magazines. We have some huge websites, with more than six million unique visitors per day across all our websites together. We are also active in broadcasting, with television and radio, both in the Netherlands and, for most of those activities, in Belgium. The company strategy is in fact a customer-centric strategy, so we want to get a 360-degree view of our customers and our prospects, and the big data project helped us achieve that goal. One of the areas in which we're able to achieve beautiful results using big data is churn prediction. Based on all the data that we collect on our websites, on your behavior, payment behavior, and so on, we're able to build a prediction model which, with an accuracy of 92 percent, is able to predict that you probably won't renew your newspaper. So our approach to renewals is completely different for the people in that segment than for the other people, and this has brought us a lot of value and a lot of customers who didn't stop the newspaper when they otherwise would have. The selection of the Big Data Appliance was quite easy. We went for the Oracle Big Data Appliance as it was very quick to install, very easy to install as well, and it was far cheaper than building our own Hadoop cluster, so it was in fact a no-brainer. We could of course have built our own Hadoop cluster, and we did the exercise and we did the math. We would have needed at least 12 servers, we would have had to support those servers, there had to be software on those servers, and so on, and we compared that solution with the Oracle Big Data Appliance. I must honestly say I didn't think at first that this would be affordable for us, but when we compared both solutions, the Oracle solution was by far cheaper than the Hadoop cluster and required less management as well. That's why we went for the Oracle solution.

Great. So that's a really good example from another industry, in this particular case the multimedia industry, which is incredibly competitive. Here they are with newspapers and other media properties, dealing with how they keep customers in the fold: how do they identify them, how do they predict when someone is about to churn and give them an offer, perhaps of something on the web
or other content they have, in order to keep them as a customer. And of course, I think one of my favorite parts is that you can hear Luc almost laugh a little, thinking, well, we didn't think we could afford this from Oracle. That's really Larry's original point: you wouldn't think that the lower initial cost of that machine would come from Oracle. Traditionally we focused on value, and now, with the Big Data Appliance and other engineered machines, we get the opportunity to compete both on price and on value, both on the meat and on the gravy. It's an exciting time.

So, moving forward: how does one put together one of these configurations? How did De Persgroep actually get started? If you could advance. Thank you. So here's our latest offering, the X5-2. Compared to the previous model, the X4, this latest machine has more than double the number of cores, 2.25 times more, and double the amount of memory. We have the latest version of Cloudera Hadoop on the machine, and it's exactly the same price as the previous model. We've taken advantage of these industry standards, in memory and in the two-socket CPUs, the two-socket servers that Larry talked about, and we're able to pass those increases in power and memory on to our customers while keeping the price constant. So it's an even more powerful box than before, and you can see we've spent a lot of time balancing the amount of CPU in the box, the incredible networking in the box, and the storage in the box as well.

Let me talk about the little extra things we provide, the gravy, as Larry said. On the hardware side, from a support point of view, these machines are able to essentially dial home when they have issues. As things start to happen, say a disk is beginning to show signs of a potential future failure, the machine will phone home and let our support center know it's in that condition, and that allows us to dispatch a technician to replace that disk before it fails. That's big data being applied on a big data machine; try doing that when you build your own cluster. On the software side, we provide a full complement of software, starting with Oracle Linux and continuing on to Cloudera's Enterprise Data Hub edition. Now, we spent a good amount of time looking at the various distributions on the market, as well as at taking it directly from Apache, and from our evaluation Cloudera offered two things: the stability of their business model and the depth of their technology, to really be here with us going forward into the future.
And last year you saw a major investment from Intel, along with others, in Cloudera. So now we have the opportunity not only to partner with Cloudera but also to partner with our long-standing partner Intel in conjunction with Cloudera. It's kind of an exciting triad that's developed here. In addition to Cloudera, we have the R distribution. If you're not familiar with R: when I went to university, if you were in a math, physics, or engineering program and you were learning statistics, you were likely to be introduced to SAS; today, if you're in university, you're going to be introduced to R. It's an open-source framework for statistical and predictive analytics, and we provide that on the box along with our NoSQL database distribution. And again, from a software point of view, a lot of these distributions come from small startup companies, and those startups are largely in the United States. So if you're an international customer and you have an issue that happens on a national holiday in the United States, or on the weekend, and you need support, who are you going to call? In our case, we make that support available to you seven days a week, 24 hours a day, 365 days a year. It's important for us to make these machines enterprise-ready and enterprise-worthy.

And again, the value goes beyond hardware and software. The machine itself is pre-configured and pre-tuned. There are dozens and dozens of Unix parameters alone that need to be tuned to get efficient use out of your Hadoop cluster, and we've done that work for you. We provide integrated management so you can manage this environment as a single entity; in fact, if you're used to using Enterprise Manager across your other Oracle properties, engineered machines and the like, you can use Enterprise Manager to check the health of this machine along with everything else. So the people in operations who are used to monitoring machines can monitor it in exactly the same way they monitor the health of every other machine they have from Oracle. But we also offer something that, in a world like Hadoop and NoSQL, is really important, and that is a single command line to do both patching and upgrades. There are hundreds and hundreds of components involved: software components, firmware components, all kinds of different Linux components, Hadoop components, all kinds of pieces that have to come together to make this thing actually work and perform. Each one of those individual software pieces, should you decide to build it yourself, issues its own independent patches and its own independent upgrades. It's really difficult for any one customer to figure out, when there's a problem, which piece is at issue, or whether they can go ahead and patch the operating system and what that would do to the NoSQL database. We've taken those problems off the table and we do it for you, all as a single entity. And we keep up the pace. One of the questions we're asked often is: if you're doing this and you're providing the upgrades, how long does it take for you to keep current with the latest Cloudera distribution?
Because this world is moving very quickly. The answer is that we generally release on the order of three to six weeks after a major release, and we've kept that track record for a number of years now; in fact, we're releasing net-new software as well as patches every quarter.

On the security side, when we talk about enterprise-ready, we've got to talk about security. What we try to do, again, is build security right into the box. We provide encryption as data lands on the box, both at rest and on the wire, we provide various authentication services as well, and we also integrate with other Oracle security services like Audit Vault. If you're looking to take Hadoop out of the laboratory and move it into real production, then you've got to have mature security that you can depend upon for all this data you're accumulating, and this is an area we've tried to focus on as well.

So, you've built a system. Let's see if I can move to the next slide. You've built a system, you've got it up and running, and believe it or not, this can happen really fast. We had an independent organization look at how long it typically takes someone to build up their own Hadoop cluster, and the answer was that by the time they decide what they need to build, buy and acquire all the various pieces, configure it together, tune it together, and actually put an application on it, it takes on the order of six months. In our case, we can literally do this in a matter of days: stand up a machine and it's ready to load data. And when it comes to loading data, we also offer a fully integrated suite of products for both data integration and data governance. Oracle has long offered various technologies in this area, and we've extended them onto the Hadoop platform as well. They run natively on Hadoop, they take native advantage of technologies like Spark and other in-memory technologies directly on Hadoop, and they provide the data cleansing and data validation that is necessary.

So we've stood up a system and we've loaded some data. What's next? The next step tends to stop a lot of people in their tracks. We've got data, lots of data, maybe coming from a number of different data sources. How do I make sense of it? For a lot of people that is a really vexing problem. So what we're offering here is something called Big Data Discovery, and the idea is that as data begins to land on the machine, we use machine learning algorithms to look at the data, profile the data, and catalog the data, and it allows a user to easily begin to put together data sets they might want to analyze, rather than having to do this all manually by writing code. It's a beautifully stunning visual product; in fact, the team calls it the visual face of Hadoop, which I thought was kind of clever.
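As a rough illustration of the kind of first-pass profiling and cataloging described here (inferring types, counting nulls and distinct values, surfacing value ranges as data lands), here is a small sketch using plain pandas as a generic stand-in. It is not the Big Data Discovery product or its API, and the sample data is invented.

```python
# Lightweight profiling a discovery tool might run as data lands: infer types,
# count nulls and distinct values, and surface ranges, so a user can assemble
# data sets without writing code first. Generic pandas stand-in only.
import pandas as pd

weblogs = pd.DataFrame({
    "user_id":  [101, 102, 101, None, 104],
    "page":     ["/home", "/sports", "/home", "/pay", "/pay"],
    "duration": [12.0, 48.5, 7.2, 3.1, 95.0],
})

profile = pd.DataFrame({
    "dtype":    weblogs.dtypes.astype(str),
    "nulls":    weblogs.isna().sum(),
    "distinct": weblogs.nunique(),
    "min":      weblogs.min(numeric_only=True),
    "max":      weblogs.max(numeric_only=True),
})
print(profile)   # one row per column: a first-pass catalog of the data set
```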
So you've got this data and it's been profiled. Now what I want to do is find some correlations. I want to be able to see the data, not by looking at long tabular reports scrolling across the screen as fast as they can go; what can I derive from that? I want to be able to see it. So here, in the same product, I get to bring up a visual interface that allows me to see and correlate various pieces of information, to begin to better understand what I could actually do: find better insights, discover new realities, or discover what the reality happens to be.

Now, I've built the system, I've loaded some data, I'm learning about the data, I've discovered some data. The next step, as Luc talked about, is being able to identify and predict the customers that might churn. For prediction and statistical analysis, we offer the Oracle R distribution. Many of you may know that R, from open source, comes out of the box running on a single node. The beauty of Hadoop is that it scales out horizontally, and by default the R distribution does not scale across multiple nodes. So we've taken the R distribution and built on top of it, so that many of the algorithms offered within R are parallelized for you, which allows it to scale out and take full advantage of the Hadoop system. But not only can we run R on Hadoop directly; we've also built R directly into the database. So you can build a model on your Hadoop system, and then when you want to test it, you can essentially put it right into the database to test it; you don't have to rewrite it. Years ago, when I was at a startup, we used R to do our predictive modeling, but when we went to operationalize it, we had to take the R model and rewrite it in code in order to actually get it to scale. Here you don't have to do that: you can get it to scale on Hadoop, and you can get it to scale within the database. And in addition to R, we also offer SQL-based data mining. This has been used successfully by many companies to build data mining algorithms directly into their applications; in fact, if you're a user of Oracle's applications, you'll see any number of cases on the sales-predictive side of our CRM applications where this technology is built directly into the application, a SQL-based technology. So: the ability to discover, to model, and to predict. Just moving on; next slide, please.
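For readers who want something concrete, here is a generic churn-model sketch in Python with scikit-learn, standing in for the kind of behavioral prediction model Luc described and for the train-once, score-at-scale idea above. The features and data are synthetic and the library choice is only illustrative; this is not Oracle R Enterprise, the parallelized R algorithms on Hadoop, or the customer's actual model.

```python
# Generic churn-model sketch: train a classifier once on behavioral features,
# evaluate it on held-out data, then score customers to decide who should get
# a retention offer. Illustrative only; synthetic data, generic library.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: site visits per week, days since last payment.
X = rng.normal(size=(1000, 2))
# Synthetic label: few visits and late payments tend to mean churn.
y = ((-1.5 * X[:, 0] + 1.0 * X[:, 1]
      + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)         # build the model once

print("holdout accuracy:", model.score(X_test, y_test))    # evaluate it
at_risk = model.predict_proba(X_test)[:, 1] > 0.8           # score for offers
print("customers flagged for a retention offer:", int(at_risk.sum()))
```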
Okay, so we've talked a lot about Hadoop and we've talked about some of the challenges. Now, one of the biggest challenges in data management is that, frankly, life used to be a lot simpler. When I began at Oracle (I like to tell people I started at Oracle before SQL was a standard; you figure that out), there were relational databases and there was SQL, and SQL was the language you used to talk to relational databases, and life was good. Over the years SQL has been dramatically enhanced and we've added additional capabilities to the language. A lot of people think of SQL as just a language for talking to structured information; that's actually not true. Years ago we began to expand the SQL language, and not just Oracle but the industry itself, to be able to talk to unstructured and semi-structured information, XML information, and in the latest release of the Oracle Database, 12c, we can now talk directly to JSON documents in the database itself. So SQL is not simply a language for structured information; it can talk to both structured and unstructured data.

Okay, so we've got SQL for relational; we've got it. But now we have these other technologies, Hadoop and NoSQL. If you could advance, please. Thank you. In Hadoop, classically, how do you actually talk to Hadoop? In the beginning, MapReduce was probably the programmatic framework most associated with Hadoop itself. It's a programming framework, so in order to get data out of Hadoop, to query information, you have to know how to write MapReduce. Today there are a lot of other ways to get information out of Hadoop, other programmatic interfaces, but the key is that they're programmatic interfaces, and what this ends up leading to is, yet again, silos of information: information in Hadoop, information in NoSQL, information in relational databases. What we want to do is bridge that. Now, over the last six to eight months we've seen something interesting happen in the industry: all of a sudden there's a huge resurgence of interest in SQL. Of course, from our perspective it never went away. I tell my son, look, SQL's cool again. He looks at me over the computer screen and says, you know, Dad, you were never cool. That's true, but SQL is cool again. So now what we need to do is extend it over there. We're seeing the emergence of SQL-based query engines running on top of NoSQL; you can get an Apache project that will run SQL over Cassandra, Mongo is offering something similar, and you can also find SQL engines that run on Hadoop. But those SQL engines, first and foremost, only run on top of that one framework, so if you run that SQL engine you're only going to get information from that one data source: only from Hadoop, or only from NoSQL. What you won't have is a bridge across them. In fact, it's ironic that if you go back only a few years, when we said SQL we generally assumed it reached a certain level of capability; the lowest level we used to talk about was SQL-92, and that's long since been surpassed. What's interesting about these new arrivals on the NoSQL and Hadoop platforms is that while they say they're SQL-based, and they are, they implement a radical subset of the SQL language, so you can't express in that SQL dialect the kind of power you're used to with a modern SQL interface. So what did Oracle do?
We're providing, as Larry talked about, Big Data SQL, which bridges across these three divides. It allows you, in one query, one fast query, the same query you use today, to span all of them. If you've got BI tools and applications written on top of SQL today that you're running against an Oracle database, and now you're putting data in Hadoop and NoSQL, those tools and those applications will run, out of the box and without any changes, directly across these new technologies as well, and they're going to do it fast. So let's look at a use case. In this particular example, we've got customer information in the Oracle database; it's pretty typical, information about customers, transactions, their previous behaviors and so forth, and we've got web logs going into the Hadoop system. The objective here is not to move all the information from the customer databases over to Hadoop so that you can query it together, nor do we want to move all the information from Hadoop, these new data sources, over to the relational engine; the volumes are just too big. We want to leave it where it is, and we want to query it in place. We're using a technology that came originally from the Exadata machine: Smart Scan. What Smart Scan does on Exadata is push the predicates, push the filtering, down toward the storage layer; it takes what is a large amount of information and pulls back only a small, filtered amount to feed into memory and be processed by the CPUs. We took that same innovation, and that's what inspired Big Data SQL, and moved it over to Hadoop and NoSQL. So when you execute that query, we automatically figure out where the information lies, whether it's in Hadoop, in NoSQL, or in relational, we parse out the query, and we do the filtering and the predicate pushdown very close to the data nodes on the Hadoop system as well as on the relational system. Because we're able to do that and we're processing far less information, it goes a whole lot faster. One fast query over all your data. And it's not just the query itself: the same database security policies you have in place, which may include advanced things like redaction, are now applied across NoSQL and Hadoop as well.
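The core idea behind Smart Scan and the predicate pushdown described here can be seen in a tiny, purely conceptual sketch: evaluate the filter next to where the data lives and ship back only the matching rows, instead of moving the whole data set to the query engine first. The data and function names below are made up; this is not how Big Data SQL or Exadata is actually implemented.

```python
# Conceptual sketch of predicate pushdown: filter at the source and ship back
# only matching rows, rather than moving the whole data set before filtering.
# Purely illustrative; not the Big Data SQL or Exadata implementation.

def naive_scan(remote_rows, predicate):
    shipped = list(remote_rows)                           # move everything, then filter
    return [r for r in shipped if predicate(r)], len(shipped)

def pushdown_scan(remote_rows, predicate):
    shipped = [r for r in remote_rows if predicate(r)]    # filter at the source
    return shipped, len(shipped)

weblogs = [{"user_id": i, "page": "/pay" if i % 50 == 0 else "/home"}
           for i in range(100_000)]                       # "Hadoop-side" web logs
wants_payment_page = lambda row: row["page"] == "/pay"

hits, moved = naive_scan(weblogs, wants_payment_page)
hits2, moved2 = pushdown_scan(weblogs, wants_payment_page)
assert hits == hits2
print(f"rows shipped without pushdown: {moved}, with pushdown: {moved2}")
# The result is identical, but far less data crosses the network, which is
# why the same query runs much faster when the filtering is pushed down.
```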
Okay, so to sum it all up: Larry began his conversation talking about that remarkably lower initial cost of ownership, and it's absolutely true here as well. The initial purchase price of a Big Data Appliance is less than what you can build it for yourself, as he says, even without all the gravy, and when you look at the price over a three-year period, it gets even better. And it's not just the price, either initially or over a longer period of time, but also the gravy, all the extra stuff we built in: the time to get one of these systems up and running, the time to start providing value back to the business, is cut by a dramatic amount. That's what we need to be doing today in information technology: delivering benefits to the business faster. So, lower initial price, lower cost, and dramatically faster time to benefit.

Just from a summary point of view: big data's last name is data, and data is what we've been all about from the very beginning. We have a history of taking disruptive technology, whether that disruptive technology came in the form of minicomputers, client/server, the internet, and now cloud and big data, and turning it into enterprise productivity and value for the business. That's exactly what we're doing with our engineered systems and the Big Data Appliance. And with that, I thank you, and have a good afternoon.