Earlier this week, Oracle announced the new X9M generation of Exadata platforms for its Cloud at Customer and legacy on-prem deployments. The company also made some enhancements to its Zero Data Loss Recovery Appliance, ZDLRA, something we've covered quite often since its announcement. We had a video exclusive with Juan Loaiza, the executive vice president of mission critical database technologies at Oracle, on the day of the announcement and got his take on it. And I asked Oracle, hey, can we get some subject matter experts, some technical gurus, to dig deeper and get more details on the architecture? Because we want to better understand some of the performance claims that Oracle is making. With me today is Subhan Raghunathan, vice president of product management for Exadata Database Machine; Bob Thome, vice president of product management for Exadata Cloud at Customer; and Tim Chien, senior director of product management for ZDLRA. Folks, welcome to this power panel and welcome to theCUBE.

Thank you, Dave. Great to be here.

Subhan, can we start with you? Juan and I talked about the X9M that Oracle just launched a couple of days ago. Maybe you could give us a recap. What do we need to know? I'm especially interested in the big numbers once more, so we can understand the claims you're making around this announcement and then dig into them.

Absolutely, Dave, very excited to do that. In a nutshell, we have the world's fastest database machine for both OLTP and analytics, and we made that even faster. Not just simply faster: for OLTP, we made it 70% faster, and we took the read IOPS all the way up to 27.6 million. And mind you, this is being measured at the SQL layer. For analytics, we did pretty much the same thing, an 87% increase, and we broke through the one terabyte per second barrier. Absolutely phenomenal stuff.
Now, while all those numbers by themselves are fascinating, here's something that's even more fascinating in my mind. 80% of the product development work for Exadata X9M was done during COVID, which means all of us were remote. And what that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams, the works. I mean, everybody coming together as one to deliver this product. Kudos to everybody who touched this product in one way or the other. Extremely proud of it.

Yeah, thank you for making that point. I'm laughing because you already held the world record for mission critical OLTP performance, and now you're adding on top of that. But okay, there are customers that still build their own. They're trying to build their own Exadata. What they do is buy their own servers, storage and networking components. When I talk to them, they'll say they want to maintain their independence. They don't want to get locked into Oracle, or maybe they believe it's cheaper. Maybe they're focused on the CAPEX, the CFO has them in a headlock. Or sometimes they say they want a platform that can support horizontal apps, maybe non-Oracle stuff. Or maybe they're just trying to preserve their jobs, I don't know. But why shouldn't these customers roll their own, and why can't they get similar results just using standard off-the-shelf technologies?

Great question. It's going to require a somewhat involved answer, but let's just look at the statistics to begin with. Oracle's Exadata was first productized and delivered to the market in 2008, and at that point in time we already had industry leadership across a number of metrics. Today, we are at the 11th generation of Exadata and we are far ahead of the competition, like 50X faster, 100X faster. I mean, we are talking orders of magnitude faster.
How did we achieve this? I think the answer to your question lies in what we are doing at the engineering level to make these magical numbers come to the fore. First, it starts with the hardware. Oracle has its own hardware server design team, where we embed capabilities for performance, reliability, security and scalability down at the hardware level. The database, which is a user-level process, talks to the hardware directly. The only reason we can do this is because we own the source code for pretty much everything in between, starting with the database, going into the operating system, the hypervisor and, as I just mentioned, the hardware. And we also work with the firmware elements across this entire stack. The key to making Exadata the best Oracle database machine lies in that engineering, where we take the operating system and make it fit tongue and groove with the hardware, and then do the same with the database. And because we have deep insight into the workloads running at any given point in time on the compute side of Exadata, we can micromanage at the software layers how traffic flows through the entire system: prioritize OLTP transactions on a specific queue on the RDMA over Converged Ethernet fabric, do a smart scan, use the compute elements in the storage tier to offload SQL processing, take columnar formats of data and extend them into flash, just a whole bunch of things we've been doing over the last 12 years because we have this deep engineering. You can try to cobble together a system that sort of looks like an Exadata, with a network, a storage tier and a compute tier, but you're not going to achieve anything close to what we are doing.
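The smart scan idea mentioned above, pushing SQL filtering down into the storage tier so only matching rows cross the fabric, can be illustrated with a small sketch. This is a toy model, not Oracle's implementation; the table, block layout and predicate are made up for illustration.

```python
# Illustrative sketch of predicate offload ("smart scan"): the storage
# tier evaluates the filter and returns only matching rows, instead of
# shipping every block to the compute tier.

def storage_scan(blocks, predicate):
    """Simulated storage cell: filter rows locally, return only matches."""
    return [row for block in blocks for row in block if predicate(row)]

def naive_scan(blocks):
    """Conventional model: ship every row to compute, filter there."""
    return [row for block in blocks for row in block]

# Hypothetical table: 4 blocks of (order_id, amount) rows.
blocks = [[(i, i * 10) for i in range(b * 100, (b + 1) * 100)]
          for b in range(4)]

offloaded = storage_scan(blocks, lambda r: r[1] > 3500)  # filtered at "storage"
shipped = naive_scan(blocks)                             # everything crosses the fabric

print(f"rows over the wire: offloaded={len(offloaded)}, naive={len(shipped)}")
```

The win scales with the selectivity of the predicate: the less of the table qualifies, the less data has to move.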
The biggest deal in my mind, apart from the performance and the high availability, is the security, because we are testing the stack top to bottom. When you're trying to build your own best-of-breed kind of stuff, you're not going to be able to do that, because it depends on Red Hat to do something, HP to do something else, Dell to do something else, and a Brocade switch to do something else. It's not possible. We can do this. We've done it. We've proven it. We've delivered it for over a decade. End of story, as far as I'm concerned.

Thank you for that. You know, I remember when Oracle purchased Sun, and I know a big part of that purchase was to get Java, but I remember saying at the time it was a brilliant acquisition. I was looking at it from a financial standpoint. I think you paid seven and a half billion for it, and once Safra was able to get back to pre-acquisition margins, you got the Oracle uplift in terms of revenue multiple. So from that standpoint, it was a no-brainer. The other thing is, back in the Unix days, HP-Oracle was the standard in terms of all the benchmarks and performance. But even though you worked closely with HP to get the stuff to work together, to make sure it could recover according to your standards, you couldn't actually do that deep engineering you just described. Now, earlier, Subhan, you stated that with the X9M you get OLTP read IOPS at 27.6 million, and 19 microseconds latency. Pretty impressive numbers, and you kind of just went there. But how are you measuring these numbers versus other performance claims from your competitors? Are you stacking the deck? Can you share that with us?

Sure. So the short answer is we are measuring it at the SQL layer, right?
This is not some kind of an IO meter or a micro-benchmark that's looking at just the flash subsystem or just the persistent memory subsystem. This is measured at the compute node, running an entire set of transactions and seeing how many times you can complete that. That's how it's being measured. Now, most people cannot measure it like that because of the number of disparate vendors involved in their particular solution. You've got servers from vendor A, storage from vendor B, the storage network from vendor C, the operating system from vendor D. How do you tune all of these things on your own? You cannot. There are only certain bells and whistles and knobs available for you to tune. So that's how we are measuring: the 19 microseconds is at the SQL layer. What that means is this: a real-world customer running a real-world workload is guaranteed to get that kind of latency. None of the other suppliers can make that claim. This is real-world capability.

Now, let's take a look at that 19 microseconds. We boast and we say, hey, we are an order of magnitude, two orders of magnitude faster than everybody else when it comes down to latency. And one might think this is voodoo or magic. While it is magical, the magic is really grounded in deep engineering and deep physics and science. The way we implement this is, first of all, we put the persistent memory tier in the storage, so it's shared across all of the database instances running on the compute tier. Then we have this ultra-fast 100 gigabit RDMA over Converged Ethernet fabric. With this, at the hardware level, between two network interface cards resident on that fabric, we create paths that enable high-priority, low-latency communication between any two endpoints on that fabric.
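To see why shaving operating-system overhead matters at this scale, here is a back-of-the-envelope latency budget. Every number below is an assumed, rounded figure for illustration only, not an Oracle measurement; the point is that at microsecond scale, syscalls, interrupts and context switches dominate, so a hardware-to-hardware path wins.

```python
# Toy latency budget (assumed numbers) contrasting a kernel-mediated IO
# path with a direct RDMA-to-persistent-memory path.

kernel_io_path = {
    "syscall entry/exit": 1.0,    # microseconds, assumed
    "interrupt handling": 5.0,
    "context switches": 8.0,
    "buffer copies": 4.0,
    "nic + media access": 10.0,
}

rdma_pmem_path = {
    "nic-to-nic transfer": 16.0,  # hardware path over the RoCE fabric
    "pmem access": 3.0,           # memory-bus read in the storage tier
}

kernel_total = sum(kernel_io_path.values())
rdma_total = sum(rdma_pmem_path.values())
print(f"kernel-mediated: {kernel_total} us, rdma-to-pmem: {rdma_total} us")
```

With these illustrative components, the OS-mediated path costs more than the entire direct path, which is the intuition behind the sub-20-microsecond reads described in the interview.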
And then, given that we implemented persistent memory in the storage tier, sitting on the memory bus of the processor there, we can perform a remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier without the involvement of the operating system on either end. No context switches, no interrupt processing latencies, none of that. So it's hardware-to-hardware communication with security built in, which is immutable. All of this is built into the hardware itself, so there's no software involved. You perform a read, the data comes back in 19 microseconds. Boom, end of story.

Yeah, and that's key to my next topic, which is security, because very often, if I can get access to the OS as a hacker, I can get privileges and really take advantage of that. But before I go there: Oracle says a huge percentage, I think 87%, of the Fortune 100 companies run their mission critical workloads on Exadata. So that's not only important to the companies; they're serving consumers, me, right? I'm going to my ATM or I'm swiping my credit card. And Juan mentioned that you use a layered security model. I sort of inferred that having this stuff in hardware, without involving access to the OS, actually contributes to better security. But can you describe this in a bit more detail?

Sure. So yeah, what Juan was talking about was this layered security. Said differently, it is defense in depth, and that's been our mantra and philosophy for several years now. So what does that entail? As I mentioned earlier, we design our own servers. We do this for performance, and we also do it for security. We've got a number of features built into the hardware that make sure we've got immutable areas of firmware. Let me give you an example.
If you take an Oracle x86 server, just a standard x86 server, not even configured as an Exadata system: even if you had superuser privileges on top of the operating system, you cannot modify the BIOS as a superuser. That has to be done through the system management network. So we put gates and protection modes, et cetera, right into the hardware itself. Now, of course, the security of that hardware goes all the way back to the fact that we own the design. We've got a global supply chain, but we make sure that supply chain is protected and monitored, and we also protect the last mile of the supply chain: we can detect if any tampering with the firmware occurred while the hardware shipped from our factory to the customer's dock. So we know the moment it comes up on the customer end if something's been tampered with.

So that's the hardware. Let's take a look at the operating system. We own Oracle Linux, the entire source code, and what ships on Exadata is the Unbreakable Enterprise Kernel. The kernel and the operating system itself have been slimmed down, eliminating all unnecessary packages from the operating system bundle when we deliver it in the form of Exadata. Let's put some real numbers on that. A standard Oracle Linux, or a standard Linux distribution, has about 5,000-plus packages. These include things like print servers and web servers, a whole bunch of stuff you're absolutely not going to use on Exadata. Why ship those? Because the moment you ship more stuff than you need, you are increasing the target that attackers can get to. So on Exadata, there are only 701 packages. Compare this: 5,413 packages on a standard Linux, 701 on Exadata. We've reduced the attack surface.
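The package-count comparison above can be expressed as simple arithmetic, which makes the size of the attack-surface reduction concrete:

```python
# Attack-surface reduction implied by the package counts quoted above.
standard_packages = 5413   # typical full Linux distribution
exadata_packages = 701     # packages shipped on Exadata

reduction = 1 - exadata_packages / standard_packages
print(f"attack surface reduced by {reduction:.0%}")  # about 87%
```

Fewer installed packages means fewer daemons listening, fewer setuid binaries, and fewer CVEs that apply to the box at all.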
Another aspect: when we do our own STIG SCAP benchmarking, if you take a standard Linux and run that SCAP benchmark, you'll get about a 30% pass score. On Exadata, it's 90-plus percent. Which means we are doing the heavy lifting of the security checks on the operating system before it even goes out the factory. And then you layer on Oracle Database: transparent data encryption, all kinds of protection capabilities, data redaction, authentication on a per-user basis, being able to log it, track it, determine who accessed the system when and log that. So it's basically defense at every single layer. And then, of course, the customer's responsibility doesn't stop at getting this highly secure environment. They have to do their own job of securing their network perimeter, controlling who has physical access to the system, and everything else. It's a joint responsibility. And as you mentioned, you as a consumer going to an ATM and withdrawing money: you withdraw 200, you don't want to see 5,000 deducted from your account. All of this is made possible with Exadata and the amount of security focus we have on the system.

Yeah, and the bank doesn't want to see it the other way. So we're geeking out here on theCUBE, but I've got one more question for you. Sure. Juan talked about X9M as the best system for database consolidation, since it's built to handle OLTP, analytics, et cetera. I want to push you a little bit on this, because I can make an argument that this is kind of a Swiss army knife versus the best screwdriver or the best knife. How do you respond to that concern, and how do you respond to the concern that you've put too many eggs in one basket? What do you tell people who fear that consolidating workloads to save money also expands the blast radius? Isn't that a problem?

Very good question, Dave. So yes, this is an interesting problem.
And it is a balancing act, as you correctly pointed out. You want the economies of scale you get when you consolidate more and more databases, but at the same time, when something happens, when hardware fails or there's an attack, you want to make sure you have business continuity. So what we're doing on Exadata: first of all, as I mentioned, we design our own hardware and build reliability into the system. At the hardware layer, that means redundancy: redundant fans, power supplies; we even have the ability to isolate faulty cores on the processor. And there's a tremendous amount of sweeping going on by the system management stack, looking for problem areas and trying to contain them as much as possible within the hardware itself. Then you take it up to the software layer, which uses that reliability to build high availability. What that implies, and this is fundamental to the Exadata architecture, is the entire scale-out model. On our base system, you cannot go smaller than two database nodes and three storage cells. Why is that? Because you want high availability of your database instances. If something happens to one server, hardware, software, whatever, you've got another server ready to take on that load, and with Real Application Clusters you can switch over between the two. Why three storage cells? We want to make sure you have duplicate copies of data: you want at least one additional copy in case something happens to the disk holding the only copy. The reason we have three is so you can stripe data across three different servers and deliver high availability. Now you take that up to the rack level, and a lot more comes into play.
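The two-copies-across-three-cells reasoning above can be sketched in a few lines. This is a deliberately minimal model, not Oracle ASM's actual placement algorithm: it just round-robins a primary and a mirror copy of each extent onto distinct cells and checks that losing any single cell never loses data.

```python
# Minimal mirroring sketch: every extent gets two copies on different
# storage cells, so any single cell can fail without data loss.

def place_extents(n_extents, n_cells=3):
    """Round-robin a primary and a mirror copy onto distinct cells."""
    placement = []
    for e in range(n_extents):
        primary = e % n_cells
        mirror = (e + 1) % n_cells  # always a different cell than primary
        placement.append((primary, mirror))
    return placement

def survives_cell_loss(placement, failed_cell):
    """True if every extent keeps at least one copy on a surviving cell."""
    return all(p != failed_cell or m != failed_cell for p, m in placement)

layout = place_extents(12)
ok = all(survives_cell_loss(layout, cell) for cell in range(3))
print("data survives any single cell failure:", ok)
```

With only two cells, a mirror could be forced onto the same cell during a rebalance after a failure; three cells is the smallest count where redundancy can always be maintained, which matches the base-system minimum described above.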
Now, when you're really talking about the blast radius, you want to make sure that if something physically happens to a data center, you have infrastructure available so business continuity is maintained, which is why we have the Maximum Availability Architecture. Components like GoldenGate and Active Data Guard, and other ways to keep two distant systems in sync, are extremely critical for delivering the high-availability paths that make the whole equation, how many eggs in one basket versus containing the blast radius, a lot easier to grapple with, because business continuity is paramount to us. I mean, Oracle the enterprise runs on Exadata. Our high-value cloud customers are running on Exadata, and I'm sure Bob's going to talk a lot more about the cloud piece of it. So I think we have all the tools in place to go after that optimization of how many eggs in one basket versus blast radius. It's a question of working through the solution and the criticalities of that particular instance.

Okay, great. Thank you for that detail, Subhan. We're going to give you a break; go take a breath, get a drink of water. Maybe we'll come back to you if we have time. Let's go to Bob. Bob Thome, Exadata Cloud at Customer X9M launched earlier this week. Juan said, and it's kind of cocky, that it's not even worth bothering to compare Exadata Cloud at Customer against Outposts or Azure Stack. Can you elaborate on why that is?

Sure. First of all, I want to say I love AWS Outposts. You know why? It affirms everything we've been doing for the past four and a half years with Cloud at Customer. It affirms that running cloud services in customers' data centers is a large and important market.
Large and important enough that AWS felt the need to provide these customers with an AWS option, even if it only supports a sliver of the functionality they provide in the public cloud. And that's what they're doing: they're giving them the sliver, and they're not exactly leading with the best they could offer. So for that reason alone, there's really nothing to compare. We give them the benefit of the doubt and actually compare against their public cloud solutions.

Another point: most customers looking to deploy Oracle Cloud at Customer are looking for a performant, scalable, secure and highly available platform for their most critical databases, which most often are Oracle databases. Does Outposts run Oracle Database? No. Does Outposts run a comparable database? Not really. Does Outposts run Amazon's top OLTP and analytics database services, the ones that are tops in their public cloud? No. So we couldn't find anything that runs on Outposts worth comparing against Exadata Cloud at Customer, which is why the comparisons are against the public cloud products. And even then, we're looking at numbers like 50 times, 100 times slower, right?

So then there's Azure Stack. One of the key benefits that customers love about the cloud, and I think it's really underappreciated, is that it's a single-vendor solution. You have a problem with a cloud service, could be IaaS, could be PaaS, doesn't matter, and there's a single vendor responsible for fixing your issue. Azure Stack misses big here because it's a multi-vendor cloud solution. And like AWS Outposts, they don't exactly offer the same services on-prem that they offer in the public cloud. From what I hear, it can be a management nightmare, requiring specialized administrators to keep that beast running.

Okay. So, well, thanks for that.
I'll grant you, first of all, that Oracle was first with that same-same vision. I always tell people, when they say, well, we were first, I'm like, well, actually, no, Oracle was first. Having said that, Bob, I hear you that right now Outposts is a 1.0 version. It doesn't have all the bells and whistles, but neither did your cloud when you first launched it. So let's let it bake for a while, and we'll come back in a couple of years and see how things compare. If you're up for it, I am.

Just remember that we're still in the oven too, right? We're going to be building on that lead.

Okay. All right. Good. I love it. I love the chutzpah. Juan also talked about Deutsche Bank. I saw that Deutsche Bank announcement: they're working with Oracle, modernizing their infrastructure around database, building other services around that, and kind of building their own version of a cloud for their customers. How does Exadata Cloud at Customer fit into that whole Deutsche Bank deal? Is this solution unique to Deutsche Bank? Do you see other organizations adopting Cloud at Customer for similar reasons and use cases?

Yeah, I'll start with that. First, I want to say that I don't think Deutsche Bank is unique. They want what all customers want. They want to run their most important workloads, the ones running today in their data center on Exadata and on other high-end systems, in a cloud environment where they can benefit from things like cloud economics, cloud operations and cloud automation. But they can't move to the public cloud. They need to maintain the service levels, the performance, the scalability, the security and the availability their business has come to depend on. Most clouds can't provide that, although Oracle's public cloud actually can, because our public cloud does run Exadata.
But still, even with that, they can't do it, because as a bank they're subject to lots of rules and regulations. They cannot move their 40 petabytes of data outside the control of their data center. They have thousands of interconnected databases and applications, like a rat's nest, right? And many large customers have this same problem. How do you move that to the cloud? You could move piecemeal: I'm going to move these apps and not those apps. But then you end up with some pieces up here and some pieces down there, and the thing just dies because the long latency over a WAN connection just doesn't work, right? Or you could shut it all down: let's shut it down on Friday and move everything at once. Unfortunately, with estates the size most customers have, you're not going to be able to; you'd be down for a month. Who can tolerate that? So it's a big challenge, and Exadata Cloud at Customer lets them move to the cloud without losing control of their data and without having to untangle those thousands of interconnected databases. That's why these customers are choosing Exadata Cloud at Customer.

More importantly, it sets them up for the future. With Exadata Cloud at Customer, they can run not just in their data center but also in public-cloud-adjacent sites, giving them a path to moving some work out of the data center and ultimately into the public cloud. As I said, they're not unique. Other banks are watching and some are acting, and it's not just banks. Just last week Telefonica, a telco in Spain, announced their intent to migrate the bulk of their Oracle databases to Exadata Cloud at Customer. This will be the key cloud platform running in their data center to support both new services and mission critical operational systems. And one last important point: Exadata Cloud at Customer can also run Autonomous Database.
Even if customers aren't ready to adopt it today, a lot of them are interested in it. They see it as a key piece of the puzzle going forward, and they know they can easily start migrating work to Autonomous in the future as they're ready. And that, of course, is going to drive additional efficiencies and additional cost savings.

So Bob, I've got a question for you, because Oracle's playing both sides, right? You've got a true public cloud now, and obviously you have a huge on-premises estate. When I talk to companies that don't own a cloud, whether it's Dell or HPE, Cisco, et cetera, they make the point, and I agree with them by the way, that the world is hybrid; not everything's going into the cloud. However, I have a lot of respect for the folks at Amazon as well, and they believe, they're on record saying this, that long-term, ultimately all workloads are going to run in the cloud. Now, I guess it depends on how you define the cloud; the cloud is expanding and all that. But my question to you, because you're kind of on both sides here: are hybrid solutions like Cloud at Customer a stepping stone to the cloud, or is cloud in your data center a continuous, permanent, essential play?

That's a great question. And as I recall, people debated this a few years back when we first introduced Cloud at Customer. At that point, some people, I'm talking about even inside Oracle, saw it as a stop-gap measure to let people leverage cloud benefits until they're really ready for the public cloud. But over the past four and a half years, the thinking has changed a little bit, and everyone kind of agrees that Cloud at Customer may be a stepping stone for some customers, but others see it as the end game, right?
Not every workload can run in the public cloud, at least not given today's regulations and the issues faced by many of these regulated industries. These industries move very, very slowly, and customers are content to, and in many cases required to, retain complete control of their data. They'll be running with that data under their control in the data center for the foreseeable future.

All right, I've got another question for you, kind of a tangent. The other thing I hear from the on-prem, don't-own-a-cloud folks is that it's actually cheaper to run on-prem because they're getting better at automation, et cetera. You get the exact opposite from the cloud guys; they roll their eyes: are you kidding me? It's way cheaper to run in the cloud. Which is more cost-effective? Is it one of those "it depends", Bob?

You know, the great thing about numbers is you can kind of twist them to show anything you want, right? Give me a spreadsheet and I can sell you on anything. There are customers who look at it and say on-prem is cheaper, and customers who look at it and say the cloud is cheaper. There are a lot of ways you realize savings in the cloud, and a lot of it has to do with cloud economics: the ability to pay for what you're using, and only what you're using. On-prem, you size for your peak workload, and you probably put a little bit of a buffer on top of that, right? If you size everything for that, you're paying for peak workload all the time. With the cloud, of course, we support scaling up and scaling down, and you pay for what you use. That's where the big savings is.
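The peak-sizing point above is easy to make concrete with a toy cost model. The demand profile and the unit rate below are assumptions chosen for illustration, not real Oracle pricing: one quiet day with a four-hour peak.

```python
# Toy model of peak-provisioned cost vs pay-for-use cost.

# Assumed demand: 20 quiet hours at 8 cores, a 4-hour peak at 32 cores.
hourly_demand_cores = [8] * 20 + [32] * 4
rate_per_core_hour = 1.0  # assumed unit price

# On-prem style: provision for the peak, pay for it every hour.
peak_sized_cost = max(hourly_demand_cores) * len(hourly_demand_cores) * rate_per_core_hour

# Cloud style: scale up and down, pay only for what each hour uses.
pay_per_use_cost = sum(hourly_demand_cores) * rate_per_core_hour

print(f"peak-sized: {peak_sized_cost}, pay-per-use: {pay_per_use_cost}")
```

The spikier the workload, the bigger the gap; a perfectly flat workload is the one case where the two models converge, which is exactly the "very stable workload" caveat Bob makes next.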
Now, there are also additional savings associated with cloud vendors like Oracle managing that infrastructure for you. You no longer have to worry about it. We have a lot of automation for things you probably used to spend hours and hours, or years, scripting yourself. We have UIs that make ad hoc tasks as simple as point and click. That eliminates errors, and while it's often difficult to put a cost on those things, I think the more enlightened customers can put a cost on all of that. So the people saying it's cheaper to run on-prem either have a very stable workload that never changes, in an environment that never changes, or, more likely, they just haven't thought through all the hidden costs out there.

All right, you've got some new features in Cloud at Customer, and thank you for that, by the way. What are those? Do I have to upgrade to X9M to get them?

All right. So we're always introducing new features for Cloud at Customer, but two significant things we've rolled out recently are Operator Access Control and elastic storage expansion. As we've discussed, many organizations are adopting X9M Cloud at Customer. They're attracted to the cloud economics and the operational benefits, but they're required by regulations to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With Operator Access Control enabled, cloud operations staff members must request access to a customer system. The customer IT team grants a designated person specific access to a specific component, for a specific period of time, with specific privileges. They can then view audit logs in real time, and if they see something they don't like, you know, hey, what's this guy doing?
It looks like he's stealing my data or doing something I don't like. Boom, they can kill that operator's access, the session, the connections, everything, right away. And this gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud at Customer service.

The other new thing is elastic storage expansion. Customers can now add additional storage servers to their system, either at initial deployment or after the fact. This provides two important benefits. The first is that they can right-size their configuration. If they need only minimal compute capacity but a lot of storage, they no longer have to subscribe to a fixed shape, as we used to have, with hundreds of unnecessary database cores just to get the storage capacity. They can select a smaller system and then incrementally add storage. The second benefit is key for many customers: when you're out of storage, guess what, you can add more. And when you're out of storage, that's really important. Now, to the last part of your question: do you need a new Exadata Cloud at Customer X9M system to get these features? No, they're available for all Gen 2 Exadata Cloud at Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better and better. And unless there's some technical limitation, which is rare, most new features are available even for the oldest Cloud at Customer systems.

Yeah, that's cool, and you can bring that to existing systems. My last question for you, Bob, is another one on security. Obviously, we talked to Subhan about this, and it's a big deal. How can customer data be secure if it's in the cloud, if somebody other than their own vetted employees is managing the underlying infrastructure?
Is that a concern you hear a lot, and how do you handle it? You know, it's always something, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of stuff. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure that their clouds are secure, something that only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire that kind of expertise, so you're better off being in the cloud. Customers running in the Oracle Cloud can also use Oracle's Data Safe tool, which we provide, which lets you inspect your databases and ensure that everything is locked down and your data is secure. But your question is actually a little bit different. It was about potential internal threats to a company's data, given that the cloud vendor's employees, not the customer's, have access to the infrastructure that sits beneath the databases. And really, the first and most important thing we do to protect customer data is encrypt the database by default. Subhan listed a whole laundry list of things, but that's the one thing I want to point out: we encrypt your database. Yes, it sits on our infrastructure. Yes, our operations people can see those data files sitting on the infrastructure. But guess what, they can't see the data. The data is encrypted. All they see is a big encrypted blob, so they can't access the data themselves. And, as you'd expect, we have very tight controls over operations access to the infrastructure. Operators need to securely log in using mechanisms like YubiKeys to prevent unauthorized access, and all access is logged and suspicious activity is investigated.
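The "big encrypted blob" point can be sketched in a few lines of Python. To be clear, this is a toy keystream cipher for illustration only, not Oracle's Transparent Data Encryption, and the key, nonce, and data are all made up; it just shows why an operator with raw file access still can't read the contents:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic keystream by hashing key+nonce+counter (toy construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice round-trips (symmetric)."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"customer rows the operator must never see"
key = b"tenant-held-master-key"   # stays under the customer's control
nonce = b"datafile-001"

# What actually sits on the shared infrastructure:
ciphertext = xor_crypt(key, nonce, plaintext)
assert ciphertext != plaintext            # operator sees only an opaque blob

# Only the key holder can reverse it:
assert xor_crypt(key, nonce, ciphertext) == plaintext
```

The design point is that the storage layer never needs the key: encryption and decryption happen above it, so infrastructure access and data access stay separate concerns.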
But that still may not be enough for some customers, especially the ones I mentioned earlier in regulated industries, and that's why we offer Operator Access Control, as I mentioned. That gives customers complete control over access to the infrastructure: the when, what operators can do, and how long they can do it. Customers can monitor in real time, and if they see something they don't like, they can stop it immediately. Lastly, I just want to mention Oracle's Database Vault feature. This prevents administrators from accessing data, protecting it from rogue operators, whether they're from Oracle or from the customer's own IT staff. This database option, Database Vault, is included with the license-included service on Exadata Cloud at Customer. So basically, you get it with the service. Got it. Bob, thank you so much; that's unbelievable. I mean, we've got a lot to unpack there, but we're going to give you a break now and go to Tim. Tim Chin, zero data loss recovery appliance. We always love that name; we think the big guy named it, but nobody will tell us. But we've been talking about security, and there's been a lot of news around ransomware attacks in every industry around the globe. Any knucklehead with a high school diploma can become a ransomware attacker: go on the dark web, get ransomware as a service, stick it in somewhere, take a piece of the vig, and hopefully get arrested. When you think about databases, how do you deal with the ransomware challenge? Yeah, Dave, that's an extremely important and timely question, and we are hearing this from our customers. We used to just talk about HA and backup strategies, and now ransomware has been coming up more and more. And the unfortunate thing is that these ransoms are actually paid in the hope of regaining the ability to access the data. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner.
And so you have to pay the ransom to get even the hope of getting the data back. Now, for databases, this can have a huge impact, because we're talking about transactional workloads. Even a compromise of just a few minutes, a blip, can affect hundreds or even thousands of transactions. This can literally represent hundreds of lost orders if you're a big manufacturing company, or millions of dollars' worth of financial transactions at a bank. And that's why protecting databases at a transaction level is especially critical for ransomware, and that's a huge contrast to traditional backup approaches. Okay, so how do you approach that? What do you do specifically for ransomware protection for the database? Yeah, so we have the zero data loss recovery appliance, for which we announced the X9M generation. It is really the only solution in the market that offers transaction-level protection, which allows all transactions to be recovered with zero RPO. Zero, again. And this is only possible because Oracle has very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases and stores them on the appliance. Moreover, the appliance validates all these backups and redo, because you want to make sure you can recover them after you've sent them, right? It's not just a file-level integrity check on a file system; it's actual database-level validation that the Oracle blocks and the redo I mentioned can be restored and recovered as a usable database. Any malicious attack on or modification of that backup data, whether in transit or after it's stored on the appliance, would be immediately detected and reported by that validation. This allows administrators to take action, such as removing that system from the network. And so it's a huge leap in terms of what customers can get today.
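To make the file-level versus block-level validation contrast concrete, here's a minimal sketch in Python. It's a simplified stand-in, not how the appliance actually works: ZDLRA validates real Oracle block structure and redo, whereas this toy just checksums fixed-size blocks, with an illustrative block size and made-up data. But it shows why block-granular checks pinpoint tampering that a single whole-file hash would only flag, not locate:

```python
import hashlib

BLOCK_SIZE = 8192  # illustrative fixed block size

def block_checksums(data: bytes) -> list[str]:
    """Record a checksum for each block of a backup piece at ingest time."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def validate(data: bytes, expected: list[str]) -> list[int]:
    """Return the indices of blocks whose contents no longer match the manifest."""
    return [
        i for i, digest in enumerate(block_checksums(data))
        if digest != expected[i]
    ]

backup = bytes(3 * BLOCK_SIZE)        # three zero-filled "blocks"
manifest = block_checksums(backup)    # captured when the backup lands

# An untouched backup validates cleanly:
assert validate(backup, manifest) == []

# A ransomware-style modification of one byte in block 1 is pinpointed:
tampered = backup[:BLOCK_SIZE] + b"\xff" + backup[BLOCK_SIZE + 1:]
assert validate(tampered, manifest) == [1]
```

Running validation continuously, rather than only at restore time, is what turns a corrupted backup from a surprise during recovery into an alert an administrator can act on immediately.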
The last thing I just want to point out is what we call our cyber vault deployment. A lot of customers in the industry are creating air-gapped environments: a separate location where their backup copies are stored, physically network-separated from the production systems. This prevents ransomware from infiltrating that last good copy of backups. So you can deploy a recovery appliance in a cyber vault and have it synchronize at random times, when the network is available, to keep it in sync. That, combined with our transaction-level zero data loss and validation, is a nice package and really a game changer in protecting and recovering your databases from modern-day cyber threats. Okay, great. Thank you for clarifying that air-gap piece, because there was some confusion about that. Every data protection and backup company that I know has a ransomware solution; it's the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity, who've raised a ton of dough. Dell's got solutions, HPE just acquired Zerto to deal with this problem among other things, IBM's got stuff, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're all out there. What's your take on these offerings and their strategies, and how do you differentiate? Yeah, it's a pretty crowded market, like you said. I think the first thing you really have to keep in mind is that these new and up-and-coming vendors started in what we call the copy data management, or CDM, space; they're not purpose-built for traditional backup and recovery. The purpose of CDM products is to provide fast point-in-time copies for test/dev, non-production use, and that's a valid problem that needs a solution. So you create a one-time copy, and then you create snapshots after you apply incremental changes to that copy.
And then the snapshot can be quickly restored and presented as if it were a fully populated file. This is all done through block pointers in the underlying storage. All of this kind of sounds really cool and modern, it's new and up-and-coming and lots of people in the market are doing it. Well, it's really not that modern, because storage snapshot technologies have been around for years. What these new vendors have been doing is essentially repackaging old technology for backup and recovery use cases, with an easier-to-use automation interface wrapped around it. Yes, you mentioned copy data management. Actifio started that whole space, from what I recall; at one point they were valued at more than a billion dollars, and last year they were acquired by Google. As I say, they kind of created that category. So fast-forward nine months, a year, whatever it's been: do you see that Google Actifio offering in customer engagements? Is that something you run into? We really don't. It was popular and well known some years ago, but we really don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they're really a CDM and backup solution exclusively for Google Cloud use cases. They're not being positioned for on-premises or any other use cases outside of Google Cloud. That's, what, 90-plus percent of the market that isn't addressable now by Actifio. So really, we don't see them in any of our engagements at this time. So I want to come back and push a little bit on some of the tech that you said is really not that modern. I mean, they certainly position it as modern, and a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers, whether it's copy data management or the quote-unquote modern backup and recovery.
It's kind of a data management, all-in-one solution that seems pretty compelling. How does the recovery appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases. Is that fair? How do you see it, Tim? Yeah, so I think it's important to understand, again, that the fundamental use of this technology is to create data copies for test/dev use, and that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation. More importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before. When you look at a CDM product, restore a snapshot with it, and bring the application up on that restored snapshot, what happens? Well, your production application is now running on read-writeable snapshots on backup storage. Remember, they don't restore all the data back to production-level storage; they're restoring it as a snapshot onto their own storage. And so you have a huge difference in performance now, running these applications on an instantly recovered, if you will, database. To meet true operational requirements, you have to fully restore the files to production storage, period. And the recovery appliance was first and foremost designed to accomplish this: it's an operational recovery solution. We accomplish that, as I mentioned, with real-time transaction protection. We have an incremental-forever backup strategy, so you're taking just the changes every day, and we can create virtual full backups that are quickly, and fully, restored at up to 24 terabytes an hour. And we validate and document that performance very clearly on our website.
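The incremental-forever / virtual full idea described above can be sketched as a toy model in Python. This is an illustration only, assuming a drastically simplified view: "blocks" are dictionary entries, and the appliance's pointer-based indexing is reduced to a simple overlay of changed blocks on a base image. All names and data here are invented:

```python
# Day 0: one full backup of a four-"block" database, taken once, up front.
base = {0: "A0", 1: "B0", 2: "C0", 3: "D0"}

# Thereafter, only changed blocks are ever sent (incremental forever).
incrementals = [
    {1: "B1"},              # day 1: block 1 changed
    {2: "C2", 3: "D2"},     # day 2: blocks 2 and 3 changed
]

def virtual_full(base: dict, incrementals: list, day: int) -> dict:
    """Synthesize the full backup image as of `day` by overlaying
    each day's changed blocks onto the base, in order."""
    image = dict(base)
    for inc in incrementals[:day]:
        image.update(inc)
    return image

# A "virtual full" exists for every day without ever re-sending a full backup:
assert virtual_full(base, incrementals, 0) == {0: "A0", 1: "B0", 2: "C0", 3: "D0"}
assert virtual_full(base, incrementals, 2) == {0: "A0", 1: "B1", 2: "C2", 3: "D2"}
```

The point of the technique is that daily backup traffic is proportional to the change rate, not the database size, while a restore can still stream a complete, current image back to production storage.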
And of course, we provide continuous recovery validation for all the backups stored on the system. So it's a very complete solution, and it scales to meet your demands: hundreds of thousands of databases. These CDM products might seem great, and they work well for a few databases, but then you put a real enterprise load of hundreds of databases on them, and we've seen a lot of cases where it just buckles; it can't handle that kind of load at that scale. And this is important, because customers read the marketing and the collateral and say, hey, instant recovery, why wouldn't I want that? Well, it's not as nice as it sounds. It always sounds better, right? And so we have to educate them about exactly what that means for database backup and recovery use cases, and whether those are really handled well by these products. Yeah, I know I'm way over; I had a lot of questions on this announcement. I was going to let you go, Tim, but you just mentioned something that gives me one more question, if I may. So you talked about supporting hundreds of thousands of databases, petabytes. Do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? Yeah, let me give you two real quick ones. We have a company, Energy Transfer, a major natural gas and pipeline operator in the US, so they're a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very real; we saw the Colonial Pipeline incident, an attack on critical services. Well, Energy Transfer was running lots of databases, and their legacy backup environment just couldn't keep up with their enterprise needs. They had backups taking over a day and restores taking several hours, so they had problems and couldn't meet their SLAs.
They moved to the recovery appliance, and now they're seeing backups complete, with that incremental-forever approach, in just 15 minutes, roughly a 48-times improvement in backup time. They're also seeing restores complete in about 30 minutes versus several hours. So it's a huge difference for them, and they also get that recovery validation and monitoring by the system, so they really know the health of their enterprise at their fingertips. The second quick one is a global financial services customer with over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware kind of approach to fix their backups. Well, that amounted to failures and issues. So they moved to the recovery appliance, and they saw their failed backup rates go down dramatically, they saw four-times-better backup and restore performance, and they also have a very nice centralized way to monitor and manage the system, a real-time view, if you will, of data protection health for their entire environment. They can show this to their executive management and all the relevant teams, which is great for compliance reporting. And so they've standardized on it: they have north of 50 recovery appliances deployed across that global enterprise. Love it. Thank you for that, Tim. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is for me to ask you a bunch of questions and get the experts to answer. So, Subhan, I wonder if you could bring us home. Maybe you could give us the top takeaways that you want your customers, and our audience, to remember from this announcement. Sure. I want to actually pick up from where Tim left off and talk about a real customer use case. This is hot off the press.
One of the largest banks in the United States decided they needed to perform a software update on 3,000 of their database instances, spanning 68 Exadata clusters. A massive undertaking, correct? They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters. Talk about availability. Try doing this on any other infrastructure; no one is going to be able to achieve it. So that's on the availability front. We are engineering in all the aspects of database management: performance, security, availability. Being able to provide redundancy at every single level is all part of the design philosophy and how we engineer this product. And as far as we are concerned, the goal is forever: we are just going to continue down this path of increasing performance and increasing the security of the infrastructure as well as the Oracle Database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on, and to our customers, the biggest advantage you're going to get from the kind of performance metrics we are driving with Exadata is consolidation. Consolidate more: move more database instances onto the Exadata platform, gain the benefits of that consolidation, and reduce your operational expenses, your capital expenses, your management expenses, all of those. Bring it down to Exadata, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. Guys, you've been really generous with your time. Subhan, Bob, Tim, I appreciate you taking my questions and your willingness to go toe-to-toe. Really, thanks for your time. You're welcome, Dave. Thank you. Welcome, thank you. And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.