Live from the Oracle Conference Center in Redwood Shores, California, it's theCUBE at the Next Generation Engineered Systems launch event, brought to you by headline sponsor Oracle. Okay, welcome back everyone. We are here live in Silicon Valley. This is theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier with Dave Vellante, my co-host. Our next guest, Mike Workman, Senior Vice President of Oracle Storage. Welcome back to theCUBE. We really appreciate seeing you here. Larry's going to be talking about engineered systems on stage at one o'clock here. Storage is a big part of that. Yes it is. But in a platform, it's not a pure-play storage kind of approach. It's got to be integrated. What's the big thing today for you guys? What are you guys announcing? We're announcing a lot of different things; in fact, it's a pretty extensive, content-rich announcement. There are a lot of different products in there, and I won't go over all of them. But the FS1 is a critical piece of the puzzle there. There's a core of the data center: there's sort of your compute and your storage platform. FS1 is that. You'll see Larry doing some announcements around that area. And then you'll see a weaving in of the various engineered systems, database appliance, et cetera. And I don't want to steal Larry's thunder, but as if anyone could, right? I don't worry about that. But seriously, there's a very wide range of products, and the FS1 plays into it because at the core of the data center is compute and storage. So everybody talks about next-gen, next-gen-ness, 2.0. What is next-gen today? I mean obviously there's a big flash component. You're talking about integrated systems or engineered systems. What makes something next-gen these days? That's a good question. And I think part of what keeps us all alive is figuring that out, right? It's like either trying to anticipate it or trying to get ahead of it, you know?
Today for me, there's a transition going on. Building a flash storage system, what we're finding today with customers is fascinating. They buy an all-flash array from Oracle and they put it in the data center, and their infrastructure doesn't hold up. The infrastructure in data centers was designed around HDD-centric storage systems. It really was. So in some sense, the industry is trying to catch up with the capabilities and the technologies that vendors can supply. And it's a normal game, right? Pretty soon they'll get complacent with it and they'll complain it's not fast enough. Right, it's more data, more complex apps. This is the way it works, right? I mean, one day 10 megabytes was a lot on a PC, and then we have a gigabyte, and all of a sudden it's not enough, or it's too much for a while until the apps catch up and start using it and demanding more. And this is sort of the stair step that we take in this industry. And so next gen, you know, the cheaper storage gets, the more CPU horsepower you have, and the faster the networks, people come up with applications that challenge all of that. So then we go the next step and provide better infrastructure, people outfit it, et cetera. It's just a stair step. I see it all the time. It's a pretty amazing industry, isn't it? We double capacity every 18 months. We cut cost in half. And it's never enough. And it's not good enough. So we have a question from the crowd. Any news on Solaris zones or containers? And then I was asked about storage containers, and the comment says Oracle Database 12c is all about containers. What about the storage domains in the flash storage system? Are those important? As data isolation and security concerns increase, is containerization with flash storage becoming critical? Is that what that c stands for? I thought it was cloud. It's containers, right? Containers are all the rage. Docker's got cloud containers.
Containers are essentially not a new concept, but they're all the rage. But talk about that specific point with the flash system. Okay, John. You asked — there's a lot of stuff in that. I'll try to make it brief. Yes. It isn't yet. It's good though. Next question. Next question, yeah. So seriously, the idea is that with containers, or data centers the way they are today, people want to have control over what I would call storage domains. They want to be able to see their objects show up in different areas — in some ways, different machines. The problem with different machines and islands of storage is that nobody wants to manage that, right? Nobody wants to have six storage products in order to do the job. What we've tried to do with the FS1 is to build storage domains, which allow customers to encapsulate storage tiers within a domain and essentially build a storage machine within a machine. So if you take virtualized storage today, in my opinion, we've gone a little too far. Most virtualized storage systems — which most are today — completely remove any knowledge of the physical locality of data within the system. And that means that if you have forensics to do, or the accounting department wants their own set of assets, it's really hard to do. Black box. Yeah. It's sort of like, well, it goes in and it comes out and it all intermingles. And what we decided was we can take it a step backwards just a little bit and allow physical domains to exist — up to 64 of them in an FS1 — such that different applications can build different resources around their needs. So there are disparate needs: there are archival needs, there's transaction processing, OLTP kinds of environments, and you can see those as different machines within the FS1.
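The storage-domain idea Workman describes — up to 64 physical domains, each encapsulating its own tiers so an application sees a "machine within a machine" — can be sketched roughly as follows. This is a minimal illustration of the concept; all class and tier names are ours, not Oracle's actual FS1 API.

```python
# Illustrative sketch of physical storage domains (not Oracle's API).
# Each domain owns its own tiers, so one application's data never
# intermingles with another's -- unlike fully virtualized storage.

MAX_DOMAINS = 64  # the FS1 supports up to 64 physical domains

class StorageDomain:
    def __init__(self, name, tiers):
        self.name = name      # e.g. "oltp", "archive"
        self.tiers = tiers    # tier name -> free capacity in TB (illustrative)
        self.volumes = {}

    def create_volume(self, vol_name, tier, size_tb):
        # Capacity comes only from this domain's own tiers, preserving
        # the physical locality needed for forensics or accounting.
        if self.tiers.get(tier, 0) < size_tb:
            raise ValueError(f"tier {tier!r} lacks {size_tb} TB in {self.name!r}")
        self.tiers[tier] -= size_tb
        self.volumes[vol_name] = (tier, size_tb)

class Array:
    def __init__(self):
        self.domains = {}

    def add_domain(self, name, tiers):
        if len(self.domains) >= MAX_DOMAINS:
            raise RuntimeError("domain limit reached")
        self.domains[name] = StorageDomain(name, dict(tiers))
        return self.domains[name]

fs1 = Array()
oltp = fs1.add_domain("oltp", {"flash": 20})       # transaction processing
archive = fs1.add_domain("archive", {"hdd": 2000})  # deep, cheap capacity
oltp.create_volume("orders-db", "flash", 5)
archive.create_volume("cold-logs", "hdd", 800)
```

The point of the sketch is the isolation: an OLTP volume can only draw from the OLTP domain's flash, and an archive volume from the archive domain's disk, yet both sit under one management umbrella.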
And so that plays into this whole way that our database and its ADO and the various features and functions that we use — big data, for example, needing archive, needing metadata on flash, the various tiers — the structure of the FS1 is built in order to solve disparate problems at the same time on one platform. So performance and management are the key things you've just described. Yeah, yeah, it's to be able to get some control over all of this stuff without having six different products, or 11 in EMC's case. So that brings up the notion of quality of service then. That's something that when you were at Pillar, you guys really were focused on. Talking about quality of service — a lot of people are talking about quality of service in the industry, and it ties into the whole software-defined meme. How should we be thinking about quality of service, and how does it relate to the container discussion? I have to laugh a little bit, I apologize. When we introduced quality of service into the industry in 2005, everybody pretty much just panned it, said, oh, that's ridiculous, who would want that? And now everybody claims they have it. In fact, there are even people who claim that others have false quality of service — others like the people who invented it. Honestly, I mean, that's funny. So our quality of service is critical, and it's very sensible, because when you get up in the morning and your plan was to rake some leaves, maybe before the sprinklers turned on, that may have been a good idea until, as you were walking out of the house, you noticed that there was a broken water pipe. And when there's a broken water pipe, you probably have something more important to do. Your business is like that, right? First things first: you might want to do the web store, where the revenue comes in, before you do test and dev. You might want to prioritize that way. And that's what we do.
We make sense out of business priorities and align them with the way that the storage system performs. So its execution, the prioritization of its queues — they all align with the business priorities that you set for it. The noisy neighbor problem in virtualized environments — everybody knows that one. And that comes from the fact that in most systems, there's no way to control some guy who's making a lot of requests and generating a lot of load for a storage system. But he's not the most important guy. You can make a commercial out of that. Everyone knows that. 15% of the performance. The noisy neighbor. Everyone knows that. Did you know? Not nosy neighbor. Noisy neighbor. The nosy neighbor is another problem. Yeah, that was another issue. That's around security. That's a management issue. So anyway, QoS allows you to essentially align the way you want the storage system to direct its execution and its resources against the way your business is structured. And I do that through software. You do it through a software management system. You set priorities. You say, I don't care that this guy's making all the noise; this guy's paying the bills, so I'm going to pay attention to him when he speaks. Does he pay the bills? Some of them. Absolutely. We like to spend. The West Coast bills. So Matt Eastwood was on — Matt Eastwood from IDC was on. He was saying we'll see Moore's Law doubling performance, but the data growth has been significant. So on the FS1, what is the big data piece of that? Is anything new there? Give us an update on what's going on with the big data piece of it. Yeah, okay. Big data is an interesting beast because it's a little bit like the cloud in the sense that no matter who you talk to, they kind of give you a different definition for what it is. And that's okay, because there are a lot of different paradigms and environments where it applies or doesn't apply, or means something to one person and not another.
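The QoS model Workman describes a moment earlier — business priorities mapped onto the order in which the storage system services its queues, so the revenue-generating app beats the noisy neighbor — can be sketched, purely illustratively, with a priority queue. Application names and priority values here are our assumptions, not FS1 settings.

```python
import heapq

# Illustrative priority-based I/O scheduler (not the FS1 implementation).
# Requests from the high-priority app (the "guy paying the bills") are
# serviced before a noisy neighbor's, regardless of arrival order.

PRIORITY = {"web-store": 0, "test-dev": 2}  # lower number = more important

class QosScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def submit(self, app, request):
        prio = PRIORITY.get(app, 1)  # unknown apps get a middle priority
        heapq.heappush(self._queue, (prio, self._seq, app, request))
        self._seq += 1

    def next_request(self):
        _, _, app, request = heapq.heappop(self._queue)
        return app, request

sched = QosScheduler()
# A noisy neighbor floods the system first...
for i in range(3):
    sched.submit("test-dev", f"scratch-io-{i}")
# ...but the revenue-generating request still jumps the line.
sched.submit("web-store", "checkout-write")
print(sched.next_request())  # ('web-store', 'checkout-write')
```

A real array schedules at every layer (cache, queues, tiering) rather than with one heap, but the design choice is the same: arrival order yields to business priority.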
Here's one of the salient attributes: low cost, because you're not storing petabytes of stuff at the price of flash, right? So what you'll see in the industry is, in general, if somebody has an all-flash array, they're not talking about big data. Because an all-flash array does not support two petabytes — or three petabytes in the case of the FS1, in a two-node system alone. And tape maybe. Right? Yeah, tape. But the FS1, unlike our competitors' all-flash arrays, gives you the performance of flash, and it supports that fourth tier that gives you the dollars-per-terabyte that you need for big data. This is why I called it a chameleon when I first saw it. Yeah, and I love that actually. I'm not sure everyone understands that. It could be a bad thing in politics. Well, but yeah, right. We're not in politics though, thankfully. Yeah. But the point is you can make that system whatever you need it to be, right? And keep it deep or high performance. When you think about it, all the metadata in these systems needs to be stored on something very quick. You need fast access to the index. And on petabytes of containers, there's a large amount of index data, right? So what better solution could you have than SSD and flash performance on one side, and the giant, low-cost capacity containers on the other? Perfect. Will we ever have more metadata than data? Like, is that day going to happen? Well, you know, I suppose you could. It is metadata. It's good news. Yeah. 200 words to describe one. I wonder if we could talk about the competition a little bit. So you guys did a little smackdown at Oracle OpenWorld. You took direct aim at EMC and XtremIO. What was that all about? I mean, you and I talked about this. You put up some IOPS figures. I was asking about latency. I was asking about the all-flash array. Where are we at in that whole urinary Olympics? Well, look, it's real simple.
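Before the competition discussion continues: the tiering argument above — keep the index and metadata on fast flash, keep the bulk petabytes on the cheap capacity tier — can be sketched as a hypothetical two-tier store. The class and tier names are ours for illustration; this is not how the FS1 is implemented.

```python
# Illustrative two-tier store: the metadata index lives on "flash",
# the bulk object data lives on the low-cost "capacity" tier.
class TieredStore:
    def __init__(self):
        self.flash_index = {}    # small, fast tier: key -> location
        self.capacity_tier = []  # large, cheap tier: the data itself

    def put(self, key, blob):
        self.capacity_tier.append(blob)                       # bulk bytes to cheap disk
        self.flash_index[key] = len(self.capacity_tier) - 1   # pointer kept on flash

    def get(self, key):
        # One fast index lookup on flash, then a single read from capacity.
        return self.capacity_tier[self.flash_index[key]]

store = TieredStore()
store.put("log-2014-10", b"petabytes of archive data")
```

The economics are the whole point: the index is small relative to the data it describes, so paying flash prices for it is cheap, while the petabytes underneath ride on dollars-per-terabyte disk.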
I mean, we weren't doing it just to make noise. It's important that if you have an all-flash array, you call it a flash storage system. It might be a valuable thing to not go out and compete against yesteryear's HDD-centered design. I mean, if that's what you do, everyone's going to go, and this is your flash array, right? Well, obviously that wasn't the target. The target is to go out and compete head to head, which we do, with all-flash arrays. EMC went and bought a solution. IBM went and bought a solution, right? HP went and bought a solution. People went out and bought solutions that were their performance solutions, right? So we compared ourselves against XtremIO. What happened was that immediately EMC took all the data offline and said, oh, that's old data. It had only been up for three weeks. It's old. You're talking about the data that you were citing in the presentation. Comparing it, yeah. And people complained, oh, there's fine print on the chart. Well, the fine print was a link to that data, which, when you went to it, gave you a 404 error. Anyway, that was just hours after Larry gave the pitch and did that comparison. The point is we compared against people's most performant solutions. That's what we went and did. But the EMC guys said, well, that's not fair. They actually said that. So they did respond. That's not a fair comparison because you have HDD. Well, but for performance, we'll compare against your best. For HDD, we'll compare against your best. So if you want a petabyte solution, VNX8000 or whatever, we'll compare against that. It's a flash array; we're going to compare performance. We're happy to compare against any metric that people want to look at. So FS1, block-based storage. Where are you competing with EMC? Is it VMAX? Is it VNX? Is it XtremIO? All of the above. What are your thoughts on VNX? I mean, maybe we can talk about that. Yeah, thanks for that question.
Because the answer is that for people who are buying all-flash arrays, they look at the FS1. We compete there. For people who are buying general-purpose arrays that need a little bit of flash and some disk, we compete with the FS1. We don't have six products or 11 products. We don't need them, because we have a new architecture that lets you express the performance of flash and the economics of disk on one platform, under one management umbrella. We compete against archive storage with just a giant two petabytes of relatively slow disk. It's an economical solution for customers, and it has the same management interface, the same look. VNX2 — it's VNX2. I mean, you change the chrome. It's still got two operating systems. It's still got Windows in it, and DART from Celerra. It's still all these different things to do different solutions, kind of bolted together. That's what we're trying to get away from. So you guys are going to basically have a block platform and a file platform and extend those lines. You don't see having — I mean, Joe Tucci says it's better to have overlaps than gaps. That's kind of how he answers that stovepipe question. But you feel as though you can address that market, and we've seen NetApp having to diverge from its single-OS strategy. You guys feel like those two are going to allow you to cover the TAM? I think so, considering that both are unified. In other words, you can do NAS with the FS1 and SAN with the ZS system. And the way we view it is that each has one principal use in the data center. Our core SAN offering is the FS1. Our core NAS offering is the ZS system. The reason is the architectures are different. They have different fundamental topologies and architectures to support the different requirements of file versus block. And I think that's a good choice. That's not to say you can't do NAS with the FS1. You can; it has its own file system. It's not ZFS inside.
It's built natively on top of the structure of the software that makes up the FS1. The same is true with ZS. It can do SAN. It has different kinds of attributes and characteristics than the FS1, which don't make it quite as well suited for SAN as the FS1. But it can do it. So if you have a little bit of SAN in a NAS environment, you can do that. There's no gap. Okay, all right. Fair enough. What do you think about the flash startups? You're out here in Silicon Valley, John. You are as well. You've seen crazy valuations. But sometimes crazy valuations turn into big exits. David Scott said to us in theCUBE that he doesn't feel as though the flash startups will be able to get escape velocity, because IBM's made a choice, you guys haven't had to buy, HP's got 3PAR, et cetera, et cetera, EMC with XtremIO. The startups won't be able to get escape velocity. You buy that? I mean, I know Pure doesn't buy that. What do you think? Well, he wouldn't expect them to buy that. Scott Dietzen's not going to agree with that statement. No, and I have to agree with him. I mean, frankly, the problem is that everybody — look, you're either going to go public or you're going to get bought. There's no one to buy you now. I mean, everybody's sort of made their play, right? So they've done that, and now you have to go public. The problem with going public with an all-flash array that isn't a mainstream solution is that everybody else covers that in one way or another, has no gaps, and can do everything with their solution and meet you head on for performance. And so your differentiators are disappearing every day. It's tough, Mike. We've got to break there. Larry Ellison's about to come on stage. We're getting the hook. This is theCUBE live here with Mike Workman, Senior Vice President of Oracle Storage. I'm John Furrier with Dave Vellante. We'll be right back. Thank you.