Live from the Moscone Convention Center in San Francisco, California, it's theCUBE at Oracle Open World 2014. Brought to you by headline sponsor Cisco Systems with support from NetApp. And now here are your hosts, John Furrier and Jeff Frick. Okay, welcome back everyone, here live in San Francisco for Oracle Open World 2014. It's theCUBE. We go out to the events. Extract the signal from the noise. We're live on the floor inside the Cisco booth. We also have another CUBE broadcasting live as well. Double the trouble. Jeff Frick, my co-host here inside theCUBE. I'm John Furrier, co-host. Our next guest is Raghunath Nambiar, a Distinguished Engineer in the Data Center Business Group at Cisco. Welcome back to theCUBE. Great to see you again. Thanks, John. Now this is pretty exciting, 40,000-plus attendees in San Francisco this week. Yeah. Oracle is touting benchmarks. I love Larry Ellison, Jeff, when he gets on stage. We are the fastest. He's always like that, whether they're the fastest or not. Well, they have a big ship out front, John. Whether they're the fastest or not, we don't want to judge, but they always find a benchmark for themselves. But the reality is that people do care about benchmarks, and we want to jump into the industry-standard benchmarks around big data. What are you guys reporting with Cisco? What are some of the numbers you have? So if you look at the benchmark landscape, we have mature benchmarks for system performance from the SPEC organization, and for database performance, like TPC-C and TPC-H from the Transaction Processing Performance Council. I mean, benchmarks are pretty important for the vendors and the customers, and also for the engineering organization. I want to talk about two things. One is, if you look at UCS, right, in the last five years we have done extremely well in the marketplace. So one of the things that we have been able to demonstrate in the industry is the leadership of Cisco UCS, with 95-plus industry-standard benchmark results.
I mean, that is on the system performance side and the database performance side. So now, everybody talks about big data. It is becoming a pretty major part of the enterprise IT ecosystem across all the major verticals. When you talk about Hadoop and NoSQL, it's pretty exciting from a technology innovation perspective. But several of our customers are challenged with how do you size your infrastructure for your application or workload? What is the right configuration in terms of performance, price-performance, energy efficiency? So the TPC has recently announced an industry-standard benchmark for benchmarking big data systems. I would say this is the first industry-standard benchmark for big data systems. With this, you'll be able to compare performance, price-performance, and energy efficiency of various platforms in a vendor-neutral way. There are no publications yet. I mean, we expect a few publications coming up this year. So how about the interaction with Oracle? Because obviously, you guys have a lot of benchmarks, certainly on the Cisco side, a different environment. But specifically for Oracle, what were some of the considerations you guys have found that customer scenarios would most likely be the use case for? I mean, the industry benchmarks are great because they're a good proxy. But how does that translate to Oracle interactions with the customers you see? Yeah, if you look at today's enterprise application ecosystem, I would put the applications in three buckets. One is the traditional kind of transaction processing and enterprise data warehousing. Okay, UCS B-Series and C-Series are the ideal platforms for those. So we have demonstrated that using several benchmarks, you know, the TPC-C benchmark, the TPC-H benchmark, and two E-Business Suite benchmarks that we published this week, claiming the number-one record. So that is on the traditional application space.
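The comparison the new TPC big data benchmark enables, performance, price-performance, and energy efficiency across platforms, follows the general TPC reporting pattern. A minimal sketch of that pattern, with an assumed formula and made-up numbers for illustration (the exact metric definitions live in the benchmark specification, so treat every name and value here as hypothetical):

```python
# Illustrative sketch of TPC-style benchmark metrics. The formula
# (scale factors processed per hour; dollars per unit of performance)
# is an assumption following the general TPC pattern, not the
# official specification.

def benchmark_metrics(scale_factor_tb, elapsed_seconds, system_cost_usd):
    """Return (performance, price_performance) for one benchmark run.

    performance       -- scale factors processed per hour
    price_performance -- dollars per unit of performance
    """
    performance = scale_factor_tb / (elapsed_seconds / 3600.0)
    price_performance = system_cost_usd / performance
    return performance, price_performance

# Hypothetical example: a 1 TB run finishing in 30 minutes
# on a $200,000 cluster.
perf, price_perf = benchmark_metrics(1, 1800, 200_000)
print(perf)        # 2.0 scale factors per hour
print(price_perf)  # 100000.0 dollars per unit of performance
```

The point of reporting both numbers is the vendor-neutral comparison described above: a faster but much more expensive system can lose on price-performance even while winning on raw performance.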
Then you have this emerging big data space, you know, where Hadoop and NoSQL distributions play a very important role. From a UCS perspective, you know, we provide the best platform for enterprise application IT, where you have traditional applications and emerging big data applications co-existing and complementing each other. And I want to add one more thing to traditional and big data, and that is another category of, you know, analytics or computing at the edge, like, you know, the internet of things. So we have announced our new generation platform for, you know, internet of things gateways and, you know, processing at the edge with our UCS Mini platform. It's basically a 6RU form factor blade chassis in which you have compute, connectivity, and storage all in one. And, you know, it's primarily for, as I said, internet gateways and branch offices, but it can be managed centrally with UCS Manager. I think the thing with those three is, two of them are relatively new, right, the big data space as well as the internet of things space. The other thing we hear a lot about is really the data center transformation driven by power. And a lot of times power is the biggest gate to data center expansion at a particular location. So talk about how the power landscape is really changing in the evolution of the data center, and how benchmarking is impacting that. And then the other thing we hear too now is about different densities within the data center based on the workload. How does that factor in? Sure, I mean, you know, energy efficiency is a top consideration for, you know, every IT manager. If you look at our, you know, UCS portfolio, we have significantly improved on energy efficiency. Okay, if you look at our previous platform versus the current platform, one of the considerations is, you know, a 30 to 40% performance improvement at the same power footprint. That has been, you know, something that we have made sure that our new platforms deliver.
And on the benchmarking side, there are several industry-standard benchmarks for power. One is SPECpower, a very popular benchmark. And, you know, if you look at the TPC benchmarks, energy efficiency is one of the components that, you know, vendors can report as part of their benchmark publications. So talk about the show here. What's your take on Oracle Open World 2014? I mean, more buzz this year. I mean, every year it kind of has its own vibe. Four years ago, it was like, dead, quiet, it was like, what's Oracle gonna do? And then all of a sudden, within a year, they just changed the game. A lot more engineered systems. The Sun acquisition came into play. There's a sort of more geeky hardware piece. A lot more developers. I mean, I think this is another exciting, you know, Oracle Open World. But let me look at this from two different perspectives. One is the application perspective. So a lot of applications are moving to the cloud. Oracle made some announcements last night about their commitment to cloud. I think this is very much in line with, you know, the announcement we made like six months ago about our private cloud strategy, public cloud strategy, and InterCloud strategy. So you can run your application in your data center, in a public cloud, or in a private cloud, and manage your application in a comprehensive way. You can move applications from your private cloud to a public cloud, or from a public cloud to a private cloud. And, you know, I need to understand a bit more of the Oracle announcement. One of the things that, you know, came across as interesting to me is the ability to run, like, Oracle Database in a multi-tenant environment. You know, I think that would be the most exciting thing from a software perspective, an Oracle perspective, in this announcement. And I also see, like, a lot of companies, you know, in the storage space made significant improvements on performance, especially with flash storage.
So we have done the same thing with our own flash solution, as well as, you know, flash solutions in partnership with our storage vendors. So what are some of the next big hurdles to cross to increase those benchmarks inside the data center? What's the stuff you're looking at down the road? You're thinking a few steps ahead of most people in the game. You know, from a benchmarking perspective, yes. I mean, we want to stay ahead of our competition so that we can deliver the best to our customers, you know, as soon as we can. You know, two things. One is on the compute density, okay? So we have the capability to take full advantage of the Intel processor family. You know, if you look at processor performance, it has been growing in line with the workloads. So to complement that, we have increased the memory footprint on our server platforms. If you look at our B460 and C460, these two systems are very popular machines in the database space, especially for Oracle. We support up to six terabytes of main memory, with the capability to support, you know, 15 gigabytes per second of I/O bandwidth. These are going to be, you know, kind of the next generation machines, not only for our customer deployments, but to demonstrate, you know, the performance, scalability, and price-performance for Oracle and other database management systems. So you touched on it. Talk a little bit about flash and the impact of flash that you guys are seeing as it propagates further and further inside the data center, beyond just the super high value, low latency applications. Sure, I mean, if you look at, you know, enterprise flash, it is pretty much becoming a standard, especially for transaction processing, transactional databases. The advantage comes from two things. One is performance, okay? If you look at the IOPS capability, flash technology can deliver 1,000 to 10,000 times more I/Os per second than spinning media.
And on the energy efficiency side, it is one-tenth or, like, one-twentieth. So, I mean, if you look at five years from now, I would say 75% of all the, you know, transactional and real-time databases will be running on flash. And of course, you know, if capacity is important, like you want to build a petabyte-scale database, you know, where Hadoop and NoSQL distributions, you know, have an important play, I think the spinning media is going to be more effective in terms of dollars per terabyte. But overall, flash is going to be the future of storage. So Larry was talking in his keynote about big data, and he basically throws it out there like, it's included. It's included. You can't ignore it. You can't ignore it. It was just a direct quote, which I was so, like, happy about, but also rolling in laughter, because, of course, it's included. Big data is everywhere now. It's part of the platform and everything. But Oracle tries to compartmentalize it like, oh, that's the solution. How do you guys look at how you guys are building out the big data piece of it? Because big data is a big part of the analytics involved in software for the networking stuff, but also the anatomy of the data. So how do you sort out that big data piece relative to the Oracle ecosystem? Sure. I mean, as I mentioned before, Oracle is primarily solving your traditional transaction or data management problem. One of the innovations that Oracle has done is the ability to query your data set outside the Oracle database, residing on Hadoop. So treating your data on Hadoop as part of your database tables. Okay, so not only do you have access to, you know, the data inside your database, but you also have access to the data that is residing on Hadoop or other NoSQL distributions. It's very powerful. And Exalytics, how do you think that's going to fit into the whole product issues around customer choice? Sure, yeah. I mean, Exalytics is a high-performance analytics solution for in-memory processing.
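The flash-versus-disk trade-off described above (orders of magnitude more IOPS, a fraction of the energy per I/O, but a higher cost per terabyte) can be sketched as a back-of-the-envelope comparison. All absolute figures below are assumed placeholders chosen to match the rough ratios quoted in the conversation, not vendor data:

```python
# Back-of-the-envelope flash vs. spinning media comparison.
# The absolute numbers are illustrative assumptions; only the
# ratios matter, and they track the ranges quoted above.

hdd = {"iops": 200,     "watts": 10.0, "usd_per_tb": 50.0}
ssd = {"iops": 500_000, "watts": 5.0,  "usd_per_tb": 400.0}

# Flash delivers on the order of 1,000-10,000x more I/Os per second.
iops_ratio = ssd["iops"] / hdd["iops"]
print(iops_ratio)  # 2500.0

# Energy per I/O: flash ends up far below spinning media.
watts_per_iop_hdd = hdd["watts"] / hdd["iops"]
watts_per_iop_ssd = ssd["watts"] / ssd["iops"]
energy_ratio = watts_per_iop_ssd / watts_per_iop_hdd
print(energy_ratio)  # well under 1: flash wins on energy per I/O

# But disk still wins on raw capacity cost, dollars per terabyte.
cost_ratio = ssd["usd_per_tb"] / hdd["usd_per_tb"]
print(cost_ratio)  # 8.0
```

This is why the petabyte-scale Hadoop and NoSQL tier stays on spinning media for now, while transactional and real-time databases migrate to flash.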
As I mentioned before, large memory is one of the areas that we are focusing on from an infrastructure perspective. You know, our B460 and C460 support up to six terabytes of memory. And on the new generation of C220, C240, and B200 platforms, two-socket systems powered by Intel Xeon E5-2600 CPUs, we support up to 1.2 terabytes of memory. So from an infrastructure perspective, we are fully in line with taking full advantage of, like, technologies coming from Oracle. So you recently joined as chairman of the big data benchmark standards committee. Congratulations. Thank you. And also, being at Cisco, you're covering the data center. I want to take the perspective of looking at why benchmarks exist: to share data on potential performance issues and environments that most CIOs or enterprises have. So in your opinion, in your expert opinion, what do you think the biggest driver is in the data center right now for folks out there who are looking at it and realizing that they've got to move to the cloud? They realize there's some transformation going on. In your opinion, what are the big critical things for the transformation of the data center to the cloud? I think the main challenge is your ability to move data back and forth, okay, while meeting all the security, privacy, and compliance requirements. There are a lot of things happening in the industry to solve that problem. And one of the major initiatives is our InterCloud also: the ability to provide end-to-end isolation for your application when you send data or an application between your data center and another public or private cloud. So the question I always get, and I'll ask you because you're the expert chairman on the benchmarking committee, is how do you make a benchmark that satisfies all the different audiences and use cases? Because it's hard. You can't always get every use case, but you try to figure out a benchmark that's representative of what the environments do. So how do you guys wrestle with that?
So if you look at the industry standard committees like SPEC or TPC, pretty much all the major vendors are represented in the committee. So all the decisions are taken through a democratic process, okay? The recent... A democratic process by the vendors. Sorry. By the vendors who want the benchmarks to be in their favor. So I mean... Or not, is that true? Of course, I mean, like, if you're representing yourself in an industry standards association, you wear two hats. One is representing your company. Yeah, of course. The second hat is, you know, your industry-standard hat, so that when you develop a benchmark, it is vendor-neutral, so that your end-user customer can compare different architectures, different topologies. So that is the end goal. Yeah, yeah. And you know, what we have seen historically is that the industry-standard benchmarks enable healthy competition. Yeah. And conversations too. Yeah, absolutely. Yeah, so... That debate. Debates too, yeah. Debate is healthy. At the end of the day, our customers are going to get the most benefit, because when there is competition, the performance will go up and the price-performance will go down. So, benefiting the customers. Yeah, yeah. But the key is getting some signal out there that says, okay, here's how generally things look, with data, to give some navigation to... Yeah, and another thing is, you know, the benchmarks define a level playing field. Okay, so that, you know, architecture X can be compared with architecture Y, product A can be compared with product B, in a vendor-neutral manner. So, I mean, that's the key. That's the key. So I'm going to shift gears a little bit. I don't know if they let you out of the lab to go out and talk to customers, but I'd love to hear your perspective on some things that you've seen recently where people are putting all this power to work in ways that they couldn't before. Do you have any good stories from the field?
Sure, I mean, I interact with our customers, our top customers, all the time. Oh, good. You know, let me give you a good example. Okay, I have been involved in a research paper, I have two interns working on it, looking at big data in the health care sector. So one of the amazing things was, you know, HGP, the Human Genome Project. That is one thing that has outpaced Moore's Law. If you look at the... The scale of that thing is huge, right? Yes. Give us some numbers that kind of map out the scale of what the... If you look at 15 years ago, the cost for genome sequencing of a fraction of a DNA, I think the metric that they use is the centimorgan. To process that would cost something like $10,000 in 2001. Now it is less than 10 cents. Okay? So if you look at the progress against Moore's Law, it is significantly beating Moore's Law. I mean, that is one thing. I mean, the great potential of big data and genomics combined in the health care space, especially for personalized medicine, you know, that can significantly change the health care industry. So we can only live longer, 1,500 years more? Yeah? Well, I don't know. Two years more. Okay. I'll give you the final word in this segment. Thanks for coming on theCUBE. Share with folks out there, if they were to ask you, what's the big deal this year at Oracle? What's the big technical focus at the show? I think, from what I've heard so far, the main focus is database in the cloud, database as a service, you know, in a multi-tenant environment with, you know, all the necessary isolation and, you know, security and privacy challenges addressed. Okay, security. Okay, we are here live in theCUBE. This is the Cisco booth. This is theCUBE. We are broadcasting for two straight days. I'm John Furrier with Jeff Frick. We'll be right back with our next guest after this short break.