Okay, we're back here live at Oracle OpenWorld. We're live in San Francisco, California, for Oracle's big 60,000-plus-attendee OpenWorld event. This is theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier with my co-host Dave Vellante. Go to siliconangle.com for the reference point in tech innovation. Go to wikibon.org for free research. Gartner Group, IDC, they all charge for reports. Not at Wikibon, it's all free. SiliconANGLE, all free content.

The big news here is Oracle OpenWorld is pushing the cloud, they're pushing social, and the America's Cup race is going on, where Larry Ellison is blowing off his keynote to attend the sailboat race. So let's hope they can pull off a comeback. It's been a great run. Dave, they were down eight to one. Literally, they had a race called off that they were about to lose the Cup in, and got the chance to come back and just run the table on the Kiwis. So it's been a very interesting time. Other news going on around the web: obviously Amazon's announcing a new Kindle. That's going to change the content game. And all these trends point to one thing: the consumer is connected in the crowd. You've got Twitter, you've got Facebook, you have the social networks, and mobile apps are the key. Funding all of that is going to be the cloud, all the infrastructure. It's all under the hood, and the little things matter: Fibre Channel adapters, high-speed connectivity between storage, compute, and networking. Software-defined everything: software-defined networking, software-defined storage, software-defined compute. So Dave, that's where we're seeing all the action here; under the hood, there's a lot going on.
That's enabling an entirely new breed of tech user, the IT broker, programmers, Rails, Python, a whole new computer science method of developing. We're watching it. The SaaS model, the PaaS model. What's your take on it?

Yeah, so John, you mentioned under the hood, and we're here in the QLogic booth. This is now the fourth year we've done Oracle OpenWorld thanks to QLogic, who gives up a major portion of its booth so that theCUBE can broadcast live from OpenWorld. And when you talk about under the hood, QLogic is a company that makes the adapters, the technology that brings the storage and the networking together, that allows the bits to fly through the system, and if anything goes wrong, allows systems to recover them. David Ard is here; he's the head of OEM marketing for QLogic. We have a demo here. We're going to talk about Oracle RAC with FabricCache, a product that you guys announced recently. So tee up what we're going to see here.

Yeah, that's great. Thanks for the introduction, and thanks for having us back on the show. It's a great show for us. What we're talking about here today is a collaborative solution between Oracle and QLogic, where we've brought in the ability to really enhance the Oracle RAC environment with a new product we've launched called the QLogic FabricCache solution. Essentially, it's a unified solution that gives customers the ability to meet the growing demands of their business and keep up with the data needs they have in that environment using that caching capability. So the FabricCache product is a Fibre Channel HBA, but it includes solid-state disk technology, and it gives customers the ability to have a clustered cache capability on the server side of their environment. That gives them much better performance, up to an 82% performance increase in their applications.
And it also reduces the latency on that backend SAN interconnect. So it essentially gives them the opportunity to do more work, to get more productivity out of their current infrastructure.

Okay, great. So now the demo that you have teed up.

Yep, let's go right to the demo. We'll switch over. I'm going to tee it up again quickly, just to give you an idea of what the configuration looks like. It's a four-node Oracle RAC configuration. Every one of the nodes in the cluster has a FabricCache HBA installed. What that creates is a cluster of caches that lets customers cache all of the hot data on the server side. So if an I/O request comes through, the FabricCache card will look in the cache first, transparently to the servers. If the data is there, the cache returns it, and that speeds up application performance significantly.

So yeah, we were talking about this the other day, Monday on theCUBE, that multiplicative effect of all the I/Os increasing. And this is a way in which you're dealing with that.

Yeah, absolutely. So I'm going to flip over real quick. What we're emulating here is a business analytics type of application with up to a thousand concurrent users. We're using Swingbench, a tool that was developed at Oracle and does a good job of testing and stressing the overall Oracle RAC environment. And what we see here is a couple of different charts showing the number of transactions per minute that the cluster can handle.
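The cache-hit flow described here, checking the server-side cache first and falling back to the SAN on a miss, can be sketched in a few lines. This is a minimal illustration only; the real FabricCache logic lives in the HBA firmware and driver, and every name below is hypothetical.

```python
# Minimal sketch of the transparent read-cache path described above.
# The real FabricCache logic lives in the HBA firmware/driver; these
# names are purely illustrative.

class CachedReader:
    def __init__(self, san_read):
        self.cache = {}           # block address -> data (stands in for the SSD pool)
        self.san_read = san_read  # fallback: fetch the block from the SAN

    def read(self, block_addr):
        # Look in the server-side cache first, transparently to the caller.
        if block_addr in self.cache:
            return self.cache[block_addr]  # hit: no SAN round trip
        data = self.san_read(block_addr)   # miss: go back to the SAN
        self.cache[block_addr] = data      # keep the hot block for next time
        return data

san_calls = []
reader = CachedReader(san_read=lambda addr: san_calls.append(addr) or f"block-{addr}")
first = reader.read(42)   # miss: fetched from the SAN
second = reader.read(42)  # hit: served from the cache, no second SAN call
```

The point of the sketch is the "transparently to the servers" part: the caller just issues reads, and whether the data came from cache or from the SAN is invisible to it.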
And what you see on the right side, on the overall transactions-per-minute chart, is the number of transactions with the cache turned off, and then about a 4x increase in the number of transactions the cluster can handle when you turn that cache capability on. The valleys are with the cache capability off, and the spikes are with the cache turned on. We've just got a script running in the background that turns it on and off. And what it's showing is, over time, whenever the cache is off, I'm getting around 220 to 320 transactions per minute. Whenever I turn it back on, you can multiply that by four. And all you're doing is enabling cache. There's no other tuning on the database, no other hardware you're throwing at it, other than enabling that cache.

So Oracle RAC is for the highest-end production systems; it's Oracle's bread and butter, right? Now, talk about the cache a little bit. Are we talking about read and write cache?

It's write-through cache. Essentially, all the hottest data on the SAN gets cached on the server side. That allows you to access it from the cache pool, and it reduces those reads and writes back to the SAN and gives you better overall production.

So write-through cache, meaning you write to the SSD, you signal, in old mainframe terms, device end; you signal the write occurs when you have it in the cache, and you probably replicate it, and then you trickle it asynchronously to the backend disk at some point.

Yes, exactly, exactly. So if you look at just transactions per minute when you start to run this type of solution, you see up to a five-times increase in the number of transactions per minute that you can deal with.
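In a strict write-through design like the one named here, a write updates both the cache and the backing store before it is acknowledged, so the cache never holds data the SAN lacks. A toy model of that idea, with invented names and nothing resembling QLogic's actual API:

```python
# Toy write-through cache: every write goes to the cache *and* the
# backing store before it is acknowledged, so the cache never holds
# data the SAN does not. Names are illustrative, not QLogic's API.

class WriteThroughCache:
    def __init__(self):
        self.cache = {}  # server-side SSD cache
        self.san = {}    # stands in for the backend disk array

    def write(self, block_addr, data):
        self.cache[block_addr] = data  # update the cache copy...
        self.san[block_addr] = data    # ...and the SAN, synchronously
        return True                    # acknowledge only after both succeed

    def read(self, block_addr):
        # Reads are served from the cache when possible.
        if block_addr in self.cache:
            return self.cache[block_addr]
        return self.san.get(block_addr)

wt = WriteThroughCache()
wt.write(7, "hot-row")
```

The trade-off being discussed in the exchange is exactly this: write-through keeps the SAN authoritative on every write, whereas acknowledging from the cache and trickling to disk asynchronously is the write-back variant.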
And application response time also comes down significantly, about a 75% reduction, which means you can get more done in the same amount of time, giving you the ability to drive more revenue or cut your costs significantly.

Okay, so in a lot of ways, and we've talked about this a lot here in theCUBE and in other places, Wikibon and SiliconANGLE, this is about minimizing, I should say, the impact of the spinning disk. That's a big part of what you're doing, and you're kind of shifting the bottleneck, right? Back closer to the CPU. So where's the bottleneck now? Is it the amount of cache? Is it the ability to de-stage that SSD? What are you learning from that?

We've done a tremendous amount of research on what the average amount of hot data is, I guess you'd call it, in a storage area network, and it's about 20 to 30% of the overall database environment that's hot. So the cache sizes you have as part of this solution are representative of about 25 to 30%; that's basically how you scale and determine what size cache you need. So a 200-gig or 400-gig type of cache, and then you pool that, you cluster that across the nodes in your server cluster, and you've got shared cache. If you've got four servers with 200 gig each, that's up to 800 gigs of cache on that side.

Now what does this mean for the end customer? Obviously it means better application performance, but what does it mean in terms of what he or she has to do to configure their systems? Do they need to identify tools? Is it all automated? What do they have to do?

All it is, is an HBA. From the server's perspective, it looks like just an HBA. It's completely transparent. It runs on the same base driver set that customers are accustomed to using with QLogic. There's no management-layer software that they have to install.
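The sizing rule of thumb above is simple arithmetic: target a clustered cache around 25% of the database, pooled across the nodes. A back-of-the-envelope sketch, with illustrative figures only (the 3.2 TB database size is made up to match the 800-gig example):

```python
# Back-of-the-envelope cache sizing from the rule of thumb above:
# roughly 20-30% of the database is "hot", so target a clustered
# cache around 25% of the database size. Figures are illustrative.

def recommended_cache_gb(database_gb, hot_fraction=0.25):
    # Size the shared cache to cover the hot working set.
    return database_gb * hot_fraction

def clustered_cache_gb(nodes, cache_per_node_gb):
    # FabricCache pools each node's local SSD into one shared cache.
    return nodes * cache_per_node_gb

pool = clustered_cache_gb(nodes=4, cache_per_node_gb=200)  # the 800-gig example above
target = recommended_cache_gb(database_gb=3200)            # 25% of a 3.2 TB database
```

On those assumed numbers, a four-node cluster with 200 gigs per card covers the hot set of a roughly 3.2 TB database.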
It's completely transparent to the operation of the server. Essentially, those I/O requests route directly through the FabricCache. If the data sits in the FabricCache pool, it's returned from there; otherwise it goes all the way through.

And this is something I can buy today?

That's available today, yes sir.

Great, awesome. Is there anything else you need to show us? I'm always cutting these demos off; I don't want to do that.

That's fine. This just gives you a sense; we're really just trying to show the difference in performance that you get just by enabling that cache capability with the FabricCache solution.

So now talk a little bit more about the uptake. You're obviously targeting this to your OEMs. What's been the feedback? Are you shipping? Are you in production? Give us the status.

Great question. We actually started shipping this product earlier this year, around the March timeframe. We've had a lot of customers doing testing and evaluation in their environments to make sure they can really optimize it and that it makes sense for their environment. We're now at a point where customers are starting to roll this out. So we're talking 10, 15 to 20 customers who have been doing some pretty significant proofs of concept, and now they're starting to roll this in. And it gains a lot of attention whenever you start talking about maximizing the investment. They don't have to add a lot of additional hardware. So we're starting to see a lot of value there.

So how is this being charged for at the customer level? Maybe talk about your pricing model in conceptual terms.

Yeah, so from a pricing perspective, it's much less than if you were to go buy a new SAN array to put in your SAN, or to go add a new node to your cluster. So it's definitely priced at a point below that.
Obviously, the pricing is going to be handled through our OEM customers, but the pricing is at a significant advantage compared to some of the other things you might do to drive this kind of performance.

So from an end customer's perspective, that's the benchmark: I can stave off having to buy more giant equipment, complex SANs. I can do stuff closer to the CPU that's going to extend the life of my existing infrastructure. So it really is an asset-leverage play in one part, and it also speeds up the applications. Those are really the two main value props, I guess, improving the efficiency of the system overall.

Absolutely, exactly.

All right, David, any last word, final advice for people, or what do you want people to take away?

No, I think the key is we've got a product now that works with Oracle RAC, and we've proven that it maximizes the investment you have in that overall Oracle RAC environment, letting you continue to get better productivity out of it without having to make those heavy investments in other IT infrastructure.

Excellent. Oracle RAC, Real Application Clusters for you geeks out there: the mission-critical, most production-oriented systems that are out there. Thanks again, David, for coming on.

Thanks again for having us in the booth.

Absolutely, thank you. QLogic, it's four years in a row theCUBE's been here at Oracle OpenWorld. It's been a great run and we're going to continue the tradition. Stay here at theCUBE; we'll be right back with more day-three coverage live in San Francisco. This is theCUBE at Oracle OpenWorld. I'm John Furrier with Dave Vellante. We'll be right back after this short break.