Hi everybody, we're back. This is Dave Vellante of Wikibon.org, and it's my pleasure to be here at theCube, SiliconANGLE.com's continuous coverage of Oracle OpenWorld, live in San Francisco, California. We're in the Moscone Center, we're on the floor, we're in the QLogic booth. We've been here three days; this is our third year at Oracle OpenWorld, talking about the transformation of Oracle from a pure software player to a full-line systems company. One of the areas they've been focused on is performance. We've talked all week about how Larry Ellison loves speeds and feeds; he's kind of a geek in that regard. He's been putting forth how his company, Oracle, has been focused on improving performance, and he loves adjectives like "lightning fast." One of the areas he's been talking about in great detail is flash. Earlier this week we had Ryan Klein on from QLogic talking about QLogic's Mount Rainier initiative, so it's my pleasure to welcome Ryan back, and we're going to talk a little more about that and show you a demo. So, Oracle DBAs out there: you're concerned about performance. Here is a new innovation that QLogic has developed that essentially brings the best of flash DAS performance together with the convenience of managing a SAN. So we're going to talk about that, Ryan. Welcome back to theCube.

Thank you. Appreciate being here.

So why don't we set it up here? We've been talking all week about flash and its importance to the Oracle DBA. Set up Mount Rainier: what it is and why it's important to Oracle customers.

Sure. Back in September of this year we announced a technology called Mount Rainier. Mount Rainier is essentially a set of three capabilities that give us single-server caching, multi-server shared caching, and the ability to share that cached data externally.
So fundamentally we're bringing SSD flash drives into the servers themselves and getting all the benefits of a SAN.

Okay. So you've got a demo of Mount Rainier. You've had it in your booth all week; there's been a bunch of traffic coming by. So take us through what you can do with the demo.

Sure. Essentially what we wanted to do is take a technology that lets us bring flash SSD drives inside the server, share them on the SAN, but more importantly apply them to a real-world application. What we're doing is supporting clustered applications, and Oracle RAC is clearly one of the most popular clustered applications out there. So what the demo, and what we'll show you here on the slide, demonstrates is the ability to introduce SSD caching, this Mount Rainier technology, inside an Oracle RAC deployment. Let's go ahead and bring up the slides. What this shows everybody is the topology we're looking at. Essentially what you see is a two-node Oracle RAC configuration. Each of those servers has a Mount Rainier adapter inside it, and there is a pool of cache shared between those two Mount Rainier servers. Now the important thing here is that this cache is shared but also transparent to the server. That allows us to speed up read activity to the database, which is hosted on the Pillar Axiom array. Why this becomes very compelling is that it speeds up read activity quite a bit, and we'll show that in a second, but it also lets us leverage a pretty powerful infrastructure. These Pillar boxes have 15K drives, really fast drives, and we have 198 gigs of memory in these servers. So fundamentally we have a really powerful environment that we're actually leveraging with this Mount Rainier technology.

The choice of Pillar architecture is interesting.
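The "shared but transparent" cache described above can be sketched conceptually: reads first check a flash cache pool, and only misses go out to the backing array, with the application unaware of either path. This is a minimal illustrative sketch, not QLogic's implementation; every name in it (`ReadCache`, `backend_read`, the eviction policy) is hypothetical.

```python
# Minimal sketch of a transparent read cache in front of SAN storage.
# Hypothetical names throughout; not QLogic code.

class ReadCache:
    """Caches blocks on first read; repeat reads are served from the cache pool."""

    def __init__(self, backend_read, capacity=4):
        self.backend_read = backend_read  # slow path: read from the array
        self.cache = {}                   # block_id -> data (the "cache pool")
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:        # fast path: served from flash cache
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1                  # slow path: fetch, then populate cache
        data = self.backend_read(block_id)
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive eviction for the sketch
        self.cache[block_id] = data
        return data

backend = lambda bid: f"data-{bid}"       # stand-in for the Pillar Axiom array
cache = ReadCache(backend)
for bid in [1, 2, 1, 1, 3]:               # a read-heavy access pattern
    cache.read(bid)
print(cache.hits, cache.misses)           # prints: 2 3
```

The caller just issues `read()`; whether the data came from flash or from the array is invisible to it, which is the sense in which the cache is transparent to the server.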
Of course, Pillar Data, a company funded by Larry Ellison, was recently acquired by Oracle.

Well, we're here at Oracle OpenWorld and we wanted to build a configuration that's relevant to most of the attendees. When in Rome.

Yep. All right, good. So you've got a couple of other slides to show before we go into the demo.

Yeah, so let's look at the demo itself. What we're going to show you here is streaming live from the Broomfield labs. We've been working very closely with the Oracle teams out in Broomfield, and essentially we've brought up a RAC environment in their facility. That has allowed us to partner with Oracle, getting their expertise on RAC configuration as well as the QLogic expertise on shared caching. In the demo, we focus on response time, the latencies associated with I/O, and fundamentally the transaction time. What customers are seeing is that when you integrate a cache pool into an Oracle RAC environment, you reduce the response time, and that allows us to drive more transactions through the environment in the same time. And what this fundamentally comes down to is that when you drive more transactions, you conduct more business.

Yeah, so talk a little more about the impact on the user. You say conduct more business, but what have customers been telling you? How are they going to use this, and what is the bottom-line impact on their business?

So everybody's trying to squeeze more performance out of their environments. DBAs like to tune their environments to drive as many transactions as they possibly can. At that point, we introduce this Mount Rainier technology and it lets them get more transactions out of the environment.

Okay, good. So I think we've got the demo ready, so let's bring that up if we could. Demos are never supposed to work, right? Just ask Bill Gates. Right.
So the screen you're seeing now is a tool called Swingbench. This is actually an Oracle tool, and what we're simulating is a thousand-user environment. You see a number of graphs moving here. We're simulating a very read-intensive application: processing orders, browsing orders, warehouse queries. The two things we look at here, as I mentioned before, are disk I/O and overall response time. When you compare disk I/O before and after, it has dramatically reduced. Essentially that allows us to do more with that storage: we can apply more server connectivity to it, because the data being read is served out of the cache pool. And when you move that data into the cache pool, you're reducing the response time to the end customer, the person driving the transaction. So you see more transactions occurring at a faster response time. If we switch back to the last slide we wanted to share, it's important to do a comparison: what did I have before, and what do I get after Mount Rainier? What you'll see in the slide is that we ran a baseline configuration with a standard Fibre Channel infrastructure, without Mount Rainier running. And in the same thousand-concurrent-user configuration, the response time is reduced by 82%. So we're basically getting a 5x performance benefit for peak transactions. If we look at average transactions, we're seeing a 57% increase using this cache pool. Now the really important thing here is that the difference in configuration before and after is physically just swapping out an adapter.

Yeah, so I wanted to ask you about that. A lot of customers are going to be concerned about the changes they have to make to their infrastructure.
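The two headline numbers hang together arithmetically: for a closed-loop workload with a fixed number of users, throughput scales roughly with the inverse of response time, so an ~82% response-time reduction implies about a 5x transaction-rate gain. A quick back-of-envelope check (my arithmetic, not QLogic's published methodology):

```python
# Relating response-time reduction to throughput speedup for a
# closed-loop workload (fixed user count, each user waits for a
# response before issuing the next transaction).

baseline_rt = 1.0                     # normalized baseline response time
reduction = 0.82                      # ~82% reduction, per the demo slide
cached_rt = baseline_rt * (1 - reduction)

# Each user completes transactions at a rate of 1/response_time,
# so the speedup is the ratio of the two rates:
speedup = baseline_rt / cached_rt
print(f"{speedup:.1f}x")              # prints: 5.6x, in line with the ~5x peak claim
```

The 57% average-transaction gain is lower than the peak because not every read hits the cache; the speedup formula above only applies to the cached fraction of the workload.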
You're saying that you can just drop this into your existing infrastructure. No changes, no fundamental architectural changes.

That's the key behind this technology and why it's going to be successful. Fundamentally, we've used the QLogic driver architecture that has been qualified for 10-plus years, a management infrastructure that's well known to most IT administrators, and the same look and feel as a standard HBA. So literally, the configuration you saw running here allowed us to install a standard Fibre Channel adapter, get a baseline performance run, physically remove that adapter, bring in a Mount Rainier adapter, and re-run the test without having to recable, rewire, or change architectures, and get this 5x performance increase.

So: 5x performance increase, no disruption to my existing environment. Now cost obviously is the one drawback. You've got to pay up, because it's flash, it's going to be more expensive. But to get this kind of performance without flash, to get 5x, you'd need so many spindles it would just be absurd.

Absolutely. So we're dealing with the cost of SSD, but the beautiful thing from an IT standpoint is that the same tool set customers use today to manage their adapters will manage the Mount Rainier products. It doesn't mean you put Mount Rainier in every server; you put it in the ones that are application-intensive and still need to live in an enterprise environment. Because what this allows us to do is multipathing, failover scenarios, and fundamentally support for clustered applications, which is something others can't do today with caching on the server.

Okay, so this is unique in the marketplace, unique to QLogic. And you're going to be shipping when?

We're starting to ship this product early next year, and we're moving into beta right now with customers.
So folks who are interested in getting their hands on this technology can contact SolutionsLab@QLogic.com for access while it's in the beta phase.

Excellent. All right, Ryan, thanks very much for coming back into theCube and sharing this great demo with us. Check it out, and we'll be right back with more from Oracle OpenWorld live. This is theCube.