Hi everyone, I'm here with Sanjay Mareem Madhya from HP. I'm Jeff Kelly with Wikibon, and we're on the floor of the Strata Conference at the Santa Clara Convention Center. A lot of interesting things are happening today. We're going to talk about the HP AppSystem — I really want to understand what it's all about and your approach to Hadoop.

Absolutely, thanks Jeff. At HP Discover in Frankfurt in December, we announced the general availability of the HP AppSystem for Apache Hadoop. What we are offering with the AppSystem for Apache Hadoop is an enterprise-ready Hadoop platform, so that customers can focus on harnessing the power of big data. It is a ready-to-use platform: when it is delivered, customers can use it on day one. And it is a turnkey solution. We're not just taking Hadoop; we're integrating a lot of HP IP with the Hadoop platform. The AppSystem comes with Cloudera's Hadoop distribution loaded, we integrate HP Insight CMU with Cloudera Manager, and we also include the Vertica Community Edition, bringing it all together to make a turnkey solution. We have also released a 10-terabyte TeraSort benchmark — we're the only company to do that, and we have the best numbers on it. And we have management capabilities that make it easy to deploy and manage, so as you grow the cluster, it's easy to grow.

To give you a sense of what the product is all about: today the AppSystem comes in two flavors. There's a half rack with nine worker nodes and three management nodes that's ideal for customers who want to do a proof of concept — when Hadoop is at the stage where customers want to understand, okay, what can I actually do with this? So we are giving this to customers who want to get started. Everything is integrated, and we have chosen the right servers. You know, Hadoop has so many components, like the NameNode and the JobTracker, and customers don't know where to put them all.
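The role-placement problem described here can be made concrete. As a rough illustration only — not HP's actual layout, which the interview doesn't detail — here is a minimal Python sketch of spreading classic Hadoop 1.x roles across three management nodes and a set of worker nodes:

```python
# Illustrative sketch of Hadoop 1.x role placement across a small cluster.
# This layout is an assumption for illustration, not the AppSystem's actual one.

def assign_roles(management_nodes, worker_nodes):
    """Map each node name to the list of Hadoop roles it hosts."""
    if len(management_nodes) < 3:
        raise ValueError("need at least 3 management nodes to separate master roles")
    layout = {
        management_nodes[0]: ["NameNode"],
        management_nodes[1]: ["SecondaryNameNode"],
        management_nodes[2]: ["JobTracker"],
    }
    for node in worker_nodes:
        # Workers co-locate storage and compute, the usual Hadoop pattern.
        layout[node] = ["DataNode", "TaskTracker"]
    return layout

# Half-rack shape from the interview: three management nodes, nine workers.
layout = assign_roles(
    ["mgmt1", "mgmt2", "mgmt3"],
    [f"worker{i}" for i in range(1, 10)],
)
print(layout["mgmt1"], layout["worker1"])
```

Getting even this simple separation right — masters isolated from workers, workers doubling as storage and compute — is part of what the three-to-eight-week integration effort mentioned next usually goes into.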
Usually it takes three to eight weeks to put it together, and at the end of it, they're not sure it's really stable, right? With an appliance, you can slide it right into the data center. Customers right now want to generate value out of Hadoop, so we have put all the pieces together so customers can get started on the analytics portion. That's what we have done.

Very interesting. I like what you said about the proof of concept, because of course that's how a lot of people start. And then when they want to move up to full deployments — large, production-style deployments — that's where the full rack configuration comes in.

Absolutely. We have a full rack as well, which has 18 worker nodes — that's up to 384 terabytes of usable HDFS space. And then in May, we are coming out with expansion racks. You know, Hadoop clusters start small. We have sold Hadoop clusters as big as 600 nodes in just the last quarter, and we have a large mobile operator with a 1,500-node Hadoop cluster. That's how big it gets. So it is seamlessly scalable.

Let me show you a few more details on the AppSystem. Here is the overall architecture. We have chosen the right level of switch — for high availability we've taken two switches, and the one-gigabit NICs are all bonded. So we made sure that it has high-availability features at every layer of the architecture. But I want to show some of the key features that make it a state-of-the-art enterprise Hadoop platform. This is the HP Insight Cluster Management Utility. It can provision everything from bare metal through all the software that is required — firmware and software. If you want to upgrade or patch the operating system, with a push of a button you can update up to 800 nodes in 30 minutes. And then there's monitoring — let's look at this chart here.
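The 384 terabytes quoted is usable HDFS space; the raw disk behind it is larger because HDFS stores multiple copies of each block. A quick back-of-the-envelope check, assuming the HDFS default replication factor of three (the interview doesn't state the AppSystem's actual setting):

```python
# Back-of-the-envelope HDFS capacity arithmetic for the full-rack figures.
# Assumes HDFS default replication factor 3; not confirmed in the interview.
REPLICATION = 3
usable_tb = 384   # usable HDFS space in the full rack, per the interview
workers = 18      # worker nodes in the full rack

raw_tb = usable_tb * REPLICATION            # raw disk needed behind the usable space
per_worker_usable_tb = usable_tb / workers  # usable share contributed per worker

print(raw_tb)                           # 1152 TB raw behind 384 TB usable
print(round(per_worker_usable_tb, 1))   # ~21.3 TB usable per worker
```

The same arithmetic explains why expansion racks matter: adding workers grows raw and usable capacity linearly, while replication overhead stays a constant factor.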
If you want to look at the performance of a large cluster — say, I/O performance — these are all the nodes; each line corresponds to a node. It gives you a visualization of the performance, so you can identify: at this moment, is the I/O the bottleneck, or is it the CPU? Some customers also want to see what happened over the last week or month, so we have a historical view. Each of these shows the performance of the various nodes, so you can figure out, huh, last week when that process ran, I/O was the bottleneck. You can also look at the performance of MapReduce jobs — we have integrated with Cloudera Manager, so we can visualize the MapReduce jobs as well.

Sanjay, thanks so much for taking the time to walk us through this. Really appreciate it.

Hey, Jeff, thanks a lot. And congratulations to Wikibon for doing a fantastic job here.

Oh, well, thank you so much.
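To make the bottleneck question from the monitoring demo concrete — was a given run I/O-bound or CPU-bound — one simple framing is to aggregate per-node utilization samples and compare averages. This is a hedged sketch of the idea; the metric names and data shape are invented for illustration and are not Insight CMU's actual schema:

```python
# Sketch of bottleneck classification over per-node utilization samples.
# Field names ('io', 'cpu', percentages) are illustrative assumptions,
# not the actual Insight CMU metric schema.

def dominant_resource(samples):
    """samples: list of {'node': str, 'io': float, 'cpu': float} in percent.
    Returns whichever resource had the higher average utilization."""
    avg_io = sum(s["io"] for s in samples) / len(samples)
    avg_cpu = sum(s["cpu"] for s in samples) / len(samples)
    return "io" if avg_io >= avg_cpu else "cpu"

# Example: disks near saturation across nodes while CPUs sit half idle.
run = [
    {"node": "worker1", "io": 92.0, "cpu": 40.0},
    {"node": "worker2", "io": 88.0, "cpu": 55.0},
    {"node": "worker3", "io": 95.0, "cpu": 35.0},
]
print(dominant_resource(run))  # this run is I/O-bound
```

The historical view described above is the same computation windowed over last week's or last month's samples instead of the live ones.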