My name is Konstantin Boudnik. I joined WANdisco about six weeks ago from Karmasphere, and before that I was doing Hadoop development at Yahoo and Cloudera. At WANdisco I'm in charge of the WANdisco Hadoop distribution, which is not simply yet another Hadoop distribution, but an engine that allows us to deliver very interesting, bleeding-edge technologies to the Hadoop market. These technologies are, namely, the Non-Stop NameNode, which my colleague Konstantin will talk about a little bit later, and a technology that allows you to use Hadoop clusters for private clouds. Basically, applications that run against the S3 file system from Amazon can be seamlessly moved to the WANdisco distribution using our proprietary S3-to-HDFS bridge.

Among the other advancements we put into the Hadoop distribution is a much better user experience for cluster users. And the main thing is that we are pretty much the first commercial company that provides Hadoop 2 support and a Hadoop 2-based distribution of the full Hadoop stack. We are fully committed to open source. We are using another Apache project called Bigtop, of which I am actually one of the co-authors, to build the distribution. And as a shameless plug: we built the full distribution, from ground zero to a working commercial product, in just 28 days using open source technologies, Bigtop in particular.

Thank you, Jeff. So what we have here is the industry's first multiple active NameNode solution for HDFS. We have three NameNodes serving the entire data space. What we're going to do is start TeraSort. The clients are going to connect to all three of the NameNodes. Then we're going to kill one of the NameNodes, and we'll see TeraSort continue fine, with no interruptions. We'll see the other two NameNodes pick up the load, and we'll see the dead NameNode basically flatline. That's the demo.
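The S3-to-HDFS bridge itself is proprietary, but the general idea it rests on, letting an application written against S3-style paths address the same data through HDFS, can be sketched with a simple URI rewrite. This is a minimal illustration only; `rewrite_s3_uri` and the `namenode:8020` authority are hypothetical names, not WANdisco's actual API:

```python
from urllib.parse import urlparse

# Hypothetical helper: map an s3:// URI onto an hdfs:// URI so that an
# application addressing Amazon S3 can point at a NameNode instead.
# Illustrative sketch only -- not WANdisco's proprietary bridge.
def rewrite_s3_uri(s3_uri: str, hdfs_authority: str = "namenode:8020") -> str:
    parsed = urlparse(s3_uri)
    if parsed.scheme != "s3":
        raise ValueError(f"expected an s3:// URI, got {s3_uri!r}")
    # The bucket becomes the top-level HDFS directory; the key follows.
    return f"hdfs://{hdfs_authority}/{parsed.netloc}{parsed.path}"

print(rewrite_s3_uri("s3://logs/2013/05/part-00000"))
# -> hdfs://namenode:8020/logs/2013/05/part-00000
```

In practice such a bridge would sit behind Hadoop's `FileSystem` abstraction, so the application's code does not change at all; only the filesystem URI it is configured with does.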
Okay, what we have here are three graphing applications showing the activity on each of the NameNodes; they're showing RPC bytes in and bytes out. I've prepped it so we already have TeraGen data in HDFS. I'm going to run TeraSort now, and over the next few minutes we'll see activity on the NameNodes pick up. There we go. The orange lines indicate bytes sent, and the green lines are bytes received by the NameNodes. Let's give it a few minutes to get really active. Then we'll go and do the unthinkable, which is to kill one of the NameNodes. That's a catastrophic failure in most HDFS deployments. In our solution, we simply switch to the other two NameNodes and life goes on; TeraSort is uninterrupted.

So obviously you guys have been putting a lot of hard work into this. As we look forward through the rest of the year, what's on your roadmap in terms of additional development for the product?

We have planned a world file system. This is HDFS that runs across multiple data centers. The result is a single namespace that's spread across multiple data centers, and you can run your jobs on whichever data center is near your data. So if a data center goes down, or even multiple data centers go down, you still have access to your data and you can still run your jobs. That's coming up next. We also have plans to use our active-active replication technology for the YARN ResourceManager and for the HBase master.

Excellent. All right, great. Well, guys, obviously a lot more coming from WANdisco this year, so keep your eyes peeled for what they're doing. Thanks for joining us. Signing off.
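The failover behavior the demo shows, clients abandoning a killed NameNode and carrying on against the survivors, can be sketched in miniature. This is a hypothetical client-side loop under the assumption that all NameNodes are active and interchangeable; the real system additionally keeps namespace state replicated between them:

```python
# Minimal sketch of a client that spreads requests round-robin across
# several active metadata servers and skips any that have died.
# Hypothetical illustration; callables stand in for RPC endpoints.
class FailoverClient:
    def __init__(self, namenodes):
        self.namenodes = list(namenodes)
        self.next_idx = 0

    def request(self, path):
        # Try each NameNode at most once until one answers.
        for _ in range(len(self.namenodes)):
            nn = self.namenodes[self.next_idx]
            self.next_idx = (self.next_idx + 1) % len(self.namenodes)
            try:
                return nn(path)
            except ConnectionError:
                continue  # that NameNode is down; move on to the next
        raise RuntimeError("all NameNodes are unreachable")

def dead(path):
    # Simulates the NameNode we killed during the demo.
    raise ConnectionError("NameNode killed")

client = FailoverClient([dead, lambda p: f"nn2:{p}", lambda p: f"nn3:{p}"])
print(client.request("/terasort/input"))  # served by a surviving NameNode
```

The effect matches the graphs in the demo: the dead node's traffic flatlines while the remaining nodes absorb its share of the load.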