Hello, everyone. Grab your lunch; this session falls straight into the lunch hour, so hopefully I'll keep you occupied. You can grab your lunch and join us here. We're going to talk about the Dell OpenStack solution, and specifically a case study with the University of Alabama at Birmingham. How many of you have heard of Inktank Ceph? OK. The Inktank folks have a booth on the right side as soon as you enter the expo hall; they're one of our partners. I'm going to talk about a case study: we published a white paper just last week about an implementation we did at the University of Alabama and how we addressed the storage needs of their researchers. You can grab the white paper shortly.

So, very quickly, a little bit about Dell. For those of you who may not know, Dell has been committed to OpenStack from day one. We were actually the first OpenStack cloud solution provider out there; in 2011 we had our first solution in the market. We have a very deep partner ecosystem, Inktank Ceph being one of those partners. We have our own deployment tool called Crowbar, so if you want to see the tool in action after this presentation, swing by our booth; there's a demo going on there. I'll give you a brief overview here, and then come by our booth to see more of what we're doing. We're also a Gold member of the OpenStack Foundation, with two members on the board, so there's a lot of activity at Dell around open source, OpenStack, and the Foundation.

We have a number of OpenStack solutions out in the market. I'm not going to go into the details of all of them, but as I said, Crowbar is effectively a deployment platform that takes you from bare metal servers to a running OpenStack cloud in a very short time. You can use it to automate deployment: it quickly provisions bare metal servers, including things like BIOS and RAID configuration, and sets up everything you need at the infrastructure level, whether that's networking, DNS and DHCP services, or IPMI. Crowbar automates all of that for you. And then, over time, as you upgrade your cloud and add more servers, how do you keep it in operation? There's a DevOps model underlying Crowbar. It's all open source, developed in the open, with a community around it. So you can accelerate your multi-node deployments with Crowbar. Feel free to come by the booth; the demo will show you exactly how Crowbar works. I'll sketch the general flow in a second, and the demo will show you the real thing.
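Here's that rough sketch, in Python, of what driving a Crowbar-style deployment can look like. Crowbar organizes deployments into barclamp "proposals" that you create, edit, and commit; that general flow is real, but the exact subcommands, flags, and JSON attribute names below are illustrative assumptions, not the literal Crowbar interface.

    # Rough sketch of the Crowbar proposal workflow, driven from Python.
    # NOTE: the exact CLI subcommands, flags, and JSON keys are assumptions
    # for illustration; consult the Crowbar docs for the real interface.
    import json
    import subprocess

    def run(cmd):
        """Run one deployment step and stop the whole flow if it fails."""
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Create a proposal for a barclamp (here, a ceph barclamp).
    run(["crowbar", "ceph", "proposal", "create", "default"])

    # 2. Describe which discovered nodes play which roles. Crowbar proposals
    #    are JSON documents mapping roles to nodes.
    proposal = {
        "deployment": {
            "ceph": {
                "elements": {
                    "ceph-mon": ["node1", "node2", "node3"],  # monitors
                    "ceph-osd": ["node4", "node5", "node6"],  # storage nodes
                },
            },
        },
    }
    with open("ceph-proposal.json", "w") as f:
        json.dump(proposal, f, indent=2)
    run(["crowbar", "ceph", "proposal", "edit", "default",
         "--file", "ceph-proposal.json"])

    # 3. Commit the proposal; Crowbar then provisions the nodes (BIOS, RAID,
    #    DNS, DHCP) and converges them into the cluster.
    run(["crowbar", "ceph", "proposal", "commit", "default"])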
So the whole solution consists of the Crowbar piece I just mentioned, plus OpenStack Grizzly, already operational out there as a product. It's supported on a number of Dell PowerEdge servers that we specifically selected because we think they're the best-optimized platforms for OpenStack: the PowerEdge C8000, the PowerEdge C6220, and the PowerEdge R series are all well suited for compute and storage in an OpenStack implementation. We also have what is called the Dell Multi-Cloud Manager. We heard a lot about hybrid clouds in the keynotes, and when you go talk to community members, it's a hot topic. Dell Multi-Cloud Manager allows you to connect to multiple clouds, whether they're private or public, and orchestrate between them. Effectively, what you have is a single pane of glass, if you will, that lets you manage multiple clouds. And then there's Inktank Ceph, which I'll go into in a little more detail shortly: a storage solution at a very low cost, and a complete solution, with deployment guides, services, and implementation support associated with it. So today we're going to talk about Inktank Ceph and the case study we did with the University of Alabama.

So what is Ceph? For those of you who don't know, compare it with traditional storage. As opposed to a single-purpose block storage or file storage solution, Ceph is multi-purpose, and it's unified: all the different kinds of storage live in one platform. It's not hardware-based the way traditional SANs are; it's distributed software, so think software-defined storage. It doesn't come from a single vendor; it's developed in open source and runs on commodity hardware. And there are no fixed limits to scale: the cluster is limited only by the number of servers you give it. It's a very cloud way of looking at storage, and it has become very popular; in almost every customer conversation I have, Ceph comes up.

So why is it unified? Ceph combines three storage models in one platform. There's object storage, with an interface compatible with Swift, which you may be familiar with. There's block storage. And there's a file system. So it's a three-in-one storage solution that works with OpenStack. I'm not going to go into all the details; you can come by the Inktank booth or talk to us at Dell. But this is why we find it very compelling, and why customers find it very compelling: almost every customer I talk to has an object storage requirement, a block storage requirement, and a file storage requirement, and this is the only solution I'm aware of that is open source, low cost, scalable, software-based, and able to do all these storage models in one solution.

So how does Ceph work with OpenStack? At its core, Ceph is an object store called RADOS; the Ceph storage cluster stores everything as objects. On top of that sit two interfaces, and I'll show a small code sketch of both in a moment. The first is the Ceph object gateway, which gives you access to the objects in Ceph through HTTP APIs: both an S3-compatible API and a Swift-compatible API. That's the object piece. The second, if you're interested in the block portion, is the Ceph block device, RBD, which sits on top of RADOS and gives you block capability in OpenStack through the Cinder API. The Cinder drivers are in the open and have been submitted to the OpenStack community, and it works with Glance and Nova as well. So all of this is integrated into OpenStack.

What we've done on top of this is make it possible to deploy Ceph in a seamless manner using Crowbar. If you're deploying, say, a two-petabyte cluster with OpenStack across 30 servers, you have to deploy all the different components of Ceph, deploy OpenStack, and make sure all of those pieces are connected in the proper way. All of that is done in an automated fashion by Crowbar, and that's something you can actually see at the booth.
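Here's that sketch of the two interfaces: a minimal example using the python-rados and python-rbd bindings that ship with Ceph. The pool, object, and image names are just examples.

    # A minimal sketch of Ceph's object and block interfaces, using the
    # python-rados and python-rbd bindings that ship with Ceph.
    import rados
    import rbd

    # Connect to the cluster described in ceph.conf (monitors, auth keys).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("rbd")  # I/O context on an existing pool
        try:
            # Object interface: RADOS stores named objects directly.
            ioctx.write_full("hello-object", b"stored as a RADOS object")

            # Block interface: an RBD image is striped across RADOS objects;
            # this is the same mechanism Cinder volumes use.
            rbd.RBD().create(ioctx, "demo-volume", 1024 ** 3)  # 1 GiB image
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

When OpenStack is wired up, the Cinder RBD driver performs essentially that image-creation step on your behalf whenever a user requests a volume.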
So let's talk about UAB. This is all about an actual implementation at the University of Alabama at Birmingham. What problem was the university trying to solve? UAB has a very innovative program around data analytics: they do a lot of gene analysis, genomics, and things of that nature, and they had hundreds of researchers trying to get access to lots of storage for their analysis. Unfortunately, the researchers' growing needs were not being met by the IT department in a compelling way. So what ended up happening was that researchers had data on laptops, on USB drives, and on local servers. They also had HPC clusters, so there was data all over the place, and it was very hard to manage the growth of that data. They wanted to transfer these data sets between their HPC cluster and the local researchers working on them, and the whole thing became very difficult to manage. On top of that, they had issues with security: making sure the data was not at risk, that somebody couldn't simply walk off with it. These were the issues the university was trying to solve.

The way they solved the problem was with a cloud storage solution that included OpenStack and Inktank Ceph, all centralized. Think of it as Amazon S3 and EBS, but internal to their environment. It's a fully flexible, open source infrastructure based on the Dell reference design, so it uses OpenStack, Crowbar, and Ceph, and Crowbar installs the entire infrastructure. They started with almost half a petabyte of storage, and when they did the math it came out to only about $0.41 per gigabyte; at roughly 500,000 gigabytes, that's on the order of $200,000 of storage, as opposed to what a traditional SAN would cost. They looked at a lot of different technologies: traditional SANs, HDFS, Hadoop. None of those matched what they were looking for at a cost they were willing to pay, and this was a perfect solution for them. It's distributed and it scales: they're looking to go from half a petabyte to five petabytes in the next year, which is massive scale, and when they looked at the cost at that scale, it was just mind-boggling.

And the biggest thing was data migration between the different clusters. They had the Ceph data cluster and the HPC data cluster: how do you move data back and forth in an easy way, so they don't have to worry about security, risk, and all of those things? Beyond that, they also wanted to extend the framework and be able to manage it easily. This one solution gave them everything they needed; it solved all their challenges from a data storage standpoint.

And going forward, the research cloud is going to be much bigger. They're looking to go to five petabytes. They're looking at virtual servers and virtual storage to complement their HPC nodes. And they're looking for self-service, so that at the end of the day researchers can go to the cloud, ask for however many compute or storage nodes they need, set them up rapidly, and scale up and down: the usual cloud-type services. They're able to get all of that with this solution.
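To make that researcher workflow concrete, here's a minimal sketch of what moving a data set in and out of a central Ceph cluster through the object gateway's S3-compatible API might look like, using the classic boto library. The gateway host, credentials, bucket, and file paths are placeholders for illustration, not UAB's actual setup.

    # Hypothetical researcher workflow: push a data set into the central
    # Ceph cluster through the object gateway's S3-compatible API (classic
    # boto). Host, credentials, bucket, and file names are all made up.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id="RESEARCHER_KEY",
        aws_secret_access_key="RESEARCHER_SECRET",
        host="rgw.example.edu",  # the campus Ceph object gateway
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # One bucket per project keeps each lab's data sets together.
    bucket = conn.create_bucket("genomics-run-42")

    # Upload once to central storage instead of carrying USB drives around...
    key = bucket.new_key("samples/sample-001.fastq")
    key.set_contents_from_filename("sample-001.fastq")

    # ...then pull the same object down on the HPC side for analysis.
    key.get_contents_to_filename("/scratch/sample-001.fastq")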
So this is what their architecture looks like. All the nodes you see up there are OpenStack nodes, and they have two HPC clusters. The HPC storage is where the Ceph cluster sits. They wanted a 10-gigabit network between the two so they could move data back and forth: researchers take their data, put it in the Ceph cluster, send it over to the HPC cluster, do the analysis, and then bring the results back to the Ceph cluster for later consumption. This worked out perfectly for them because it solved a number of problems in one shot: virtualized servers, virtualized storage, and high-speed network access to the storage from their HPC clusters. It all worked out very nicely.

So, for more information, come by the booth; we're doing a Crowbar deployment out there. The UAB white paper just got published, and the customer has given us great comments and quotes about the actual deployment. Feel free to contact me; my contact information is up on the slide. We also have a couple of other panels going on tomorrow, morning and afternoon: a customer panel on how customers are using Dell services and support, and a panel on application portability. If you have any questions, feel free to catch me or come by the Dell booth. Thank you very much.