Okay, good afternoon. My name is Steve Sonnenberg. I work for HDS, Hitachi Data Systems, which is a subsidiary of Hitachi, Ltd., and I wanted to take this opportunity to update you on our activities and plans with respect to OpenStack. Hitachi has joined the party, perhaps a little later than others, but in full force. Most recently, we were selected and approved as gold members of the OpenStack Foundation.

There are a couple of different areas where Hitachi contributes. First, Hitachi is a storage vendor that makes enterprise storage, meaning storage that is reliable, that delivers very high performance, and that offers other advanced capabilities. Our goal is to bring those capabilities to OpenStack and to OpenStack adopters. Most recently, we introduced a Cinder driver for our mid-range HUS, our unified storage, which we call our mid-range SAN, and that supports iSCSI attachment for virtual machines. In the upcoming release, in the Icehouse timeframe, we will also introduce support for HNAS, the filer technology we acquired from BlueArc, for both iSCSI and NFS. We'll be supporting Fibre Channel on our mid-range and also Fibre Channel attachment to our enterprise arrays, the VSP and HUS VM. All of those can be seen in our demo booth.

What I'm going to do, just for the sake of time, is walk through some screenshots to give you a quick idea of what these look like and how they work. First of all, a Cinder driver is basically responsible for the provisioning of storage. Once the storage is provisioned, it's normally accessed through the VM, or in some special cases through what they call the Brick layer, in order to do backups and other operations. So in this picture you see that we have three different classes of storage, enterprise, mid-range, and NAS, and those are accessed using Fibre Channel, iSCSI, or NFS. The Cinder role is really one of control, of managing the provisioning tasks.
In addition to that, I'm going to show you some of the work that's been done in one of our Japan development centers: a portal. At the end we will also discuss HCP, the Hitachi Content Platform, which is our object store and works underneath Swift.

In order to demonstrate multiple types of storage, we use what's called a multi-back-end configuration, and through volume typing we can designate that a given volume, when I want to create it, should be targeted at a specific back end. So we can create storage that will talk to the enterprise array via Fibre Channel, or to HUS, or to HNAS, and so forth. With those volumes, you can perform any of the Cinder-supported operations. You can attach them to virtual machines, so you have your pick of storage for where it's most appropriate. In addition, you can perform operations such as backing up virtual machines; these are standard operations that Cinder supports.

So what is the advantage to customers? Well, the customer now has a wide range of storage options, not just commodity storage but enterprise storage, which differentiates itself in a number of ways. I don't actually work for a sales group, so I'm not going to go through all the capabilities our storage systems provide, but the list is long and comprehensive, and by matching those capabilities to the appropriate workloads, you have the ability to build enterprise-class OpenStack configurations.

One typical use case: take a company that's using OpenStack for DevOps. In this situation you have a number of developers working from the same golden image as their starting point. They're working collaboratively, and each one has their own virtual environment, and along with that virtual environment comes an application volume, maybe a database, and so forth. Each of those requires the same starting point, but you can't afford to provision a full copy of the storage for each one of them.
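The multi-back-end and volume-typing idea above can be sketched roughly as follows. This is an illustrative model, not Cinder source code; the back-end and type names (`HUS_ISCSI`, `hus-gold`, and so on) are made up for the example, and in a real deployment they would come from `cinder.conf` sections and `cinder type-key` extra specs.

```python
# Illustrative sketch: how volume types steer a create request to a
# specific back end in a multi-back-end configuration. In cinder.conf
# this corresponds to something like:
#   enabled_backends = hus_iscsi, hnas_nfs
#   [hus_iscsi] volume_backend_name = HUS_ISCSI ...
#   [hnas_nfs]  volume_backend_name = HNAS_NFS ...

BACKENDS = {
    "HUS_ISCSI": {"protocol": "iSCSI"},   # hypothetical names
    "HNAS_NFS": {"protocol": "NFS"},
}

# Volume types carry extra specs; the scheduler matches on
# volume_backend_name to pick a back end.
VOLUME_TYPES = {
    "hus-gold": {"volume_backend_name": "HUS_ISCSI"},
    "hnas-silver": {"volume_backend_name": "HNAS_NFS"},
}

def pick_backend(volume_type: str) -> str:
    """Return the back end whose name matches the type's extra spec."""
    wanted = VOLUME_TYPES[volume_type]["volume_backend_name"]
    for name in BACKENDS:
        if name == wanted:
            return name
    raise LookupError(f"no back end matches {wanted!r}")

print(pick_backend("hus-gold"))  # HUS_ISCSI
```

The point is simply that the user picks a type, and the type, not the user, decides which array the volume lands on.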
If you multiply that by a hundred or by a thousand, you end up with an awful lot of storage to manage. Cinder supports cloning, and our cloning can take advantage of our array-based cloning, hardware copy-on-write, copy-after-write, and deduplication technology, so you can have a lot of virtual storage and end up paying very little for it. As an example, we can take 300-gigabyte volumes, about three terabytes of storage in total, and shrink that: in a four-terabyte file system with deduplication running, you can manage it in less than 200 gigabytes. The clone takes no storage initially because it's using copy-on-write, and even once the copies start to diverge as development moves along its cycle, deduplication squeezes that back down. So it's very efficient and effective.

Next I'll turn our attention away from volumes and storage to some of the portal activities. The development group in Japan put together a portal as a proof of concept for one of their customers. It's very similar to the Horizon portal in functionality, except that they added capabilities in two areas. The first is what we call complex image, or template, management. If you're familiar with VMware, this should make a lot of sense: if you think about what's involved in running an image, it's not just the image; there are other volumes that make up the environment, and you need to operate on them at the same time. In addition, they put together more detailed task management, so you can monitor the activity of operations. So in a typical VM environment you have the OS image, which is managed by the hypervisor, but you also have other volumes, the application environment, and with Horizon out of the box you have to treat those two sets, image versus volumes, separately.
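The copy-on-write plus deduplication savings can be put in back-of-the-envelope terms. The numbers below (clone count, divergence fraction, dedup ratio) are assumptions chosen for illustration, not measurements from the arrays described in the talk:

```python
# Rough model of the savings: N developer clones of a golden volume,
# where clones are copy-on-write (cost nothing up front) and the blocks
# they later rewrite are partially deduplicated. All parameters are
# illustrative assumptions.

GOLDEN_GB = 300           # size of the golden application volume
CLONES = 10               # ten developers -> ~3 TB of virtual storage
DIVERGED_FRACTION = 0.10  # each clone eventually rewrites ~10% of blocks
DEDUP_RATIO = 0.5         # half of those rewritten blocks dedup away

virtual_gb = GOLDEN_GB * CLONES              # capacity the users see
physical_gb = GOLDEN_GB + (                  # one real copy...
    GOLDEN_GB * DIVERGED_FRACTION * CLONES * DEDUP_RATIO
)                                            # ...plus the unique deltas

print(f"virtual: {virtual_gb} GB, physical: {physical_gb:.0f} GB")
# With these assumptions: 3000 GB virtual vs 450 GB physical.
```

The more aggressively the array deduplicates (including against the golden copy itself), the closer the physical footprint gets to the sub-200-gigabyte figure mentioned above.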
So if you want to do a backup, you have to back up the image, and then you have to back up the volumes, and in order to back up an attached volume you have to detach it, take a snapshot, back it up, and perhaps reattach it. These aren't complex operations, but they weren't supported out of the box. Using our portal, we're able to manage snapshots, backup, launching, and management of a complex VM. You can take a VM like the one in the center here, which has attached volumes, and then using the portal you can create a machine snapshot, which snapshots the image plus the volumes, so you can back up the whole system and, obviously, restore it by reversing the process. Building a template is as simple as selecting the machine and then adding the storage that becomes part of the environment, and now when I launch the machine, I have a machine running with all of the requisite volumes. So it simplifies management.

The last area I'll talk about is the object server. One of OpenStack's core components is Swift. Long before there was Swift, Hitachi Data Systems had a product called the Hitachi Content Platform, or HCP. It's an object store and archive repository, and what we've done is put together a gateway that allows any of the Swift object servers to leverage HCP as the object repository. Why would you do that? Well, the HCP platform is in its sixth generation, it has an awful lot of capabilities, it's a very mature product, and we want to sell it to you. If you take a look at a typical Swift environment, you have your clients up on the top, including OpenStack itself, and then you have proxy nodes and storage nodes, and depending on how large it's going to grow, you'd break that up into zones, etc.
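The detach, snapshot, back up, reattach sequence that the portal automates can be sketched like this. The `FakeCloud` class is a stand-in, not a real OpenStack SDK; only the order of operations is the point.

```python
# Sketch of the manual sequence the portal automates: backing up a
# volume that is attached to a VM. FakeCloud just records the calls.

class FakeCloud:
    """Stand-in for a cloud client; records operations in order."""
    def __init__(self):
        self.calls = []
    def detach(self, vm, vol):
        self.calls.append(("detach", vol))
    def snapshot(self, vol):
        self.calls.append(("snapshot", vol))
        return vol + "-snap"
    def backup(self, snap):
        self.calls.append(("backup", snap))
    def attach(self, vm, vol):
        self.calls.append(("attach", vol))

def backup_attached_volume(cloud, vm, vol):
    # 1. detach so the snapshot is consistent
    cloud.detach(vm, vol)
    # 2. snapshot, then back the snapshot up
    snap = cloud.snapshot(vol)
    cloud.backup(snap)
    # 3. reattach so the VM keeps running as before
    cloud.attach(vm, vol)

cloud = FakeCloud()
backup_attached_volume(cloud, "vm1", "vol1")
print([c[0] for c in cloud.calls])
# ['detach', 'snapshot', 'backup', 'attach']
```

None of the individual steps is complex; the portal's value is running them as one "machine snapshot" operation over the image and every attached volume.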
With our gateway, the object server effectively becomes a shim: it doesn't store or manage storage locally, but instead uses S3 to store objects externally, and since HCP supports S3, we can provide our archive behind a Swift interface in a very clean way. With our archive, for example, you could build a basic four-node cluster and manage up to four petabytes of storage with four servers. This can grow by adding additional nodes, up to 80 petabytes, with no size restrictions and a host of other features as well, backed by the reliability and manageability of the Hitachi storage family.

There are a lot of features inside HCP, and I can't sell it to you this afternoon. One I did want to show you, though, is the built-in query engine, which is based on Hadoop. That becomes useful because most object stores are pools of storage, and it's a lot harder to figure out what's in your pool once the data is in there. To explain that, I want to quickly introduce what's inside an object store. At the logical level you obviously have objects; an object is really a blob with some metadata, and it might have a naming system. When you go to manage it physically, you have a lot of file systems, which themselves don't scale that well. If you tie the file systems together across many nodes and store the attributes, perhaps in XFS, then you have an object store. HCP is a little different, but in principle it provides the same components.

If we take a look at Horizon, inside Swift we'll see there are containers; those are logical sub-areas, if you will, of objects. We've defined two as an example here, one for volume backups and one that I'll use for demonstration.
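The "object server as a shim" idea can be sketched in a few lines. This is a conceptual illustration, not the actual gateway: the in-memory `S3Store` stands in for HCP's S3-compatible endpoint, and the bucket and method names are assumptions for the example.

```python
# Minimal sketch of the gateway idea: a Swift-facing object server that
# keeps nothing locally and delegates all storage to an S3-compatible
# back end (here an in-memory dict standing in for HCP).

class S3Store:
    """Stand-in for an S3-compatible service such as HCP's."""
    def __init__(self):
        self._objects = {}
    def put_object(self, bucket, key, body):
        self._objects[(bucket, key)] = body
    def get_object(self, bucket, key):
        return self._objects[(bucket, key)]

class SwiftShim:
    """Speaks Swift's container/object model, stores via S3."""
    def __init__(self, s3, bucket):
        self.s3, self.bucket = s3, bucket
    def PUT(self, container, obj, body):
        # Map Swift container/object onto an S3 key within one bucket.
        self.s3.put_object(self.bucket, f"{container}/{obj}", body)
    def GET(self, container, obj):
        return self.s3.get_object(self.bucket, f"{container}/{obj}")

shim = SwiftShim(S3Store(), "swift-gateway")
shim.PUT("demo", "hello.txt", b"hi")
print(shim.GET("demo", "hello.txt"))  # b'hi'
```

Because the shim holds no state of its own, all the durability, scaling, and archival features come from the back end behind it.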
By highlighting volume backups, you'll see there's a set of objects, and each of those objects may actually have a compound name: the backup splits the object into segments because there's a size limitation. And if you take a look at any object, you'll find there's metadata that describes it to the system; here's an example of the object details on a single file. Inside that object repository you might have blobs, but you can also store metadata as an object. In this case, down at the right here, you see the header for a backup: it's stored as a piece of XML, as a Swift object, and associated with it is a set of metadata shown at the top. That metadata is transported in HTTP headers, so what you're looking at on the top here are the standard headers that accompany an object, a timestamp, and so forth.

Inside our content repository, we would define what we call a tenant, which has an obvious parallel to OpenStack tenants. Inside each tenant we can manage namespaces, which are similar to containers, and for a namespace we have various properties, so we'll enable searching. If we were to browse down inside the namespace, you'd see up at the top, in the gold, the orange print, a true hierarchy of names, because HCP looks like a global file system. Inside a given object there are two kinds of data: system metadata, which is stored for every object, and custom metadata. And to support S3, what we've done is create additional custom metadata to manage the S3 headers that aren't part of our core infrastructure.

As a little demonstration here, what I'm going to quickly do is use CloudBerry and add a header. In this case, a very original name: I call it Metafoo, I'm going to store the value bar, and I'm going to attach it to an object, and you'll see that it ends up in our custom metadata.
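The "metadata transported in HTTP headers" point can be made concrete. Swift carries user metadata in `X-Object-Meta-*` headers (S3 uses `x-amz-meta-*`); the mapping below is a simplified illustration of that convention, not a client library.

```python
# Sketch of how user metadata rides along in HTTP headers: user keys
# are prefixed (Swift: X-Object-Meta-*), while standard headers such as
# timestamps travel unprefixed alongside them.

SWIFT_PREFIX = "X-Object-Meta-"

def to_headers(metadata):
    """Turn {'foo': 'bar'} into {'X-Object-Meta-Foo': 'bar'}."""
    return {SWIFT_PREFIX + k.title(): v for k, v in metadata.items()}

def from_headers(headers):
    """Recover user metadata, ignoring the standard headers."""
    return {
        k[len(SWIFT_PREFIX):].lower(): v
        for k, v in headers.items()
        if k.startswith(SWIFT_PREFIX)
    }

headers = to_headers({"foo": "bar"})
headers["X-Timestamp"] = "1385000000.0"  # a standard, non-user header
print(from_headers(headers))             # {'foo': 'bar'}
```

This is exactly why a gateway can map S3 metadata headers into HCP custom metadata and back: on the wire, user metadata is just a set of prefixed key-value pairs.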
There's a foo and a bar, and that may not seem very useful, but when you pair it with our search engine and realize that you can search for anything by content or by metadata, we can build complex searches, in this case using an S3/Swift profile, that let us look at the metadata, run queries, and locate objects extremely quickly. So this is just an example of how we can tie our content platform, with its built-in search capabilities, together with Swift. I hope you enjoy the rest of your show.