Hello, everyone. Thank you very much for joining us. My name is Manju Ramanathpura. I am not Steve Sonnenberg; Steve is here with me, he's my colleague. I'm going to take a couple of minutes to talk about Hitachi and Hitachi Data Systems, and my colleague Steve Sonnenberg is going to talk about what we have today from an OpenStack perspective.

Hitachi Data Systems is a wholly owned subsidiary of Hitachi, Ltd., based in Japan. Most of you know Hitachi primarily as a storage vendor, because storage has been our dominant solution in the worldwide market, but Hitachi overall plays in a much broader market, including storage, servers, and network devices, and even beyond IT infrastructure. What has been happening in Hitachi's world is a gradual transition of Hitachi Data Systems into many solutions and products beyond just storage.

So, looking at where we are today specifically from an OpenStack perspective, this is what we have. On the compute side, we support Nova for our suite of server products, and on the storage side we support a broad range of storage platforms, from our entry-point modular storage, called Hitachi Unified Storage, all the way up to the VSP, our high-end enterprise storage platform.

What I really want to convey in this presentation is that when you think of Hitachi, we are not just a storage platform. What we have done with OpenStack today is enable our storage and our servers in a cloud environment, and what you will continue to see from Hitachi going forward is a much broader portfolio built on OpenStack, so that our products are compatible with any other vendor's solutions that also work on an OpenStack platform. With that, I'm going to hand it over to Steve.
He's going to talk about the specific demo that we have for you today, and after this session please do drop by at booth D12 to learn more about what we are doing with OpenStack. Steve?

Thank you. Good afternoon. The team in Japan has developed a number of very unique capabilities around OpenStack that I want to share with you. The first one, which we call host deployment, is a mechanism they came up with for very rapidly adding a node into an OpenStack environment. Typically, creating an OpenStack node requires bringing up a machine, booting some kind of base image, and then performing the configuration using Puppet or something of that sort. The time involved in setting up a new machine can be five to thirty minutes, depending on what you have to put on it.

The way this is accomplished in significantly less time is by building a template, that is, an image in a Cinder volume. Once we have that boot image, we can do a high-speed clone of it and use the clone to bring up multiple machines, without even having to do configuration. So within a couple of minutes we can bring additional nodes into the environment.

Let me show you technically how this is accomplished. In the preparation stage, we use the portal, or Horizon in this case, to build a Cinder volume, and that Cinder volume is created from an image. In this case we're going to use the hypervisor as the image, because we want to grow our node count; we want to add additional compute nodes very rapidly. That forms a template. The template is used when we want to bring up a new machine, as shown over on the far right: we clone the boot image, and using a Cinder attach, that iSCSI volume becomes the boot volume for a node.
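The template-and-clone flow described above can be sketched in a few lines of Python. This is a toy in-memory stand-in for the Cinder volume service, not Hitachi's actual implementation; all of the names here (`TemplateStore`, `build_template`, `clone_boot_volume`, the `"hypervisor-gold"` image) are illustrative assumptions.

```python
import copy
import uuid

class TemplateStore:
    """Toy model of bootable template volumes built from base images."""
    def __init__(self):
        self.templates = {}

    def build_template(self, name, image):
        # Preparation stage: create a Cinder-style volume from an image
        # (here, a hypervisor image) so it can serve as a boot template.
        vol = {"id": str(uuid.uuid4()), "name": name,
               "image": image, "bootable": True}
        self.templates[name] = vol
        return vol

    def clone_boot_volume(self, name):
        # High-speed clone of the template; the clone becomes the iSCSI
        # boot volume for a new node -- no per-node configuration step.
        clone = copy.deepcopy(self.templates[name])
        clone["id"] = str(uuid.uuid4())
        return clone

store = TemplateStore()
store.build_template("hypervisor-gold", image="kvm-compute-node")
node_vol = store.clone_boot_volume("hypervisor-gold")
print(node_vol["image"])  # the clone carries the same ready-to-boot image
```

The point of the sketch is the shape of the workflow: one slow preparation step builds the template, after which each additional node costs only a fast clone and an attach.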
We kick the node off, using IPMI in this case, and once the node comes up it automatically registers with the Nova controller, and sure enough we have a new hypervisor that we have brought into our OpenStack framework in a very short period of time.

What I suggested to the team after learning of that demo is this: Hitachi has another very interesting technology called LPARs, which are machines within a machine, a logical partition of a machine. Why not integrate that under Ironic, or using the Nova bare-metal driver, and demonstrate that we can launch server instances into these LPAR machines? I'll show you why.

The reason for using bare metal comes down to a couple of motivations: sometimes you need the additional performance, sometimes it's the isolation, and the cost of using dedicated physical machines can be prohibitive. With this LPAR technology we can split a machine, split a server blade, into multiple machines. If you look at the physical server up on the top here, we partition it into these logical partitions, into which we can put different quantities of processing power, memory, and even I/O devices. I hesitate to call it a virtual machine, because that makes you think of something else; it is a partitioned machine. Compare that to the bottom, the virtualization model everyone is familiar with, where we virtualize all of the server's resources: memory is virtual, I/O is virtual, CPUs are virtual. In this model we actually split the hardware, and we can control it, dedicate it, or share the equipment, again within a single server.

Here's another image; in this case we're using our chassis system, called the CB500. There are eight processors in the system, and each of those can divide its resources into up to 30 LPARs in a single blade.
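The partitioning idea above, dedicating exact slices of a blade rather than virtualizing shared resources, can be illustrated with a small sketch. This is a toy model, assuming a simple CPU/memory accounting scheme; the `Blade` class and its interface are hypothetical, not the real LPAR firmware, though the 30-partition limit follows the CB500 figure quoted in the talk.

```python
MAX_LPARS_PER_BLADE = 30  # per-blade limit quoted for the CB500

class Blade:
    """Toy model of carving one server blade into logical partitions."""
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.lpars = []

    def create_lpar(self, name, cpus, memory_gb):
        # Unlike full virtualization, resources are dedicated, not
        # oversubscribed: each partition gets an exact physical slice.
        if len(self.lpars) >= MAX_LPARS_PER_BLADE:
            raise RuntimeError("blade is fully partitioned")
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        lpar = {"name": name, "cpus": cpus, "memory_gb": memory_gb}
        self.lpars.append(lpar)
        return lpar

blade = Blade(cpus=16, memory_gb=128)
blade.create_lpar("lpar-0", cpus=4, memory_gb=32)
blade.create_lpar("lpar-1", cpus=4, memory_gb=32)
print(blade.free_cpus)  # 8 -- exact, dedicated accounting, no sharing
```

The accounting is strict by design: a request that exceeds the remaining physical resources fails rather than being thinly provisioned, which is what gives the stronger performance guarantees mentioned later.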
So with this we have practically native performance, we have isolation, and we can take advantage of exactly the resources we need; we don't have to worry about competition from other systems getting I/O-hungry or CPU-hungry and so forth.

Here are a couple of slides from the demo, which I'm going to invite you to come take a look at. This is using the Hitachi portal system; it's actually a demo server very similar in role to Horizon. What we've done is create a new section called LPAR management, and you can use it just like you would for launching a virtual machine, except you launch into a logical partition. You can also do this through Horizon; in this case we're using the bare-metal driver, which is part of Nova, pre-Ironic, and by selecting the flavor you pick what type of LPAR you want to house that instance.

Once it's running, it looks just like a virtual machine: it has the same characteristics and similar mechanisms for controlling the machine, for starting and stopping and so forth. Under the covers, if you were to look at our blade management system, you would see that we've carved a given blade up into a set of LPARs, and each of those LPARs has dedicated or shared memory and includes devices: devices for networking, which can also be used for Fibre Channel over Ethernet, and of course Fibre Channel, which is one of the native protocols between the blade systems and all of our storage products. Terminating an instance looks identical; it simply frees up the LPAR, which can be reused as needed.

So how does it work, and how can we bring up systems very quickly using LPARs? Well, one of the major factors in the speed of launching a machine is the time it takes to set it up for execution.
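The launch-and-terminate lifecycle described above, a flavor selecting a matching partition, and termination freeing it for reuse, can be sketched as follows. This is a minimal illustration of the matching logic, not the actual Nova bare-metal driver; the `LparPool` class, its method names, and the flavor fields are all assumptions.

```python
class LparPool:
    """Toy scheduler: match a Nova-style flavor to a free partition."""
    def __init__(self, lpars):
        self.lpars = lpars

    def launch(self, flavor):
        # Pick the first free partition that satisfies the flavor,
        # analogous to the bare-metal driver matching flavor to node.
        for lp in self.lpars:
            if (not lp["in_use"] and lp["cpus"] >= flavor["vcpus"]
                    and lp["memory_gb"] >= flavor["ram_gb"]):
                lp["in_use"] = True
                return lp
        raise RuntimeError("no LPAR satisfies the requested flavor")

    def terminate(self, lpar):
        # Termination simply frees the partition for reuse.
        lpar["in_use"] = False

pool = LparPool([
    {"name": "lpar-a", "cpus": 4, "memory_gb": 32, "in_use": False},
    {"name": "lpar-b", "cpus": 8, "memory_gb": 64, "in_use": False},
])
inst = pool.launch({"vcpus": 8, "ram_gb": 48})
print(inst["name"])  # lpar-b: the only partition large enough
pool.terminate(inst)
```

Note how terminate does no teardown of images or disks; as the talk says, it only frees the LPAR, which is what keeps the instance lifecycle so cheap.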
I mean, there are a number of parts involved in networking, but in terms of setting up the execution format, we have to copy an image from Glance, bring it into memory or set up the ephemeral storage, and boot that machine. When you're using an LPAR, what we do is simply arrange for that LPAR to boot the image. And if we marry that capability with the same one we used in the host provisioning demonstration, where we were using a pre-formatted image, an image that is ready to boot, then the time it takes is significantly reduced.

Part of the magic here is that this is teamed with our enterprise storage. Hitachi storage arrays have a number of important capabilities that allow us to do some of these things. They are not commodity hardware; they are designed for enterprise reliability and enterprise-scale performance. They perform storage virtualization, which is the ability to harness multiple storage products under a single management umbrella. They provide dynamic storage pooling, which allows you to take multiple disks and treat them as one large storage pool, along with tiering, data protection, and migration. And the feature we are most interested in here is the ability to take a snapshot of an image, which can be done in a small number of seconds.

So the first step is to build an executable format of your system, of the image of the instance that you want to run; in this case I've called them gold LPARs. Once we have that image, we can simply tell the LPAR engine to invoke it, to go through a boot cycle and boot that image, very much as if we were bringing up physical hardware, but without the physical hardware. Now, there is a cost to bringing up a system: just like hardware, it goes through a POST cycle. But even in the POST cycle, LPARs don't require the extensive physical tests that other physical servers would require; they run a special-purpose BIOS.
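The gold-image launch path above can be sketched as a toy snapshot model: the array-side snapshot shares the gold volume's blocks instead of copying them, which is why it completes in seconds, and the LPAR then boots directly from the snapshot. The `Array` class, `boot_lpar` helper, and block layout are all hypothetical illustrations, not the array's real interface.

```python
import uuid

class Array:
    """Toy model of array-side snapshots: a snapshot references the
    gold volume's blocks rather than copying them."""
    def __init__(self):
        self.volumes = {}

    def create_gold_volume(self, name, blocks):
        self.volumes[name] = {"blocks": blocks, "snap_of": None}

    def snapshot(self, gold_name):
        snap = f"snap-{uuid.uuid4().hex[:8]}"
        # No data copy: the snapshot shares the gold volume's blocks,
        # so this is a seconds-scale metadata operation.
        self.volumes[snap] = {"blocks": self.volumes[gold_name]["blocks"],
                              "snap_of": gold_name}
        return snap

def boot_lpar(array, gold_name):
    # Launch path: snapshot the ready-to-boot gold image, then point
    # the LPAR at the snapshot and boot -- no Glance copy, no
    # ephemeral-storage setup.
    snap = array.snapshot(gold_name)
    return {"boot_volume": snap, "state": "running"}

array = Array()
array.create_gold_volume("gold-lpar",
                         blocks=["bootloader", "kernel", "rootfs"])
instance = boot_lpar(array, "gold-lpar")
print(instance["state"])  # running
```

Contrast this with the conventional path the talk describes, copy from Glance, write to ephemeral storage, then boot: here the only per-launch work is the snapshot and the abbreviated POST.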
So the end result is that we have very fast machine startup, isolation between instances, and exact control of the resources we want to share, with the ability to match the same density we would have by virtualizing resources across the server; now we can partition them across the server with stronger guarantees for performance and lower latency, and this all allows us to take advantage of the increase in server performance in a way that we can control.

I would like to invite you to come by our booth; it's D12, on the opposite end of the hall. Seiji would be happy to give you a demonstration of the host provisioning or LPAR provisioning using our OpenStack environment, and there's also a raffle. Please enjoy the rest of the show. If you have any questions, we've got a moment or two. Okay, thank you very much.