All right, good afternoon, everyone. Thank you for taking time out to visit us at the demo theater here today. My name is Albert Tan, director of PM for OpenStack. I'm here with Mark Darnell, our Master Solutions Architect, to tell you a story about a multi-vendor solution we deployed with our partners using OpenStack. Before I go into the details of the story, I'd like to give a little background on our journey through IT and what our customers have been facing with some of the transformations you heard about in the keynotes. What we've been seeing with customers and their transformation is this term "bimodal IT" quite a bit, where customers are looking to optimize their existing infrastructure and existing solutions and maximize efficiency. At the same time, they're also looking for more agility: how to innovate and accelerate development, DevOps, and all the solutions on top of that. These two paths are very much in line with what we do with OpenStack. Before I get into the details of the solution, I'd like to give you a little background on what Hitachi does and our strategy. Here's a high-level vision of what we do with OpenStack and Hitachi solutions in the cloud. We start at the infrastructure level with our converged infrastructure, built on our enterprise-class storage, compute, and integrations. From there, we go into the content-as-a-service management layer, with many of the mobility and metadata solutions for our cloud. On top of that, we have just recently announced, in our Hitachi Connect platform, a lot of the social innovation and Internet of Things integration into different verticals. All of this starts with the infrastructure layer, and OpenStack is a key part of that.
So with that introduction, I'd like to go into the OpenStack aspects of our solution. Our customers' journey into OpenStack has been an interesting one. We've been at the OpenStack Summit since the Hong Kong Summit, and the growth we've seen in the traction by the user community and in the evolution, or revolution, of the code base has been great. We've been embracing this journey with the OpenStack community, and our vision is to bring the enterprise-quality experience that Hitachi has always delivered to OpenStack solutions. Over the last couple of years, we've been seeing more adoption of OpenStack, and here are some of the quotes we get from OpenStack users. For example, they want more flexibility and more control over how they manage their cloud, and they want to reduce CapEx. That's very typical of what we hear from customers. Many of them are adopting NFV, which requires high IOPS, high performance, and enterprise qualities. Self-service portals and a move to a consumption model: moving from the traditional CapEx approach, where you plan capacity for the maximum workload, to a consumption-based OpEx model. That's transforming how people deploy their cloud solutions, and OpenStack plays a key part in that. A couple more: some customers run scale-out web services, with many different databases on top of our Cinder drivers and solutions. And more importantly, we hear that OpenStack is not just for scale-out, it's for scale-up as well. We're seeing a lot of discussion about pets and cattle, and that's happening more and more. So this has been an interesting journey for us, embracing OpenStack with our customers. With that in mind, I'll let Mark introduce the user story on what we do with OpenStack.
Thanks, Albert. All right, my name is Mark Darnell, and this is the title that I tried to negotiate when I came on board. They wouldn't go with that one, so I tried for that one. They wouldn't do that one either. So we talked through some of the prior titles I'd had and eventually landed on Master Solutions Architect. For the purposes of this talk, I am an OpenStack subject matter expert for HDS, and because I have the remote and I'm on stage and it would take time for Hitachi people to come drag me off stage, today I'm a technical janitor again. This talk is going to be about a real-world deployment; I'm just going to give you the highlights. That's actually the abstract that we published for this talk: the real-world requirements and solutions of a multi-vendor OpenStack deployment. This should be near and dear to the hearts of everyone in the audience; the only thing that will likely vary is the size of the customer you're dealing with. And since it's a large customer, we're going to be talking about an enterprise OpenStack deployment, with a number of enterprises that worked together to deploy this OpenStack installation at the customer site. All right, before we move into the actual details of the deployment, I want to address a little bit of the enterprise culture around OpenStack. I feel this is a pretty important thing to do. It's a fascinating topic when you look at technology adoption in industry: how things start small and start to scale. And especially when you involve open source, things get really interesting in terms of how corporate or COTS software ties in with open source. So let's address that a little. I think everyone's familiar with the five stages of grief, okay? This has become part of our culture.
What I'd like to do is quickly talk about the five stages, the modified five stages for enterprise OpenStack. And I'm going to bet that regardless of the size of your company, regardless of the size of your IT organization, you've experienced all of these. The first one tends to be "Open what?", right? Where people go, "What is that software? I've never heard of it before." So you'll explain it to them. After a while, the response tends to move to "OpenStack? You've got to be kidding. You can't run much real on that." Now, fortunately, OpenStack maturity is at the point today where there's a lot of reality to this, but the majority of the people who know that are the people in this room, people who do this for a living. The next stage is where a lot of companies are stuck, in my opinion: "OpenStack is fine for the lab." You see a lot of DevOps work going on, which is great; I come from a production development background, so I'm never going to be pejorative towards development. But you'll get people who say you can't run mission-critical apps on it. Well, you'll see in a moment that our customer's requirement was that mission-critical apps need to run on this. So our position, Hitachi's position, is that we're at the point where mission-critical needs to be an integral part of the OpenStack discussion. Stage four, to me, is interesting. This is where some significant vendors are stuck. There's a browser from a number of years ago that followed a model of embrace and extend, and you'll see that with a lot of technology. This one we have to watch out for. And then the fifth one, the equivalent of acceptance in the five stages of grief: "OpenStack, it's good stuff, let's integrate with it." Now, I work for Hitachi, so what is Hitachi's position on these five stages?
Well, for some unique reasons, which I'll cover quickly on the next slide, we feel like we've jumped straight from "Open what?" to "OpenStack, it's good stuff, let's integrate with it." The reason why: number one, as you walk around the show floor, you're going to see continuous innovation, continuous integration, and so forth everywhere. OpenStack has at its core a philosophy of continuous improvement. One quick note about Hitachi: we are a multinational company, approximately 1,000 subsidiaries, headquartered in Japan. Why is that relevant? I'm sure most of you, if not all of you, are familiar with the word kaizen. It's a philosophy that many, if not all, Japanese companies run their businesses by, and it is imbued in our organization. It's a system of continuous improvement, which you'll notice is exactly how OpenStack tends to work. So between the two communities, we feel a real synergy with the way the OpenStack community likes to work. Because of that continuous improvement, we'll introduce a product with a core set of good features, we'll listen to our customers, and over time we really nail things. And finally, I've been working with the company for a while, and this is about the only marketing plug I'll give about how the company works and feels: it really is a bunch of nice folks who just like to make good stuff, so our industry reputation tends to be pretty solid. All right, what's the L word? I wanted to give some examples of other technologies that have followed a path much like OpenStack in terms of enterprise adoption. If you remember the Linux days of the early '90s: first it was a toy, then it was good for the desktop, but you would never, ever put it in the data center. Well, ironically, who owns the data center right now? Linux. Who owns the web in terms of servers? The LAMP stack, and the follow-on technologies to that.
So Linux, I think, and it's ironic to use a pun, Linux and OpenStack are basically tied together. I feel their futures are tied together, and there's a really good future there for us. The PC is another example: it was a toy, and now x86 pretty much owns the data center. So my position on the enterprise is that from the small business through the enterprise-scale data center, OpenStack can own the spectrum. And I'm going to push it one step further: not only can it own the data center, but when everybody is integrating with you, which is exactly what you see as you walk this floor, you have won. So in my opinion, we're watching the endgame right now in terms of OpenStack's migration into the full compute world. All right, end of enterprise talk. Now let's talk about the gig, what you most likely came to hear about. We have a large customer in EMEA that needed the following requirements met. Number one, they needed to be multi-hypervisor. There aren't a lot of other cloud management platforms that I'm aware of that do a good job of handling multiple hypervisors; with a number of the commercial ones, you basically get one slice, you do one thing and that's all, okay? This customer had a large number of vSphere hosts that needed to be integrated into their cloud. They needed KVM for utility workloads, and LPAR. For those of you who know anything about the mainframe world, IBM AIX, things like that, LPAR is a critical feature for hard provisioning of compute resources. So they needed that. Storage: they needed block and object, so you're obviously thinking Cinder and Swift, and you'd be correct. Networking: they needed 10 gigabit with multipath, so LAGs, vLAGs, and so forth. They wanted hardware firewalls and hardware load balancers that could be managed by Neutron and could scale.
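As a rough sketch of how a multi-hypervisor Nova deployment of that era is typically wired up (illustrative values only, not this customer's actual configuration): each compute node is pinned to a single driver in its `nova.conf`, and host aggregates or availability zones keep the KVM and vSphere pools apart for scheduling.

```ini
# nova.conf on a KVM compute node (illustrative)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm

# nova.conf on a node proxying a vSphere cluster (illustrative;
# host_ip and cluster_name below are hypothetical values)
# [DEFAULT]
# compute_driver = vmwareapi.VMwareVCDriver
#
# [vmware]
# host_ip = 10.0.0.10
# cluster_name = prod-cluster-1
```

With the pools separated this way, images tagged with a hypervisor type land on the right aggregate, and one cloud presents both hypervisors behind the same Nova API.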
As Neutron continues to grow up, as more and more SDN comes online and the DVR technology becomes reality, some of the necessity for that may go away, but right now hardware appliances really do help enterprises work. They needed enterprise-scale hardening for mission-critical uptime, and everything had to be managed by OpenStack. So it's a pretty ambitious set of requirements. How did Hitachi bring that to bear with our partners? I'll go into some of those partners later on. First, for compute, we have our CB500 chassis, with up to eight blades of x86 compute. It has integrated mainframe-based LPAR: Hitachi used to make mainframe servers, and we took our mainframe LPAR team, migrated them onto x86, and pulled all that technology with it. Don't take my word for it, ask our customers: this is a significant differentiator for our compute. Integrated 10 gigabit networking and Fibre Channel, enterprise-hardened power, cooling, and disk, integrated lights-out, and so forth: all the typical enterprise features, and honestly, those are typical for the enterprise market. But to me, even as an SMB owner, here's why they matter. Everyone in the audience who has done tech work has had the red thing happen to them at least once, where on date night, you're in the data center while your date's in the car, okay? Those are uncomfortable conversations to keep having later in the relationship. The more you can avoid that, the better. So what does LPAR look like? LPAR is an extremely capable system for dividing our compute blades into multiple hard-provisioned compute resources. As you can see, I can select the number of virtual CPUs, or the number of CPUs or cores, and memory; they can be shared or dedicated. You can have a shared memory pool, you can divide up virtual NICs, you can create virtual Fibre Channel adapters, and so forth.
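To make the idea of hard provisioning concrete, here's a toy sketch, not Hitachi's actual LPAR implementation, of carving a blade's fixed cores and memory into dedicated partitions. The defining behavior is that an allocation fails outright rather than oversubscribing, unlike a soft hypervisor that overcommits:

```python
class Blade:
    """Toy model of a blade whose cores and memory are hard-partitioned."""

    def __init__(self, cores, mem_gib):
        self.free_cores = cores
        self.free_mem = mem_gib
        self.lpars = {}

    def create_lpar(self, name, cores, mem_gib):
        # Dedicated resources: refuse to oversubscribe -- that's the
        # whole point of hard provisioning for mission-critical work.
        if cores > self.free_cores or mem_gib > self.free_mem:
            raise ValueError("insufficient dedicated resources")
        self.free_cores -= cores
        self.free_mem -= mem_gib
        self.lpars[name] = {"cores": cores, "mem_gib": mem_gib}
        return self.lpars[name]

# Carve a 16-core, 256 GiB blade into two dedicated partitions.
blade = Blade(cores=16, mem_gib=256)
blade.create_lpar("db-partition", cores=8, mem_gib=128)
blade.create_lpar("utility", cores=4, mem_gib=64)
```

A third request for eight more cores would raise an error here, whereas a KVM host would happily overcommit; that guarantee is what AIX-style workloads expect.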
It's very easy to work with at this level, and then there's a much more powerful set of CLIs and so forth beyond that. If you need more information on our LPAR, since we're time-limited, come talk to us and we'll give you more. From the storage perspective, for Cinder we implement both iSCSI and Fibre Channel access, from our modular arrays at the bottom of the line up through our enterprise storage arrays. This customer needed Fibre Channel; if they had needed to implement iSCSI, not an issue. Flash: there's a lot of flash activity going on these days. We have a flash drive, and you should come ask us about it; it actually implements multiple channels of SSDs with a sophisticated controller in front of them that gives us multi-channel access. So if you have, say, eight SSDs and can write to them simultaneously, you can do the math on how much faster we may be than some of the competition. Check that one out; it's not your father's SAN, to borrow the Buick tagline. Site-to-site replication: if you have a data center, which this customer has, that needs to be backed up from one site to another, asynchronously and online, our technology does that in the box. Hardware-based snapshot capability: if you want to start with Ceph with snapshots and eventually move into a higher-end system, it's hardware-based, same Cinder API, plug and play. And finally, an enterprise-hardened object store: we have a great Swift system that implements the Swift API natively, and this customer needed that as well. So that's the storage solution we brought to bear. Now I want to walk in detail for a couple of minutes through a copy-on-write procedure. This is one of the Cinder optimizations, and it's a pretty cool one to have. If you look at the compute node and the Glance server, with that bottom rectangle being a storage array with a Glance image on it, here's the typical flow you'll get through, say, a KVM instantiation on an array.
The compute node talks to the Glance server, saying, hey, I need a copy of this image. The Glance server then reads from that LUN; so for, say, a 50-gigabyte Windows image sitting in Glance, it's going to read 50 gig across that pipe. Following that, the Glance server passes it back to the compute node or the Cinder server, whichever one is actually doing this, and then, if you're using a Cinder backing store, it gets rewritten back down into the array. That gives you a grand total of one read and two writes of 50 gig: a hit every time you instantiate an instance. Here's what our driver can do instead. The compute node makes a request to the Glance server; once again, notice the arrow is flipped. This time, all we do is tell the array: I'd like you to make what we call a thin image copy. It's copy-on-write, so no data is actually transferred; we just create a kind of shell. When you read from the blocks on the primary LUN, the reads come from a shared copy that everyone shares; only when you write a page back does a block actually get copied and modified. So the instantiation cost is zero reads and zero writes of image data, just a minuscule metadata write, and because your array cache holds the hot blocks in your storage area network, it's extremely high performance; performance improves dramatically. Network: firewall appliances controlled by a Neutron plugin. If you need information on the vendors we're using in this particular area, come talk to us; our booth is right past the entrance, two down, and you'll see the Hitachi Cloud sign. Load-balancer appliances are controlled by a Neutron plugin as well, and then there's multipath 10 gigabit Ethernet, all integrated. Storage, compute, everything integrated into one rack, with the networking and the compute all integrated into one box.
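To put rough numbers on the two flows described above, here's a back-of-the-envelope sketch, illustrative only and not the driver's actual accounting, comparing the bytes moved for a full image copy versus a thin, copy-on-write clone (the ~1 MiB metadata figure is an assumption for illustration):

```python
GIB = 1024 ** 3

def full_copy_flow(image_bytes):
    """Generic flow: Glance reads the image off the array, streams it
    to the compute/Cinder node, which writes it back as a new volume:
    one read plus two writes of the full image."""
    reads = image_bytes
    writes = 2 * image_bytes
    return reads + writes

def cow_clone_flow(image_bytes, metadata_bytes=1 * 1024 * 1024):
    """Copy-on-write flow: the array just records a thin clone; no
    image data moves until a block is actually written."""
    return metadata_bytes

# A 50 GiB Windows image, as in the example above.
moved_full = full_copy_flow(50 * GIB)  # 150 GiB in flight per instance
moved_cow = cow_clone_flow(50 * GIB)   # ~1 MiB, regardless of image size
```

The gap is what makes instantiation time effectively constant with the thin clone: the cost no longer scales with image size at all.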
So this is what the full solution looks like. That middle stratum, from the controller node through the Cinder server, Swift, and the Neutron server, is standard OpenStack, okay? On top of that, we use DCM; we've actually got a couple of Dell folks in the audience right now. We're partnering with Dell, using their DCM cloud management platform, which gives you some really cool capabilities for moving from cloud to cloud and controlling multiple clouds all from one platform. Feel free to talk with them after the talk if you'd like. Their cloud management platform calls the Nova, Neutron, Cinder, and Swift APIs, which means we're able to integrate. As I said earlier, when everyone starts integrating with you, OpenStack has basically won the battle. We have full integration in every one of those areas. In addition, we can be a one-stop shop: we can provide you plug and play, where we'll provide just compute, storage, network, whatever; pick and choose, and you integrate accordingly. That tends to be the Hitachi company culture. This system has completed proof of concept, and we're actually moving into production right now; most likely around the Tokyo Summit timeframe we should have more information on this. So if you'd like to come back and talk with us then, or if you want to talk with us here at the show, feel free. In terms of questions and so forth, I'll take questions off to the side after we're done. Like I said, booth 35, second one over. That's it, thanks.