Okay. We're on. Hello, everyone. Thanks very much for joining us. My name is Manju Ramanathpura. I'm a CTO and senior director, focusing primarily on all things open source and on networking-related planning efforts. With me I have Tim Lofink, director of product management, who focuses on automation and orchestration products. Thanks very much for joining us, and more than that, thank you for your continued support of OpenStack. Today I'm going to talk a little bit about what we're doing for OpenStack from an HDS and Hitachi perspective, and then Tim is going to talk about what we're doing from an automation and orchestration perspective that plugs into OpenStack. So with that, let's move forward.

The first slide I have is a broader theme: what are we doing with open source in general? We're really not that new to open source. We've been participating in the Linux Foundation for a while, and we're in the OpenStack Foundation as well. We are Gold members of both, and we're also fortunate to sit on the board of directors of both the Linux Foundation and the OpenStack Foundation. Some of you, specifically those following the big data business integration, have probably also heard that Hitachi recently acquired Pentaho. As you know, Pentaho is all things open source, so everything we do with respect to Pentaho is in the open source community. What I'm really trying to emphasize is that we are truly passionate about open source. We are committed to continuing to contribute to open source and to helping our customers. At the bottom, you also see a few other open source communities we've been engaged with.

With that, I want to get a little more specific about OpenStack. As I said, we are one of the Gold members of the OpenStack Foundation, and we are on the OpenStack board of directors; I actually represent Hitachi there.
In terms of contribution, we have significant contributions to the Cinder drivers. That's our primary focus, but as you see in this picture, that's only about 25% of our contribution. We're actually fairly active in other OpenStack projects as well, including Kolla, Ironic, Nova, and Manila, and many of our employees are also actively participating in OpenStack working groups. Thank you, Paula. We are really passionate about OpenStack in general, and we want to do whatever we can to continue to support it. On the right-hand side you see some pie charts I pulled from Stackalytics just last week. They show the different projects we've been actively working on within the OpenStack community, and who some of our community champions are.

Looking at what we're doing specifically on the OpenStack side, our focus has been primarily on making OpenStack ready for enterprise consumption. We focus heavily on how to make enterprise-class data services available for OpenStack. As you see at the bottom, these are our leading platforms for block storage, file storage, object storage, and servers. In all those cases we support OpenStack, and our strategy has been to provide these as a foundation and enable customers to use them inside an OpenStack environment while still meeting enterprise SLAs. That's been our primary focus.

In terms of our go-to-market approach, we don't build our own OpenStack distro. We partner with the leading distros: Canonical, Mirantis, Red Hat, and SUSE are our OpenStack distro partners today. As I'm sure you're seeing, there's no one distro that fits every customer's need. We understand that very well, and our approach has been to support whatever our customers need.
And we continue to march forward in that direction. From a platform perspective, we do provide just our bare-bones infrastructure, whether it's storage or servers, and let customers consume that within their OpenStack environment. But in addition to that, there's a large set of customers who want a single point of contact: they want us to build them a converged infrastructure so they can always contact us if something goes wrong with it. For that case we have a product called UCP. We've been providing UCP for the VMware and Microsoft stacks, and now we're working on expanding it to support OpenStack as well. So customers have options: they can buy our storage and server platforms and integrate them with their OpenStack, or they can buy a converged stack from us and come to us for all support and services.

Whether it's storage or server drivers or a converged stack, our approach is still to partner with the distro vendors you see on the left-hand side, and to really hone in on how we make this one plus one greater than two for our customers, so that it's an enterprise-ready, enterprise-class infrastructure with all the bells and whistles from an automation and orchestration perspective. One of the key challenges we continue to hear from our customers is that CapEx is really not their biggest problem; it's OpEx. So we tend to focus more on the OpEx side: how do we make automation better, how do we make orchestration better, so that our customers can reduce their OpEx spending.

With that, I'm going to hand the presentation over to Tim, who is going to talk about what we're doing in automation and orchestration.

Thanks, Manju. Hello, everybody. My name's Tim Lofink.
I'm Director of Software Product Management at Hitachi Data Systems, and as Manju said, I'm going to talk about a technology preview that we're doing here at the show. It's being shown at our booth, so please stop by and check it out afterwards. This is the integration between OpenStack and an automation tool we have at HDS called Hitachi Automation Director (HAD).

There are actually two technology previews: one works from the bottom up and the other from the top down, toward infrastructure as a service or platform as a service. From the bottom up, we're trying to provide additional value in an OpenStack environment, providing additional controls, abstraction, and flexibility by integrating automation through the Cinder driver itself. From the top down, we're doing end-to-end automation: leveraging the OpenStack environment itself, orchestrating it, and actually going beyond, provisioning the infrastructure of the environment and deploying applications, all of it done through Hitachi Automation Director integrating with the different components of OpenStack. We'll talk about each of these in a little more detail.

Before I go into the detail, let me give you some background on Hitachi Automation Director, what the tool provides and does, so you understand how the pieces are integrated and the capabilities you'll see on top of the OpenStack environment itself. First and foremost, HAD provides a service catalog of best-practice automated workflows for provisioning HDS infrastructure. It's about simplifying and automating the provisioning of HDS storage. There are a lot of controls here from an administrator's standpoint in HAD.
When you configure it as an administrator, you can restrict who can access it, which variables people can change, and how much of the infrastructure's resources they can provision. Then there's the abstraction and intelligence that we have. The service catalog is a very simple interface where a user will go and request storage: "give me gold storage from the production environment," or "I'm provisioning for Oracle today, give me all the volumes needed for Oracle." So basically, it's "give me gold storage from production." After that request is executed, the intelligence engine takes over. It looks at all the available resources in the environment and matches them to the appropriate tiers, and if multiple pools are available, it determines, by looking at capacity utilization as well as performance, which is the best location to provision into. So we're trying to distribute the workload across the infrastructure at provisioning time; that's additional value.

HAD also has a REST-based API for integration, and that's what you'll see in the first technology preview: we're integrating with HAD's REST-based API to do what we're doing. We also have another tool within HAD called the service builder. The service builder provides the flexibility to extend those automated workflows beyond HDS infrastructure, which allows us to create the more complex workflows you'll see in the second technology preview.

Okay, so for the first one, as I was saying, what we've done is create a Cinder driver. That Cinder driver connects to HAD using the REST-based API and gathers information about all the available automated workflows we've created. Those automated workflows in HAD are preconfigured to do certain things.
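As a rough illustration of the kind of integration described here, a driver-side component could query the automation engine's service catalog over REST and build a workflow-execution request. This is only a sketch under assumptions: the base URL, endpoint paths, payload fields, and service names below are invented for illustration, not HAD's actual API.

```python
import json
import urllib.request

# Hypothetical automation-engine endpoint; not HAD's real API.
HAD_BASE_URL = "https://had.example.com/api/v1"

def list_services():
    """Fetch the catalog of preconfigured automation workflows (sketch only)."""
    with urllib.request.urlopen(HAD_BASE_URL + "/services") as resp:
        # e.g. [{"name": "hitachi-gold-replicated"}, ...]
        return json.load(resp)

def build_execute_request(service_name, volume_name, size_gb):
    """Map a Cinder-style volume request onto an automation-service call."""
    return {
        "service": service_name,  # a workflow name from the catalog
        "parameters": {"volume_name": volume_name, "size_gb": size_gb},
    }

def execute_service(payload):
    """POST the request to the automation engine (network call, sketch only)."""
    req = urllib.request.Request(
        HAD_BASE_URL + "/services/execute",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_execute_request("hitachi-gold-replicated", "vol-0001", 50)
```

In a real driver, the response would carry a task identifier that gets polled until the workflow completes, and the result would be reported back up to the caller.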
For example: give me platinum storage with replication, or give me gold storage with replication coming from an external array. As an HAD administrator, you can go in and create these automated workflows, and they will be represented up in Horizon through the Cinder driver itself. So in Horizon, you'll see "Hitachi Gold with replication" show up. Very basic, but underneath there's a lot more complex activity happening. When that tier is requested through Horizon, the request goes through Cinder, which makes the API calls, invoking the appropriate automation workflow in HAD; HAD does its intelligent provisioning, and the results are pulled back up into Horizon, showing that the request is complete.

As I was saying, this provides additional flexibility and control. You can control how much is being requested and where things are being pulled from through that automated workflow. Plus, it provides the flexibility to make changes basically on the fly: if you want to offer new tiers of storage through the Cinder driver, it's very easy. You go into HAD, create additional services that represent those new tiers, and they're automatically visible up in Horizon. Again, control and flexibility are what we're trying to show in this technology preview.

The second technology preview is the end-to-end provisioning of an entire web application in an HA environment. This one is a little more complex. As I mentioned before, here we leveraged HAD's service builder tool and created additional workflow components connecting to Neutron, Nova, Cinder, and the like, to call into OpenStack and set up the infrastructure: provision the VMs, set up the network, put it into a high-availability configuration, and then go off, deploy the applications, and get them up and running.
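Both previews lean on the intelligent placement mentioned earlier: when multiple pools can satisfy a request, pick the best one by capacity utilization and load at provisioning time. A minimal sketch of that kind of selection, assuming hypothetical pool fields and scoring weights (not HAD's actual algorithm):

```python
def pick_best_pool(pools, requested_gb):
    """Return the most suitable pool for a new volume, or None if none fit."""
    # Only pools with enough free capacity are candidates.
    candidates = [p for p in pools if p["free_gb"] >= requested_gb]
    if not candidates:
        return None

    def score(pool):
        # Prefer low capacity utilization and low recent load; lower is better.
        # The 50/50 weighting is an illustrative assumption.
        utilization = 1.0 - pool["free_gb"] / pool["total_gb"]
        load = pool["busy_pct"] / 100.0
        return 0.5 * utilization + 0.5 * load

    return min(candidates, key=score)

# Example inventory (invented numbers).
pools = [
    {"name": "gold-pool-1", "total_gb": 1000, "free_gb": 100, "busy_pct": 20},
    {"name": "gold-pool-2", "total_gb": 1000, "free_gb": 600, "busy_pct": 35},
]
best = pick_best_pool(pools, requested_gb=200)  # gold-pool-1 lacks capacity
```

The point of doing this at provisioning time, as described in the talk, is that workload gets spread across the infrastructure up front instead of being rebalanced after the fact.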
What we're trying to show here is the flexibility and power it has in being extended above and beyond just storage, and how it can work with and leverage OpenStack to help automate, as Manju was saying: helping reduce the OpEx you have in managing your infrastructure, and making it repeatable and error-free. In this technology preview, we've also done one of the other things I talked about at the very beginning, where we provide intelligence and abstraction: when we're provisioning storage, it looks for the best, least-utilized place to provision to, and does the provisioning at that time. For this whole end-to-end web application in a clustered environment, we've done the same thing: we've integrated it, as an example, to gather performance information in the environment, so at provisioning time it determines the best location to deploy to.

Again, these demonstrations are available at our booth across the hall, so please come visit us, and thank you very much for your time. Thank you. If there are any questions, we're happy to take them as well.

[Audience question, partially inaudible: HMD? HSP?]

So I can talk about it. Yes, we do actually support HSP. HSP supports northbound OpenStack APIs for Nova; it supports Glance; it supports Keystone. Yes, that's our scale-out platform. I'm bundling it under "converged" right now; that's essentially how customers are consuming it. But it's a scale-out converged infrastructure designed for large-scale analytics purposes, like big data analytics, and that platform does support OpenStack northbound APIs. Anything else? I'm happy to talk with you offline a little more if you'd like. Yeah. Thank you, guys. Enjoy the show. Okay.