All right, testing. All right, good afternoon, everyone. Thank you for joining the session today. I hope you're having a great time at the summit. I see some familiar faces here, so it's good to say hi to friends and family in the audience. Today, my colleagues and I are going to highlight some enterprise solutions that we're doing at Hitachi with OpenStack, around IaaS and automation. Before I get there, I want to give you some highlights on what we've been doing with OpenStack at Hitachi. Hitachi has worked with OpenStack in several ways. First, as most of you know, Hitachi, or HDS, is an infrastructure vendor, so we have been working on exposing our enterprise infrastructure to the OpenStack cloud through APIs and drivers. Furthermore, our converged scale-up and scale-out solutions are now integrating with OpenStack as well. And last but not least, we also use OpenStack in our own engineering DevOps solutions. Back in Vancouver, we showcased a case study demonstrating how OpenStack works together end-to-end as a solution for some of our customers. In that case study we leveraged multi-hypervisor support, integrating the Hitachi platforms with some of the OpenStack solutions across the board. Since then, with Liberty, we also introduced Manila support for file share services. So today, we'd like to talk a little bit beyond just the drivers, about how our enterprise customers are adopting OpenStack and some of the solutions around that. We have two highlights today. We're not going to do the actual demos here; our booth is right next door, so you're welcome to come by and see the actual demos there. We're going to focus on two solutions.
The first one is an IaaS solution that my colleague Kimura-san will showcase, which we're doing in the Japan market; he'll go through the details of this solution. Second, we're going to talk about a solution called Hitachi Automation Director, which focuses on intelligent automation, looking at infrastructure abstraction and orchestration across the entire infrastructure. With that, I'd like to introduce my colleague, Kimura-san, who will go through his part of the presentation.

Okay, so let me explain the Hitachi IaaS solution. The Hitachi IaaS solution provides the best environment depending on business requirements. There are three key points. First, simple and easy operation through a service portal. Second, operating and managing a multi-hypervisor environment for various workloads. And third, Hitachi value-add features for enterprise use. Let me explain these three key points in this order. The first one is the service portal for IaaS. We have our own service portal with a rich UI and a task log feature. As for the rich UI, as you can see, it visually displays the resources assigned to tenants. As for the task log feature, it displays a list of task logs so that users can easily see when a task was requested, what type of task it is, and what its current status is. With these features, users can easily see and operate our IaaS.

Let me go to the second topic, multi-hypervisor support. We support KVM, VMware, our logical partitioning (LPAR) feature, and bare metal. The reason for supporting multiple hypervisors is that, depending on the workload, it is better to have a choice of where to deploy instances. Let me show an example use case for multiple hypervisors: deploying a scaling web system. Please see the figure in this slide. Many clients access the load balancer; behind the load balancer we have scale-out application servers, and behind those there is a database server, which scales up and requires stability.
In this system, the database server should be on a bare metal machine because it requires stability, and the application servers should be on virtual machines because they require fast boot to scale out.

The third topic is value-add features. Hitachi has been working on applying OpenStack to enterprise systems. To achieve that, we have been working in the community to improve reliability and stability, especially for Ironic and Cinder. Hitachi has also evaluated our servers and storage with OpenStack, and we have a partnership with Brocade for networking and evaluate with them. In addition, Hitachi has developed drivers for Hitachi's enterprise products, such as LPAR on our servers, block storage, file storage, and object storage. These products have many value-add features, and we leverage those value-add features in our IaaS systems.

Let me explain the LPAR driver in detail as an example of the value-add features. To begin with, I'd like to explain the Hitachi logical partitioning feature itself. Our server, Hitachi Compute Blade, has a logical partitioning feature inside its firmware. It provides hardware-level resource management with dedicated CPUs and Fibre Channel connections to the storage, so it provides reliability and stability, making it suitable for bare metal workloads. We developed the LPAR driver for Nova to manage it with OpenStack. There are three advantages to using the LPAR driver with OpenStack. First, it can allocate arbitrary resources. Second, it can isolate networks because it supports VLANs. And third, it can attach Cinder volumes to an LPAR and boot from the volume. These features are lacking in bare metal, and for the last one, Hitachi is working in the community to improve Ironic to achieve this feature. Let me explain the advantage of leveraging Cinder volumes in the next slide. This figure shows how LPAR works with Cinder and Nova. When Nova creates an instance, Cinder creates a boot volume from the template volume.
The volume is attached to the LPAR, and the LPAR boots from the volume. Volumes can also be attached as data volumes, and volumes can be backed up using Cinder's snapshot function. By leveraging Cinder, there are three advantages. First, it saves time and network bandwidth on instance launch, because we can leverage storage functions like copy-on-write snapshots, so there is no need to use the network to copy data from the image to the volume. The second advantage is shortened downtime on backup. Again, we can use Cinder's snapshot function to create a backup. Normally you have to stop the service while taking a backup, so to shorten that downtime, the snapshot function is very good. And the third is the ability to delete data on instance deletion. If the data is on the server's local hard disk, it is very difficult to erase it when the instance is deleted. But because the volume is managed by Cinder, it is very easy to delete using Cinder functions. This is one example of our value-add features; we have a lot of drivers, and they all have additional features.

To summarize, we have three key points: a service portal for simple and easy operation, with a rich GUI and a task log feature; second, multi-hypervisor support, such as KVM, VMware, LPAR, and bare metal, for various workloads; and third, value-add features for enterprise use by leveraging drivers for Hitachi's enterprise products. With these three key points, the Hitachi IaaS solution provides the best environment depending on business requirements. Okay, now let's switch to Tim for intelligent automation.

Perfect. Thank you very much. So I'm Tim Lofank, software product manager here at Hitachi Data Systems. And I'm going to talk to you about two use cases that we have as prototype technology demonstrations here at the booth this week. It's around integration with a new product we have at HDS called Hitachi Automation Director.
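The copy-on-write advantage Kimura-san described can be illustrated with a toy model. This is only a minimal sketch of the idea, not the actual Cinder driver; the Volume class and block names are made up for illustration.

```python
# Toy model of why copy-on-write cloning makes boot-from-volume fast:
# a clone shares blocks with the template until a block is written,
# so no bulk data copy crosses the network at instance launch.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block index -> data
        self.own = set()             # blocks this volume has rewritten

    def clone(self):
        # O(metadata): no data blocks are copied at clone time
        return Volume(self.blocks)

    def write(self, idx, data):
        self.blocks[idx] = data      # copy-on-write: only now diverge
        self.own.add(idx)

    def read(self, idx):
        return self.blocks[idx]

template = Volume({i: f"os-block-{i}" for i in range(4)})
boot_vol = template.clone()          # instant "create volume from template"
boot_vol.write(0, "instance-config")

print(boot_vol.read(0))  # instance-config
print(boot_vol.read(1))  # os-block-1 (still shared with the template)
print(template.read(0))  # os-block-0 (template unchanged)
```

Deleting `boot_vol` here would also discard only its own metadata and rewritten blocks, which mirrors the third advantage: data managed by the storage layer is easy to remove on instance deletion.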
So the two use cases: the first, what you're seeing right here, is integration from the bottom up, through a Cinder driver, providing extra value. And the second use case will be from the top down, actually orchestrating across an OpenStack environment. Let's talk a little bit about Automation Director and what it is, so you can understand this use case. Hitachi Automation Director was brought out to simplify the day-to-day provisioning activities associated with HDS infrastructure. So it's about abstraction, it's about simplification, and it's about intelligence through automation. In HAD there is a service portal, kind of a catalog of predefined workflows that are specific to provisioning our storage. With those automated workflows, it's really simple for a consumer. They go in and say, I'm going to provision for Oracle today. And it's already predefined and knows that there are these sets of volumes, and that those sets of volumes require a certain tier of storage. And that's it. The user goes in and says, yes, I want to provision for Oracle, and the automation engine takes over from there. There are some definitions that happen in Automation Director to make this work. The catalog is populated by an administrator, who will include things like, as I was saying, a catalog item to provision platinum storage or gold storage. Additionally, you can include things like replication, either in-system replication or cross-system replication, as part of those automated workflows. So what we've done in this use case is take these catalog items, which can be accessed via an API, and create a Cinder driver integrating with that API. So in Horizon, you now see the catalog of exposed storage services that are available: you're selecting gold storage or silver storage, with or without replication.
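As a rough sketch of how such a driver could map a selected storage class onto a catalog call: the `HadClient` class, the catalog service names, and the `submit_service` method below are assumptions for illustration, not the real Automation Director API or the actual Cinder driver interface.

```python
# Hypothetical sketch of the "bottom up" integration: a Cinder-style
# driver maps a requested storage class (gold/silver, with or without
# replication) onto an Automation Director catalog item and submits it.

class HadClient:
    """Stand-in for an Automation Director REST API client (assumed)."""
    CATALOG = {
        ("gold", True):    "provision_gold_replicated",
        ("gold", False):   "provision_gold",
        ("silver", False): "provision_silver",
    }

    def submit_service(self, name, params):
        # Real code would POST to the HAD API and poll the task status.
        return {"service": name, "status": "completed", **params}

class HadCinderDriver:
    def __init__(self, client):
        self.client = client

    def create_volume(self, size_gb, tier, replicated=False):
        # The storage class chosen in Horizon selects the catalog item;
        # the automation engine takes over from there.
        service = self.client.CATALOG[(tier, replicated)]
        return self.client.submit_service(service, {"size_gb": size_gb})

driver = HadCinderDriver(HadClient())
result = driver.create_volume(100, "gold", replicated=True)
print(result["service"])  # provision_gold_replicated
```

The point of the shape is that the driver carries no provisioning logic of its own; it only translates the user's selection into a catalog request, and the results flow back up into Horizon.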
Once that's selected in Horizon, it kicks off the automated workflow in Hitachi Automation Director, does the provisioning, and provides the information back up into Horizon via Cinder. So what does this provide? It provides extra control over the environment. It allows you to create different storage classes, storage use cases, and definitions that can be exposed very quickly and easily into the Horizon environment, without having to do any major uplift or code changes. It also provides additional controls: within Automation Director, within these automated services, you can control how much a consumer can use, how much storage, how many volumes, and the like. So you can really limit what is exposed to the person who's consuming that storage and control how much they can access.

Let's talk about the second use case. Again, the first one was from the bottom up; now we're going to go from the top down, where we're actually going to automate the provisioning and standing up of a web application system. In this case, we're using another feature of Automation Director called the Service Builder tool, which allows you to create net new automated workflows. We've created plugin components, as workflow components, to integrate with different aspects of OpenStack, with Neutron, with Nova, with Cinder, and the like, to automate the provisioning of the infrastructure required for this clustered web application. And then to actually go through and stand up the environment: deploy the application, do the clustering configuration, do the volume provisioning, and also do the network configuration as well. So what this is trying to do is go beyond just storage and assist with the automation of deploying a specific application within OpenStack, as an example.
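The top-down workflow just described can be sketched as an ordered set of plugin steps with dependencies. This is a minimal illustration of the idea, assuming nothing about the real Service Builder internals; the step names and the tiny engine are made up.

```python
# Hedged sketch of a Service Builder style workflow: each plugin step
# declares its prerequisites, and the engine runs a step only after
# its dependencies are done, so network, volumes, and VMs exist
# before the application is deployed and clustered.

STEPS = {
    "neutron:create_network": [],
    "cinder:create_volumes":  [],
    "nova:boot_servers":      ["neutron:create_network", "cinder:create_volumes"],
    "app:deploy":             ["nova:boot_servers"],
    "app:cluster_config":     ["app:deploy"],
}

def run_workflow(steps):
    done, order = set(), []
    while len(done) < len(steps):
        for name, deps in steps.items():
            if name not in done and all(d in done for d in deps):
                # In the real workflow this is where a plugin component
                # would call the Neutron/Nova/Cinder API or configure the app.
                order.append(name)
                done.add(name)
    return order

print(run_workflow(STEPS))
```

The dependency declarations are what let one workflow cover the whole stack: infrastructure steps and application steps live in the same catalog item, which is the point of the use case.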
One of the components that we also created within this automated workflow is something we're calling a resource scheduler. One of the things that I didn't mention in the last presentation: with Hitachi Automation Director, behind these simple calls like "give me gold storage from a production environment," there's an intelligent engine that goes through and says, okay, I've got five arrays out there, I've got a hundred different pools of different tiers of storage. It will make sure it's matching the right tier of storage with those pools, and then, across all the pools that are available, it will look at capacity and performance and provision to the least utilized pool. So it's doing load balancing at the time of provisioning while matching the right requirement. The reason I mention that is that the resource scheduler does something very similar, but outside of storage. It looks at your bare metal environment, your server environment, to determine: I know I need to deploy these sets of VMs, I need to have this connectivity, so where should I deploy them? It integrates with Ceilometer and Nagios to get metrics information about the best place to do that provisioning. So again, this links in as part of that workflow, so when it's provisioning, it's doing the whole end to end: the infrastructure (network, compute, storage) and then deploying the applications as well. So again, as Albert mentioned earlier, our booth is right over there, and we have these demos right over there. Anytime, please stop by; we're happy to give you those demonstrations so you can see this working in action, and we appreciate your time. Thank you very much.
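The resource scheduler's selection logic described above (filter candidates that meet the requirement, then place on the least utilized one) can be sketched roughly as follows. The host data, metric names, and thresholds are invented for illustration; in the demo these figures would come from Ceilometer and Nagios.

```python
# Rough sketch of the resource scheduler idea: filter candidate hosts
# by the deployment requirement, then pick the least utilized one,
# i.e. load balancing at provisioning time.

def pick_host(hosts, need_vcpus):
    candidates = [h for h in hosts if h["free_vcpus"] >= need_vcpus]
    # Least CPU-utilized candidate wins (metrics assumed to be
    # gathered from Ceilometer/Nagios in the real workflow).
    return min(candidates, key=lambda h: h["cpu_util"])

hosts = [
    {"name": "bm1", "free_vcpus": 16, "cpu_util": 0.70},
    {"name": "bm2", "free_vcpus": 32, "cpu_util": 0.20},
    {"name": "bm3", "free_vcpus": 4,  "cpu_util": 0.05},
]
print(pick_host(hosts, need_vcpus=8)["name"])  # bm2
```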