Hello, everyone. Welcome to our session on JFE Steel's private cloud. My name is Takuma Kawai from IBM Japan, and I am a chief architect for the JFE Steel Corporation account team. Today, I am happy to announce that one of the largest enterprises in Japan, JFE Steel Corporation, has just launched an OpenStack-based cloud service this April. The name is JOS Cloud. I'd like to introduce the owner of the JOS Cloud project, Mr. Kento Watanabe of JFE Steel Corporation. For starters, he will give us some insight into what's going on at his company and in its industry, and how the private cloud service will help them deal with those business challenges. OK, the name JOS Cloud stands for JFE's OpenStack Cloud, and this blue cloud is the logo of JOS Cloud. It's composed of the letters J-O-S. We held a little contest for it, and actually, I was the winner. Thank you. So please, Watanabe-san. I'm Kento Watanabe, a staff manager of the IT Innovation Leading Department at JFE Steel Corporation. I have been the owner of some of the biggest projects in our IT department, such as JOS Cloud and our virtual network implementation. JFE Steel Corporation is a Japanese integrated steel producer with major production operations in Japan. 80% of JFE Holdings' revenue comes from JFE Steel. It was formed through a merger between Kawasaki Steel and NKK in 2002, and JFE Steel became the only integrated steel producer in the corporate group. We measure the size of steel producers by how much crude steel they produce per year. JFE Steel produces about 30 million tons of crude steel per year, which is the second largest in Japan and the ninth largest in the world. We operate two major steel mills in East and West Japan, and one mid-sized steel works in Mid-Japan. Our products include steel sheet, electrical steel sheet, plate, pipe, bar, rod, stainless steel, and iron powder: almost every variation of steel products.
We have just entered the second year of our fifth mid-term business plan. Our vision is to continually reinvent ourselves as a global steel supplier that creates new value as we grow with our customers. Delivering the world's greatest technologies and services is key to achieving that vision. For the IT department, that means a competitive IT strategy for the company. We have stated our own IT strategy as shown in the middle of this chart. The important part is that our IT should pursue technology at a global level and deliver at the speed of business to create customer-oriented value. To achieve the vision, we have to perform the following three items: first, IT restructuring; second, IT leveraging; and third, IT risk management. We have already started to build advanced infrastructure, including: number one, a group cloud that can consolidate and streamline IT infrastructure throughout the JFE Group as a whole; number two, a virtual data center that enables both disaster recovery with minimum RTO and effective use of IT resources; number three, openness and standardization that support flexible architecture; and number four, an analytics infrastructure, underpinned by IoT and big data, that extends our value as an enterprise. Because of our formation and history, departments and production locations own their own IT infrastructure, management systems, and policies, such that there is no governance as an enterprise. You cannot miss the cost that derives from this inefficiency. When you look at the near future, say 10 years from now, they should be standardized under a single governance. This is why we put the biggest focus on data center consolidation. IT resources should be shared, but risks should be distributed. Therefore, we came up with a roadmap to first consolidate our resources into two locations, one in the West and the other in the East, and then integrate them into one virtual data center. JOS Cloud is the solution that delivers a major part of this roadmap.
Another key technology to actualize the roadmap is network virtualization, namely OTV and LISP from Cisco. This is a kind of software-defined networking solution in the broader sense. The two data centers are interconnected over a Layer 2 extension, with network traffic routed dynamically along the appropriate path. With this solution implemented, JOS Cloud will gain more flexibility and improve the mobility and availability of business applications. JFE Steel's goal for the fifth mid-term plan is to raise our return on sales from the current 7% up to 10%. According to our estimation, migrating existing workloads to JOS Cloud will reduce our IT costs by 28%, which contributes to achieving that goal. Now I will hand this over to Kawai-san to show you some technical considerations. Thank you. Now that everyone is aware of my client's business challenges, I would like to explain how JOS Cloud has been trying to solve those problems with technology and architecture. First, I would like to mention how important it is to relate cloud and IT services. At the beginning, I thought a cloud was just a bunch of servers running on a hypervisor. But as I became familiar with cloud, I learned I was wrong, and now I see the cloud itself as a service. By this definition, and I'm going to read the script from here, a service is a means of delivering value to customers by facilitating outcomes that customers want to achieve without the ownership of specific costs and risks. In other words, a service should enhance the customer's business assets, and it should be designed to align with business requirements. Just providing boxes of servers won't satisfy that definition. Service providers get paid and gain competitiveness in return for designing and managing services and improving their quality and cost performance. The major enablers of services are processes, people, and technology.
Service management is a means to bridge those enablers and make sure that the service always aligns with customers' business requirements. From this perspective, JOS Cloud has always been designed to be integrated with service management. I'd like to focus on its characteristics and how it works in collaboration with OpenStack. Some of you may have seen this diagram. It is called the Cloud Computing Reference Architecture, and it is a blueprint to guide IBM development teams and field practitioners in the design of public and private clouds. It was created from the collective experience of hundreds of cloud client engagements and implementations of IBM-hosted clouds. We referred to this model when designing JOS Cloud, and I believe it also aims to align business requirements and IT services. Here is the list of JFE Steel's requirements and the technical capabilities that help satisfy them. Our customer, as is usual with steel producers nowadays, requires cost optimization, speed, scalability, and business continuity. Please keep in mind that business continuity is one of the most critical aspects for a steel producer, because the steel works are the source of revenue and profit, generated every minute they operate. Once they stop, it takes a couple of months and hundreds of millions of dollars to get back to normal operation. They also want us to leverage leading-edge technology so they can become the technology leader in the industry. Virtualization is one of those technologies. It drives consolidation of physical servers and improves CPU utilization from the current 30% to 50% up to over 70%, which contributes to cost optimization. Pay-as-you-go also helps customers reduce IT costs, because they do not have to pay for resources they do not use. Skills also contribute to satisfying the requirements.
EXA Corporation, a joint venture between JFE Steel and IBM Japan, has more than 30 years of accumulated experience and knowledge of systems management at steel works, where 24/7 availability is the norm, and makes a great contribution to maintaining service quality. So we were gathering good technologies, but some more pieces needed to be filled in to connect processes, people, and technology. This is a brief overview of the data center configuration. We call this an operational model. From the contract perspective, JOS Cloud provides a choice of four platforms: Intel architecture, Power architecture, System z (or zLinux), and SoftLayer, a public cloud service provided by IBM. They are all planned to be placed under the control of our service-providing functions, including OpenStack, but currently Intel and Power are the only areas controlled by those functions. Today, I'd like to focus on those platforms, the so-called Type A of JOS Cloud. What is significant here is that these two data centers are configured as DR sites for each other. Based on a capacity estimation, we chose the IBM XIV Storage System to satisfy scalability requirements for the next five years, as well as availability and ease-of-use requirements. By the way, we expect around 500 VMs over those five years. The two XIV boxes installed at the two sites are connected through a storage synchronization link, with the remote mirror function enabled so as to send copies of VM images to the other site for customers who select business continuity options. I will talk about DR in detail later. Now I would like to discuss how cloud services are implemented on the infrastructure pictured on the previous slide, especially from an ITSM provider's point of view. This slide lists how JOS Cloud should work to deal with all the requirements that come from changes in the business environment during the next five years, the term of the outsourcing contract between JFE and IBM.
Over those five years, we expect a rapid increase in the number of distributed servers, because steel works systems running on mainframes are being re-hosted. The data centers will be consolidated, as Watanabe-san explained earlier. And we had to get ready to take advantage of public cloud services to make JOS Cloud evolve into a true hybrid cloud. To deal with those changes, we have to keep and improve our service levels with competitive pricing and operational effectiveness that contribute to the customer's business. This is a brief component model that shows the cloud-providing functionalities and the process flows among them, which handle every service request from customers smoothly. The boxes colored purple are mainly IT service management processes that standardize delivery operations. You will see the large cylinder labeled CMDB right here in the middle of the purple area. This is the single source of information that holds all the configuration items, the relationships among them, and any other related information and documents in JOS Cloud. These functions run on IBM's IT service management tool, called IBM Control Desk, or ICD. The blue area represents automation. Operation manuals and procedures are coded into business process definitions, which run on Business Process Manager, or BPM. IBM Cloud Orchestrator is a tool that bundles BPM and the IBM version of OpenStack, called IBM Cloud Manager with OpenStack, or ICM. It was formerly known as CMO2. Here's how it works. A service request is captured in ICD and stored into the CMDB. Once the change request is approved here, the parameters are passed over to automation. Then BPM kicks off the automation process, where OpenStack APIs are called, and finally a virtual machine is deployed, completing the whole service request process. OK, this is the detailed version of the previous slide. I just wanted to mention here that a lot of detailed analysis and design went into making this happen.
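The hand-off from an approved change request to the OpenStack call can be sketched roughly like this. This is an illustrative sketch, not the project's actual BPM process definition: the field names of the ICD record are invented, and only the body of a Nova "create server" request (the JSON that would be POSTed to Nova's `/v2/{tenant_id}/servers` endpoint) is assembled.

```python
# Minimal sketch of the approved-change-request-to-OpenStack hand-off.
# Hypothetical field names; in JOS Cloud this step is driven by BPM.

def build_nova_boot_request(change_request):
    """Translate an approved ICD change request into the JSON body for
    Nova's server-create API (POST /v2/{tenant_id}/servers)."""
    if change_request["status"] != "approved":
        # only approved change requests may reach the automation layer
        raise ValueError("change request not approved")
    return {
        "server": {
            "name": change_request["hostname"],
            "imageRef": change_request["image_id"],      # Glance image UUID
            "flavorRef": change_request["flavor_id"],    # instance size
            "availability_zone": change_request["az"],   # one AZ per region
        }
    }

cr = {"status": "approved", "hostname": "jos-web01",
      "image_id": "img-1234", "flavor_id": "m1.small", "az": "east-az1"}
body = build_nova_boot_request(cr)
```

The guard clause mirrors the process rule described in the talk: nothing is provisioned unless the change request has been approved in ICD first.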
Here comes the three-layered architecture mentioned in the abstract of this presentation. I'd like to spend a longer time on this chart, because this is what differentiates JOS Cloud from other private cloud services. We separated the service management layer, where the self-service portal and the configuration management database reside, from the service-providing infrastructure underneath. This is layer one, and it is designed to be the only human interface in this architecture. In the form of service requests, all the events that trigger the birth and termination of virtual instances are captured into this single database in the first place and managed through IT processes for their whole life cycles, including financial management, which controls the price table and the accounting reports that directly affect the customer's business. ICD is a tool dedicated to IT service management and the best fit for this purpose. Layer two represents automation, implemented with BPM and OpenStack. Once a change request is approved in layer one, parameters are passed over the web using REST APIs provided by both ICD and BPM. I suppose everyone is familiar with what is happening under those two layers, but I have to mention one thing about layer three, the physical infrastructure. We have to provide a variety of platforms to serve customers' needs, and both VMware and PowerVM have to be available, which adds some complexity to this architecture. This is why we took advantage of OpenStack. Without OpenStack, we would have had to develop all kinds of automation scripts that execute different commands on each platform, and we would not have completed the project within the limited time and budget. As you might have noticed, the layers are only loosely connected. This way, changes happening in one layer, such as a revision of the price table or an upgrade of storage capacity, do not directly affect the other layers, giving JOS Cloud flexibility for change.
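The point about OpenStack hiding platform differences can be illustrated with a tiny dispatch sketch. This is an invented example, not the project's code: the driver classes stand in for the Nova compute drivers that front vCenter and PowerVC, and the automation layer only ever speaks the one common interface.

```python
# Illustrative sketch of platform-agnostic provisioning: layer two calls
# one interface, and a driver hides whether the instance lands on VMware
# (via vCenter) or PowerVM (via PowerVC). Class and method names invented.

class VMwareDriver:
    def boot(self, name):
        return f"vcenter: created VM {name}"

class PowerVMDriver:
    def boot(self, name):
        return f"powervc: created LPAR {name}"

DRIVERS = {"vmware": VMwareDriver(), "powervm": PowerVMDriver()}

def provision(platform, name):
    # Adding a platform means adding a driver, not rewriting every
    # automation script in the BPM process definitions.
    return DRIVERS[platform].boot(name)
```

This is essentially what adopting the OpenStack APIs bought the project: one set of automation scripts instead of one set per hypervisor.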
It used to take around two weeks from the submission of a service request to the release of a virtual machine, but that is now cut down to three working days. Three working days has now become a key performance indicator between JFE and IBM. To summarize, no virtual instances are allowed to be changed unless the changes are raised through the portal and have appropriate approvals. This way, we can guarantee that the status of the infrastructure always aligns with the status of the business. Most important of all, we actually built it and brought it up and running, rather than just drawing a pretty picture up here. Now, let's take off from the picture talk and dig deep into the technology. We made a lot of important decisions around OpenStack. As we have seen, ICD is the trigger that makes changes to virtual machines. There were two options for making the changes happen after they are approved within ICD. The left column of the table is the option where OpenStack takes control of virtual machines, with BPM using the OpenStack APIs that drive vCenter and PowerVC. In the other option, on the right column, BPM passes control of virtual machines directly over to vCenter and PowerVC without the help of OpenStack. There are pros and cons to both options, especially around development cost and the future of the technology. Our Intel server delivery team had always relied on vCenter when managing VMware environments, because it provides management efficiency, customizability, and stability backed by its market share. It was obvious that they would support the option on the right-hand side. Prior to the development of JOS Cloud, an IBM Distinguished Engineer, who was supposed to be here with us today but unfortunately couldn't make it, persuaded that delivery team to focus more on standardization and openness for the future. And they made the decision to depend on OpenStack rather than vCenter.
They're still not totally sure they made the right decision, but so far, at least, we have been able to build the automatic provisioning system without knowing the proprietary APIs provided by VMware or PowerVM. Let's face it, there has always been a dispute about whether it is appropriate to treat servers as cattle rather than pets in an enterprise landscape. And yes, they are cattle in the sense that they can be reproduced repeatedly with the same traits, but our understanding is that our cattle require extra care, since they are the backbone of customers' business activities and should be allowed some degree of individuality, as we'll see in the next slide. This is the list of major architectural decisions. The first decision is the scope of OpenStack. Our ground rule is to use only OpenStack APIs to manage virtual instances, unless the customer's business requires a functionality that is not supported by OpenStack. We knew we had to stick to the standardization supported by OpenStack when we designed the architectural framework, but we were also very careful, especially when discussing the backup and DR configurations needed to satisfy JFE's business continuity requirements. One exception is that we had to give up Cinder snapshots and instead choose XIV remote mirror, in order to minimize service disruption during volume backup and to achieve a considerably short RPO of 10 minutes and RTO of six hours in the strictest case. The other exception is Neutron, because our network is already standardized on Cisco at the physical level, and we have plans to take advantage of Cisco solutions to build the virtual network. In practice, we had to admit that there are cases where we should narrow down the OpenStack scope to fit the purpose, instead of adopting the whole package. Whether Keystone should be shared or separated was another point of argument. As you see in the diagram, we had the option to deploy Keystones at both sites, right here.
But this option would require us to set up another BPM. We decided to share one Keystone between West and East, to reduce the number of components to be managed and so that all the security information is aggregated into one data store. Here comes another architecture overview, this time focusing on OpenStack. As you can see, to guarantee the consistency of the central database and processes, we have only one instance each of ICD, ICO, and Keystone, at the East data center, where 70% of JFE's workload runs. The West data center is considered the DR site for those service-providing functions. We tried to keep our OpenStack configuration as simple as possible to sustain management efficiency. Even though there are multiple departments and subsidiaries sharing the resources, we have only one tenant, which is secured under one Keystone domain. Since we have to deploy one of each hypervisor cluster at both sites, we had to allocate a total of four regions. Within each region, there is one availability zone. By the way, PowerVC, the Power version of vCenter, is also OpenStack-based software, so we had to install six OpenStack packages in total: four ICMs and two PowerVCs. That's a lot of OpenStack, and you may love it. On our way to the final DR architecture, there were some issues to be solved. The West data center is on the left, the East data center on the right. Could you take a look at the numbers in the red circles? Number one: how are we going to synchronize the OpenStack data stores between the two sites for use in disaster recovery? It turned out we didn't have to synchronize any of them, once we decided to spread the regions over both sites. Number two: now that we had separated the regions, we had to prepare vCenters at both sites, and licenses for both sites. As we bought them, excuse us, but the costs are to be recovered from Watanabe-san. Number three, the most important issue: how are we going to copy and recover the virtual instances?
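The layout just described, one Keystone domain and tenant over four regions with one availability zone each, can be written down as a small data model. All region and zone names here are invented for illustration; the point is the shape of the catalog, not the actual identifiers.

```python
# Hypothetical sketch of the JOS Cloud region layout: one Keystone
# domain and one tenant, four regions (one per hypervisor cluster per
# site), and exactly one availability zone per region. Names invented.

CATALOG = {
    "domain": "jos",
    "tenant": "jfe-steel",
    "regions": {
        "east-vmware": {"az": ["east-vmware-az1"], "hypervisor": "vmware"},
        "east-power":  {"az": ["east-power-az1"],  "hypervisor": "powervm"},
        "west-vmware": {"az": ["west-vmware-az1"], "hypervisor": "vmware"},
        "west-power":  {"az": ["west-power-az1"],  "hypervisor": "powervm"},
    },
}

def regions_for(hypervisor):
    """All regions hosting a given hypervisor type, across both sites."""
    return sorted(r for r, cfg in CATALOG["regions"].items()
                  if cfg["hypervisor"] == hypervisor)
```

Spreading the regions over both sites is what removed the need to synchronize the OpenStack data stores: each site's regions own their own state, and only Keystone is shared.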
I will talk about this on the next slide. Number four: how can ICD and the CMDB be recovered? We leveraged the DB2 HADR configuration to replicate all the changes made in the primary database to the DR site. And number five: how can the BPM applications be recovered? The product provides an export function for process definitions, so they can be restored at the DR site. This is the detailed explanation of issue number three from the previous slide. Volumes are synchronized between the XIVs at both sites for both the VMware region and the Power region, but there are differences between VMware and Power in how the virtual machines are recovered using OpenStack. In the case of VMware, each instance is allocated on a VMFS that contains multiple VMDKs, representing one system volume, mapped to Nova, and data volumes, mapped to Cinder. Each VMFS is assigned one physical volume on the XIV. When a disaster occurs, first, vCenter locates the backup VMDK and registers it under its control at the DR site. Second, that volume can be registered as a Glance image and deployed within the region. Then empty Cinder volumes are mapped to that system, and the data can be restored from the backup VMDKs. Because this is a complicated procedure and takes longer than the expected RTO, we decided to use only vCenter, instead of OpenStack, when recovering from disasters. As for Power, PowerVC can recognize the synchronized volumes as Nova and Cinder volumes, and ICM will see them as new instances at the DR site. We tested those procedures during development and validated that systems can actually be restored in this manner. In conclusion, let's recall JFE's requirements and our technical solutions. We had only technologies at the beginning, but now we have automation, standardization, and OpenStack, which connect processes, people, and technology.
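The VMware-side recovery sequence above can be modeled as ordered state transitions on the DR site. This is a toy walk-through with invented names, not project code; each step merely stands in for the corresponding vCenter, Glance, or Cinder operation.

```python
# Toy model of the VMware-region DR recovery sequence described above.
# Each step is a stand-in for a real vCenter / Glance / Cinder action.

def recover_vmware_instance(dr_site, vmdk_set):
    steps = []
    # 1. vCenter registers the mirrored backup VMDKs at the DR site.
    dr_site["registered_vmdks"].append(vmdk_set)
    steps.append("register-vmdk")
    # 2. The system volume is registered as a Glance image and deployed.
    dr_site["glance_images"].append(vmdk_set["system_volume"])
    steps.append("register-glance-image")
    # 3. Empty Cinder volumes are attached to the new instance, then data
    #    is restored into them from the backup data VMDKs.
    dr_site["cinder_volumes"].extend(vmdk_set["data_volumes"])
    steps.append("restore-data-volumes")
    return steps

dr = {"registered_vmdks": [], "glance_images": [], "cinder_volumes": []}
order = recover_vmware_instance(dr, {"system_volume": "sys01",
                                     "data_volumes": ["data01", "data02"]})
```

The ordering is the point: the Glance registration and Cinder restore depend on the VMDK having been registered first, which is part of why the chain was judged too slow for the RTO and recovery was handed to vCenter alone.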
With regard to skills, because we have gone through this tough development project for about a year, EXA Corporation has gained a deep understanding of and capability around OpenStack, and our partnership has become stronger than ever. Let's wrap it up. What is great about JOS Cloud? In summary, it is a solution that combines everything JFE Steel requires from IT infrastructure services. We have long been delivering IT services to JFE Steel, and what we do has not changed much over time, and will remain the same in years to come. But in the course of developing JOS Cloud, when and how we deliver those services has changed to satisfy the customer's business requirements. Capacity planning, hardware procurement, kitting and racking, design and installation, and systems management: in the past, we did those tasks at the individual project level. With JOS Cloud, we prepare the resources and tools ahead of time to be shared across projects. This enables us to shorten the lead time and lower the cost overhead burden on each project. That is the first great thing about JOS Cloud. The second is that all the scattered existing processes are connected within JOS Cloud. For example, user account registration used to be processed on paper, but we replaced it with ICD's electronic workflow. There are no more manual checklists for the pre-release routines; instead, we set those configurations into our virtual machine templates. Even the customer becomes part of the process by triggering the service requests, such that all the information is stored in the CMDB from the start and easily tracked through its life cycle. Information stored in the CMDB can be leveraged as a big data source, which we can analyze to detect signs of a service level breach and improve our service quality proactively. The third and last thing is that we eliminated every single manual operation that needed to be automated to achieve the KPI: release within three working days.
Each factor works in orchestration with the value-added properties underneath, such as business continuity and pay-as-you-go. JOS Cloud aligns IT services with the businesses of JFE Steel Corporation. Thank you. Any questions? If there are no questions, thank you very much for your time.