Hello everyone. I will introduce UOS and its OpenStack and Hadoop big data solutions. Today we will mainly talk about the integration of OpenStack and Hadoop. First, what is UOS? UOS is developed on top of the OpenStack core projects to provide infrastructure (IaaS) services. It also integrates with many PaaS platforms to provide big data services and container services. UOS is a unified platform that enterprise users can leverage to build applications that are fast, stable, and reliable. Here is some background on the design of UOS. With public cloud services like AWS, customers just need to focus on their applications and offload all infrastructure considerations to their service providers. Customers only need to choose a provider and trust its IT infrastructure and evolution roadmap. Thanks to fast product iterations, customers can continuously pick up new features and benefit from them without any modifications in their application layer. Over the last ten years, public cloud services have become more and more mature. However, it is not the same story in private cloud. The evolution of private cloud has been much slower than public cloud, both in new technology innovation and in customer acceptance. One important reason is that in a public cloud platform, all services come from one provider, so the interactions and integrations between services are seamless, and users find the platform easy to learn and use. In current private clouds, however, different providers offer different features, and even the same feature is used in different ways, so it is hard for users to know all the products and choose the best one. Also, a capable private cloud platform needs to integrate at least tens of upstream or self-designed sub-modules, which runs into the same problems mentioned above.
For example, we may integrate storage services such as Ceph clusters, and infrastructure core services such as the compute service (the Nova project), the network service (the Neutron project), and so on. On top of these IaaS platforms, we may also integrate many PaaS modules such as Hadoop, Cloud Foundry, and Kubernetes for container services. UOS is an enterprise product that integrates the most common user services with unified deployment, unified admin operations, and friendly user interfaces. So why do we need UOS? First of all, with its friendly graphical user interface, we can achieve a public cloud user experience on a private cloud. Also, based on the integrated UOS DevOps tools, it is easy to scale services up and down, and to upgrade software. Based on a high-availability architecture, we provide stable and reliable IaaS platforms. UOS is also designed purely on open source, so there is no vendor lock-in. How do our customers deploy their applications? Applications are usually deployed in several ways. Some applications are deployed on bare metal, because they may be hard to deploy on, or unsuited to, virtual machines. Other users want to deploy their applications on virtual machines in OpenStack or VMware environments. For stateless services, users want to deploy into container clusters such as Kubernetes, and for data processing workloads, users may want to build Hadoop clusters. Based on these application use cases, we want to integrate Hadoop clusters into OpenStack. So what is the current situation for integrating Hadoop with OpenStack? The OpenStack project for this is called Sahara, which integrates Hadoop clusters into OpenStack.
Sahara uses Heat to orchestrate the Hadoop clusters, uses data sources such as Swift, and uses vendor plugins to provision the Hadoop clusters. Based on Heat orchestration templates, we can guide OpenStack to build the specified VMs for the data processing workloads. So how do we provide Hadoop data processing services in UOS? Our solution is to integrate the Hadoop cluster on UOS as a PaaS platform. All the infrastructure services are managed by OpenStack, and the PaaS services are deployed on top of it, so all infrastructure services can be quickly provisioned and configured. For different applications, we can provision virtual machines with different flavors to satisfy them. The base of the stack is the IaaS environment, where we build an OpenStack cluster; on top of these IaaS services, we can deploy Hadoop clusters, Kubernetes clusters, and other PaaS services. This is one of the suggested deployment racks in UOS. At the top of the rack we use two switches: one 1-gigabit switch and one 10-gigabit switch. The 1-gigabit switch is for management, and the 10-gigabit switch is used to transfer data. There is one operations node, which integrates with the UOS operation tools to operate the whole cluster, and three control nodes, which run the OpenStack API services and make those API services highly available. Then there are several general-purpose computing nodes and several big data computing nodes. The storage is built on the Ceph OSD services. These are the network properties. NIC 1 and NIC 2 are connected to the 1-gigabit switch: NIC 1 carries the provisioning network used to install the operating systems, and NIC 2 carries the management network. NIC 3 and NIC 4 are bonded to provide the tenant VM network, the storage cluster network, the storage management network, and the external network.
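As a rough sketch of how Sahara provisioning is driven (the plugin name, Hadoop version, flavor, and pool names here are illustrative assumptions, not values from the talk), a worker node group can be described in JSON and passed to the `openstack dataprocessing node group template create --json` command, with cluster templates and clusters built on top of it the same way:

```json
{
  "name": "hadoop-worker",
  "plugin_name": "vanilla",
  "hadoop_version": "2.7.1",
  "flavor_id": "big-data-flavor",
  "node_processes": ["datanode", "nodemanager"],
  "floating_ip_pool": "public"
}
```

Sahara's vendor plugins (vanilla, CDH, and so on) then turn these templates into Heat stacks that launch and configure the VMs.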
Some problems may exist when setting up Hadoop clusters on OpenStack IaaS platforms. Our customers always worry about the performance of running Hadoop on an IaaS platform. Also, our users tend to deploy and use Hadoop clusters in the traditional way: enterprise users are familiar with deploying Hadoop clusters on bare metal. The drawback is that the enterprise then runs many separate clusters, one OpenStack cluster, one Hadoop cluster, and one Kubernetes cluster, which is not good for operations and management. Users also consider how to leverage the Hadoop features. For example, Hadoop can schedule compute tasks to the worker nodes where the data is located; this is data locality. They may also retrieve the raw data from, and store the final results to, OpenStack storage, while intermediate results are better stored locally. Based on these considerations, our general-purpose computing nodes serve general VMs: both the root disk and the data disk are backed by Ceph RBDs. On the big data computing nodes, the Hadoop VMs are different from the general-purpose VMs: the root disk is still backed by Ceph volumes, but the data disks directly use raw devices on the local nodes, which achieves the highest performance. This is one of our tests of Hadoop on OpenStack versus Hadoop on AWS. The task is to run TeraSort on 10 gigabytes of data. We built the AWS EMR cluster with four VMs of an xlarge instance type, and we built the Sahara VM flavors with exactly the same configuration. We can see that for the sorting task, AWS EMR takes more than four minutes, but our Sahara Hadoop service takes just one minute and 36 seconds. So these are our conclusions: Hadoop on OpenStack really does have a lot of benefits.
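Taking the benchmark numbers above at face value ("more than four minutes" on EMR is assumed here to be at least 240 seconds; the Sahara run took 1 minute 36 seconds), the speedup works out to roughly 2.5x. A quick sketch of the arithmetic:

```python
# TeraSort benchmark times from the talk (10 GB sort).
# 240 s is a lower-bound assumption for "more than four minutes" on AWS EMR.
emr_seconds = 4 * 60          # >= 240 s on AWS EMR
sahara_seconds = 1 * 60 + 36  # 96 s on Sahara / OpenStack

# Ratio of the two runtimes: how many times faster the Sahara run was.
speedup = emr_seconds / sahara_seconds
print(f"Sahara is at least {speedup:.2f}x faster")
```

Since 240 s is only a lower bound for the EMR time, the real speedup is at least 2.5x.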
The first benefit is high performance: we can achieve high performance together with efficient scheduling. An enterprise may have different departments, and each department can quickly provision its own Hadoop cluster. We also get quick deployment and outstanding scalability: if the Hadoop nodes are not enough, we can scale up the Hadoop VM nodes very quickly. We also get loose coupling between applications and software, and we can leverage the benefits of IaaS. In fact, we need only one OpenStack cluster, and the administrators just manage that one cluster; all the other clusters, like the Hadoop cluster and the Kubernetes cluster, can be built on top of the OpenStack IaaS platform to provide the other PaaS services. These are our solutions for integrating Hadoop with OpenStack. Thank you all, that is everything I wanted to present today. Thanks.