Good morning, everyone, and welcome to this session. How has your summit been so far? How was the lunch, and the party on Tuesday? Today my talk will walk you through how to use the OpenStack Valence project to compose hardware resources on the fly. This is a joint project between Intel and 99 Cloud, together with many other vendors who are working on the Valence project. First, I will introduce today's data center challenges: data is increasing dramatically, and our data centers face many challenges in handling and processing it. Second, I will introduce Intel Rack Scale Design (RSD) technology. Third, I will talk about the Valence project, which was announced at the last summit in Barcelona. Fourth, I will cover typical use cases and the kinds of problems Valence can help us address. Finally, I will show you some demos. Digital transformation is driving data center scale-out. According to surveys, by 2020 there will be 50 billion connected devices, so data centers must have the capacity to process all of that data. New kinds of business will grow up around this, including AI and IoT, and industry will have to use data centers to analyze the data. That means data centers cannot rely on traditional technology to process it: we will require more flexibility, lower cost, and greater agility to put AI to work in our businesses. For example, the financial industry runs very high-frequency trading and needs real-time fraud detection. In a traditional data center this is done as an offline process, with results available in minutes or even hours, but in the future we would like to detect fraud within seconds.
From a life sciences perspective, genomic research would like to analyze people's gene data in the data center to help detect cancer or other patterns in the genomic data. Governments also have a lot of data in their data centers and do not know how to use it; by having more agility in the data center, we can give governments capabilities that help improve our daily lives. So with the growth of cloud and AI analysis requirements, the data center has to transform into a more agile data center, and open solutions will help us accelerate the pace of innovation. As I mentioned, businesses need to reduce operational and capital expenses, deliver new services in minutes or seconds, optimize the data center according to telemetry data, perform real-time analysis (like the banks doing fraud detection), and support applications whose workloads need more agility. Let me give an example. Some applications need to scale out or shrink within minutes, such as online commerce applications. When they first launch, they do not know what the demand will be. Maybe within a few hours a lot of people become interested, log in, and buy things on the platform, so demand increases hugely, and they need the infrastructure to scale out very quickly, maybe in seconds, to meet it. If your infrastructure fails to do that, you lose customers. And when we scale out the infrastructure, we cannot interrupt our services. Current data center technology can scale out, but not smoothly: we suffer downtime of minutes or even hours to complete the scale-out.
But in the future, our data centers must be able to scale out smoothly, without interruption, as we add more compute, storage, or network resources. So data centers are facing significant changes as they try to meet the demands of the explosion of data, and continuous connectivity to the platform, for real-time mobility, flexibility, and enterprise management, is very important. Current market trends indicate that servers still run at very low utilization; for example, in the Chinese IT industry, CPU utilization is normally below 50%. With data increasing so quickly, how can we reduce TCO by increasing utilization, so that we can avoid more capital expense in our investment? So how do we resolve these data center challenges? In the keynote, Mark gave us a signal about the future direction: composable infrastructure and cloud-native applications. I believe you have all heard of composable infrastructure; I don't know if you all understand it the same way, but from my perspective it covers both software and hardware. From the software perspective, it means OpenStack projects must support composable capabilities: you can take Ironic or Neutron, run them standalone alongside other open source software, and compose them together to fulfill your requirements. That is composability from the software perspective. What about the hardware perspective? Right now, when we scale our infrastructure, we have to purchase more servers, more storage, and more network devices and put them into the data center, and we need workers to set up the devices, connect all the cables, and test that the power is okay. If we had a platform where we could just call an API and compose all those hardware resources together, that would be great, right?
From my perspective, the future infrastructure, both software and hardware, is all composable. Once we have composable infrastructure, we can run our cloud-native applications on top of it. Eventually our applications will achieve real agility and scale out smoothly, without interruption, as we put more hardware resources into the data center. Intel RSD is exactly this kind of hardware technology: it treats all compute, storage, and networking as disaggregated resources, and those disaggregated resources can be composed on the fly to meet various needs in the data center or cloud environment. Disaggregation, in addition to allowing the hardware to refresh at different rates for storage, compute, and networking, supports more efficient resource utilization. Imagine a cloud that can compose hardware on the fly: it can grow or shrink to match usage very quickly and without interruption. RSD technology provides four significant values: it is flexible, manageable, economical, and open. What does flexible mean? It means you can dynamically configure and customize a system or server to meet your requirements. For example, if I just want to do normal IT hosting, maybe I only need a server with several CPUs, 100 gigabytes of memory, and 100 gigabytes of storage. But if my business grows and I need to analyze user behavior in my application, I would like to add servers with GPU devices so I can run that analysis on the GPUs. At that point, one option is to purchase more servers with GPUs, right? But with composable flexibility, if I have a GPU pool, I can just hot-attach GPUs to the existing servers and give them the capability to do AI work. That's what flexible means.
For the manageable value: with the API we can manage hardware resources very conveniently, and tackle hyperscale complexity with powerful, modern API-based software. That software also helps us discover, inventory, compose, and monitor the telemetry of the hardware. It is a unified hardware management platform that reduces our operational effort. And because of the flexible and manageable values, we can further reduce our TCO, so we save a lot of money. What's more, the API is based on the Redfish API: an open industry standard to which many companies contribute. So RSD technology will help the data center evolve, saving money, increasing utilization, and delivering better performance by optimizing the data center for the applications. Today's data centers are still built on traditional architectures, where it takes days or weeks to provision new services; especially when we launch a totally new business, we may take weeks just to purchase the hardware. With RSD technology, we can compose the hardware we need in minutes. Let's take a look at the software management foundation. At the bottom are the compute, storage, and network resources. Above them are the RSD components, which consist of two major parts: the PSME (Pooled System Management Engine), which controls the pod, the rack drawers, or sleds, and fully describes all the disaggregated components in its domain; and the RMM (Rack Management Module), which exposes the rack-level data for power and thermal. At the orchestration layer, we can integrate this technology with many cloud technologies; OpenStack is one of the typical cloud solutions.
We announced the Valence project, one of the OpenStack Big Tent projects, to talk with the pod manager through the RSD API; the pod manager in turn controls the PSME and, through it, the compute, storage, and network resources. The pod manager exposes the Redfish API, an industry-standard API, so OpenStack uses the Redfish API to talk with the pod manager. I don't know if you all know what Redfish is; can you raise your hands if you have heard of it? Great. Redfish is a DMTF (Distributed Management Task Force) standard: an API specification capable of managing multi-node servers. The goal of Redfish is to replace IPMI and the older technologies we use to control hardware and collect data such as thermal, fan, or other sensor readings; Redfish can handle all of that control and data collection. Any device implementing the Redfish API can be controlled by the OpenStack Valence project, because Valence uses the standard Redfish API to control it. It is schema-based and human-readable; I will show a demo later where you can see how we talk with RSD using the Redfish API. Now let me show you what Valence is. Valence is an OpenStack Big Tent project: a collection of software that supports consumption of RSD resources in the cloud. It provides the Valence API, a Python daemon based on the Flask framework that exposes a RESTful API. The API service communicates with the pod manager through the Redfish specification. We have also implemented a Python Valence client, which provides a CLI so you can interact with RSD from the command line. We have also implemented several plugins so that other OpenStack projects, such as Ironic and Horizon, can control RSD resources. At the end, I will show you a demo of how we can use Ironic to control RSD servers.
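To give a feel for what "schema-based and human-readable" means in practice, here is a minimal sketch of reading a Redfish composed-node resource. The JSON shown is an illustrative fragment modeled on the general shape of the Redfish/RSD node schema, not a response captured from a real pod manager, and the path is a typical Redfish convention rather than a guaranteed endpoint.

```python
import json

# Illustrative fragment of a Redfish ComposedNode resource, modeled on the
# kind of JSON an RSD pod manager returns for a composed node (assumption:
# field names follow the Redfish/RSD schema; this is not a captured response).
SAMPLE_NODE = """
{
    "@odata.id": "/redfish/v1/Nodes/1",
    "Id": "1",
    "Name": "demo-node",
    "PowerState": "On",
    "ComposedNodeState": "Assembled"
}
"""

def summarize_node(raw: str) -> str:
    """Return a one-line summary of a Redfish composed-node document."""
    node = json.loads(raw)
    return (f"{node['Name']} ({node['@odata.id']}): "
            f"power={node['PowerState']}, state={node['ComposedNodeState']}")

print(summarize_node(SAMPLE_NODE))
```

Because the payload is plain JSON following a published schema, any HTTP client can read it; this is what lets Valence, or any other tool, manage RSD hardware without vendor-specific protocols.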
Valence also supports horizontal scalability by managing multiple pod managers. This is the architecture of Valence. Valence currently has a very simple web UI; you can use it to compose a node and then use that node as a normal server. The web server talks to the API controller, and the API controller uses the RESTful interface to talk with RSD. We also support integration with deployment tools; in our case, 99 Cloud has built a deployment tool based on Kolla, and in the demo you will see how we deploy an OpenStack overcloud by using Kolla with RSD. Finally, we have plugins for Ironic: we have implemented a driver in Ironic to talk with RSD. The general workflow of Valence is as follows. The user uses the Valence client to compose a node from RSD resources; Valence returns the composed node, and the user enrolls it in Ironic. Ironic takes control of the node and manages its power, and then we can use Ironic to deploy an OS on that node. When the user is finished with the node, they release it from Ironic and then release it from RSD using the Valence client. There are three typical scenarios for using Valence. First, you can use Valence standalone with your deployment solution to provision clouds: you have a seed cloud containing Valence and your deployment tool; Valence helps you compose the nodes, and after composition you use your deployment tool to deploy clouds in your data centers. At 99 Cloud we combine Valence and Kolla, using Kolla to deploy onto the node resources returned by Valence. Second, after you finish the cloud deployment, you can integrate Valence with the Ironic inside your cloud, so that Ironic can manage the bare metal composed by Valence.
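The workflow above can be sketched as a simple lifecycle, compose, enroll, deploy, release, in that order. The sketch below is purely illustrative: the class, method, and state names are hypothetical stand-ins, not the real Valence or Ironic APIs; the point is the ordering of the steps.

```python
# Illustrative model of the Valence/Ironic node lifecycle described above.
# Names are hypothetical; real usage goes through the Valence client and
# the Ironic API, not this class.
LIFECYCLE = ["composed", "enrolled", "deployed", "released"]

class ComposedNode:
    def __init__(self, name: str):
        self.name = name
        self.state = None
        self.history = []

    def _advance(self, new_state: str):
        # Enforce the workflow ordering: each step must follow the previous one.
        expected = LIFECYCLE[len(self.history)]
        assert new_state == expected, f"expected {expected}, got {new_state}"
        self.state = new_state
        self.history.append(new_state)

    def compose(self):
        self._advance("composed")   # Valence composes the node from RSD resources

    def enroll(self):
        self._advance("enrolled")   # user enrolls the node in Ironic

    def deploy(self):
        self._advance("deployed")   # Ironic deploys an OS onto the node

    def release(self):
        self._advance("released")   # release from Ironic, then from RSD via Valence

node = ComposedNode("demo")
node.compose()
node.enroll()
node.deploy()
node.release()
print(node.history)
```

Trying to deploy before enrolling, for example, would trip the ordering check, which mirrors the real constraint that Ironic cannot manage a node Valence has not yet handed over.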
The third scenario is storage provisioning. The RSD 2.1 API supports dynamic NVMe provisioning, so we can call the RSD API through Valence to compose NVMe storage, attach it to the server we need, and then use it as bare metal in our cloud environment. So, it's demo time; I will show you some demo videos. The first one is the command-line style: you will see how we compose a node from the command line and use Kolla to deploy an overcloud in our environment. First, we source the Keystone credentials, then use the Valence client to compose a node. We delete the existing node first, and then we compose a new one. When you compose a node, you specify the memory, CPU, and storage for the node; in this case we compose a node with 4 GB of memory and 50 GB of storage, and Valence selects a suitable node for us. Then we show the Redfish API: from the Redfish endpoint you can see the node we just composed. This is the Redfish schema, and the node is using a remote disk target in RSD. Then we hand the composed node over to Ironic, so we can use Ironic and Kolla to deploy an all-in-one OpenStack on that target node. First we have to enroll the node; after enrollment, Ironic takes over the node and starts by syncing its power status. You can regard this environment as a seed cloud: it manages the bare metal just composed from Valence, and from this seed cloud we deploy. You can see a new hypervisor appear in the table, and we can launch a new instance on it. Eventually it installs CentOS on the composed node, and we pass in an init script so that when the node boots up it is installed as another OpenStack cloud. When Ironic takes over the node, it reboots the system and loads the kernel and ramdisk.
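The compose step in the demo boils down to posting a resource-requirements document to the pod manager. Below is a sketch of building such a payload for the 4 GB / 50 GB node composed above; the field names follow the general shape of the RSD composition schema, but they should be treated as illustrative rather than an exact RSD 2.1 request body.

```python
import json

def build_compose_request(name: str, memory_gib: int,
                          storage_gib: int, cores: int = 1) -> str:
    """Build an illustrative node-composition request body (JSON).

    The structure is modeled loosely on the Rack Scale Design composition
    schema (an assumption); a real request must match the exact Redfish/RSD
    schema version your pod manager implements.
    """
    body = {
        "Name": name,
        "Processors": [{"TotalCores": cores}],
        "Memory": [{"CapacityMiB": memory_gib * 1024}],
        "LocalDrives": [{"CapacityGiB": storage_gib}],
    }
    return json.dumps(body, indent=2)

# The node from the demo: 4 GB memory, 50 GB storage.
print(build_compose_request("demo-node", memory_gib=4, storage_gib=50))
```

The pod manager treats each field as a constraint and picks disaggregated resources that satisfy it, which is why the demo only specifies capacities and lets RSD select the actual hardware.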
We can fast-forward this part. After the system reboots and we get the system back from Ironic, we use Kolla to deploy it as the overcloud. After the deployment, you can see it running as the overcloud, so now we have two different OpenStack clouds. That was the command-line demo. Now I will show you another one, integrated with our GUI. Our GUI uses the Valence client to talk with RSD. You can see we have one pod manager in our environment and one compute service, and you can see their status in the dashboard. We can also add other pod managers, which enables horizontal scale-out. Sorry. You will see the offline one marked as offline in the GUI, and you can also see how many nodes you have composed in each pod manager, and whether a node has been exposed to, that is, taken over by, Ironic. We can compose a node through a wizard; this process is similar to what you just saw on the command line. You input the name of the node, then the cores, memory, and disk size. With the export action, you hand the node over to Ironic. The following process is similar to the last demo: a new bare-metal hypervisor shows up in the hypervisor tab, and you can deploy it by launching a Nova instance. Okay, that is all of the demos. Any questions? Question: Could you please clarify, is RSD a product that's available now? Is that something Intel is providing now? Yes, it is available now. Several hardware vendors are providing RSD-compatible servers, such as Ericsson, Inspur, and Huawei, as far as I know. Software vendors also have their own solutions to manage RSD. And in the OpenStack ecosystem, Valence is the project that interacts with the RSD API; you can also use the standard Redfish API to talk with RSD directly.
Question: And what are the special hardware requirements to support this, from a hardware perspective? Actually, it's a standard rack server with a hardware control plane inside. Like I just showed you, the PSME and RMM components have to be installed in your hardware to enable the RSD functions. Of course, for the NVMe or GPU provisioning functions, your hardware has to provide support such as PCIe switches and those types of devices. Any other questions? Okay, thank you for today.