There are only a couple of people here. I guess we're going to start. We only have a few people here today; hopefully more will roll in as we talk. Good morning, everybody. I'm Dali Wu, the GM of Inspur Systems. For those of you who don't know Inspur, to give you a brief overview: we are the number one server manufacturer in China and top five in the world. The company has been around for a long time, since 1945, and we have been publicly traded since 1998. We are also the number one cloud solution provider in China, and over the past two years we built 285 data centers in China. We are also the number one software brand in China and the top HPC solutions provider, and we have the capability to build over a million servers annually. We just started our manufacturing plant in Silicon Valley, in Fremont, last year, in 2015, to serve customers in the US.

For today's topic, I'm going to talk about the major rack-scale open hardware platforms available today. Five or six years ago, when the internet and e-commerce took off, it became very difficult and slow to deploy hyperscale data centers using traditional hardware architecture, so the hyperscalers went off and started innovating on their own. Facebook was the first hyperscaler to open up their designs and share them with everybody: they established the Open Compute Project (OCP). Then around 2014, Microsoft also contributed their Open Cloud Server (OCS) to the OCP community. And recently, back in March at the OCP Summit, Google also opened up their designs and contributed their 48-volt rack to the OCP community. At the same time, the same thing was happening in China: the hyperscalers there, Baidu, Alibaba, and Tencent, the BAT companies, got together and formed the Scorpio project. The Scorpio project has since been renamed the Open Data Center Committee. And around 2014, Intel also introduced the Intel Rack Scale Architecture for open rack-scale designs.

Let's take a quick look at the open rack designs. There are two major types of open racks: a 480-volt open rack and a 208-volt open rack. The 480-volt open rack is mainly for Facebook data centers, because most colos cannot support that kind of power infrastructure; the 208-volt version is more common and supported in most colos. The key feature of an OCP design is rack-level power: there are no power supplies anymore in any of the nodes. And it's vanity-free hardware: the sound cards you don't need in a server, the video cards you don't need in a server, the covers, the pretty front bezel. You don't need any of that. The pretty front bezel basically just stifled airflow, so Facebook removed all of that from the servers. It's basically bare-metal compute, raw power. Also, everything is serviced from the front, so it's very easy for the data center manager to maintain the data center. The inner width of the rack is increased to 21 inches; a regular rack is 19 inches wide, but the outer width remains the same 24 inches as a standard rack. And for the U height, a standard 1U is 1.75 inches, while the OCP OpenU is 1.89 inches, slightly taller, so you get better cooling through the rack. These are the three most common configurations for the open rack: compute-intensive, balanced, or storage-intensive.
Depending on what the hyperscale data center wants to do, they can optimize their deployments with dedicated designs. Each rack can support 12.5 kW, so in the case of open racks, some of the U spaces may not be fully utilized because of the power limitation. When there are no server nodes in an unoccupied U, they basically put in a blanking panel, literally a cardboard box that is flame-resistant.

The Microsoft Open Cloud Server design is very simple. It's 12U, and the difference is that it actually fits into a standard 19-inch rack. There are two choices of node, either a compute node or a JBOD node. It's very simple for a dedicated distributed scale-out build, so you can have a very flexible compute-to-storage ratio. It's also a cable-free architecture: there's a backplane at the back of the chassis where you plug in all the nodes and the JBODs, so it's very easy to maintain and service as well.

So where are some of these OCP deployments happening, and which customers are adopting them? Of course Facebook, and then there's Microsoft. There's also Rackspace; Rackspace deployments are usually 208-volt three-phase. And Goldman Sachs. A lot of the financial customers are jumping onto this infrastructure as well, like Fidelity. And what is so good about this type of design? First of all, the power efficiency increases by 38%. And a reduction in cost comes from maintenance as well. For example, Facebook told us that before they changed over to the rack-scale design, they needed one technician per 5,000 servers. After they changed over to the OCP rack-scale design, they only need one technician to service 25,000 servers. That's a five-times reduction in staff.

The Scorpio project is kind of the equivalent of OCP in China, and the Scorpio design is kind of a hybrid between OCP and OCS. It started in 2011 with the BAT companies, Baidu, Alibaba, and Tencent. Inspur was the manufacturer at the time that was willing to put the resources and the money into developing this new design with the BAT companies. Mass production and deployment started in December 2012. Some of the design specs for the Scorpio rack are centralized power, centralized cooling, and centralized management. In OCP you have centralized power; the Scorpio project takes it one step further to incorporate centralized cooling and centralized management as well. And there are many different types of compute and storage nodes to choose from, to customize according to the application requirements.

OK, so what are the design benefits of the Scorpio project? First of all, density. You now have the 21-inch width, same as OCP, so you can pack more into the same space. But Scorpio retains the 1U height, so you can pack even more into the rack than an open rack, because the fans are built into the back of the rack, which frees up more space in the front for nodes. Overall power consumption is reduced by 15%, deployment is 15 times faster, and total cost of ownership is reduced by 12%. The failure rate is much lower, because a lot of the unnecessary components are removed from the design. And you can control everything at the rack level.
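To make the power limitation mentioned above concrete, here is a minimal back-of-the-envelope sketch in Python. The 12.5 kW budget is the figure from the talk; the per-node wattage, node height, and usable rack height are illustrative assumptions, not numbers given here.

```python
# Rough rack power budgeting: how many nodes fit under the rack's
# power envelope, and how many U slots get blanking panels instead.
# Only the 12.5 kW budget comes from the talk; the other values are
# hypothetical examples.

RACK_POWER_BUDGET_W = 12_500   # per-rack budget quoted in the talk
USABLE_OPENU = 30              # assumed usable OpenU slots (illustrative)
NODE_POWER_W = 450             # assumed draw of one 1-OpenU compute node
NODE_HEIGHT_OPENU = 1

max_by_power = RACK_POWER_BUDGET_W // NODE_POWER_W
max_by_space = USABLE_OPENU // NODE_HEIGHT_OPENU
nodes = min(max_by_power, max_by_space)
blanking_panels = USABLE_OPENU - nodes * NODE_HEIGHT_OPENU

print(f"nodes deployed: {nodes}")              # 27, so power-limited
print(f"slots filled with panels: {blanking_panels}")  # 3 unoccupied U
```

With these assumed numbers the rack runs out of power before it runs out of space, which is exactly the situation where the flame-resistant filler panels go in.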
Some of the deployments that have happened in China include many different types of industry customers. In addition to Alibaba, Baidu, and Tencent, there are customers like 12306, the world's most complex train ticketing system, also adopting this type of standard. And then Sina, HuaSuo, and some of the telcos have adopted the same designs.

Now the SmartRack layout. The SmartRack is an ODC rack; the Scorpio rack is also called the Open Data Center rack. The front of the rack has all of the nodes and the power zone. The power bank is built into the middle of the rack, and you can configure it based on what you need, so it's a customized configuration for the power zone. For the US, it's 208-volt three-phase, and a single power bank is able to support 12.5 kW. Some customers may want a different configuration for the power bank, and we can customize that as well. In the back of the rack are all the fans. The servers themselves no longer have any power supplies or fans, which makes it much more cost-effective. Imagine you have 30 fans in the back of the rack cooling all the servers, and no more fans in the servers themselves. In the traditional architecture, you have five or six fans per server; multiply that by 40 nodes and you have over 200 fans in the rack. Now you can save cost on that, and it is also much more optimized to do it this way, because each fan zone can be customized to cool the server nodes in front of it. For example, if you have a mixed rack with some Hadoop nodes on top, you can spin up the RPM of those fans to cool those nodes; and if you have some static nodes, like cold storage nodes, on the bottom, you can spin down the RPM of those fans. So overall, it's very optimized for cooling. The rack management controller allows you to monitor all of the fans, all of the power supplies, and all the system nodes in the rack. That way it becomes a very modular design; for upgrades and maintenance, it's very, very simple.

Let's take a look at some of the designs for the server nodes in the ODC rack. The first one, in the top left corner, is a very popular configuration for Hadoop: 1U with 12 3.5-inch drives. The one on the right is a very popular configuration for HPC: dual nodes in 1U, where each node can have 6 2.5-inch drives. On the bottom, you can have a cold storage node with 18 3.5-inch drives. They can be either 3.5 or 2.5 inch, and all of them are hot-swappable, because there is a cable management arm that allows you to pull the entire system out of the rack without disconnecting power, and you can service the drives very easily, tool-free. We also offer a GPU node, which is 1U with 4 GPUs built in, very good for HPC and GPU compute. OK, and some of the hyperscalers don't want to pay so much money for hot-swappability, so we offer value nodes, which do not offer hot-swap capability but still maintain the easy-to-service components. Some of the hyperscalers, when a system fails, don't really care; they pull it out and put in another one.

For the rack configurations, you have more options than with OCP, because now you have the ability to add multiple power banks to the rack if you need more power, like the customized rack on the right-hand side. For GPU servers that consume a lot of power, we can actually customize a rack with up to four power banks, so you can have 50 kW per rack for a higher power footprint. So you can do scale-out and scale-up type implementations for both public cloud and private cloud.
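As a rough illustration of the per-zone fan control described above, here is a minimal Python sketch. The talk does not specify the rack management controller's actual interface, so the Redfish-style HTTP endpoint, URL paths, payload fields, and the zone-to-workload mapping below are all hypothetical assumptions.

```python
# Hypothetical sketch: set per-zone fan duty on a rack management
# controller (RMC) based on the kind of nodes each fan zone cools.
# The endpoint paths and payload shape are assumptions; the talk
# does not document the RMC's API.
import requests

RMC = "https://rmc.rack42.example"   # hypothetical RMC address
AUTH = ("admin", "password")         # placeholder credentials

# Illustrative mapping of fan zones to the workload in front of them.
ZONE_WORKLOADS = {1: "hadoop", 2: "hpc", 3: "cold-storage"}
DUTY_BY_WORKLOAD = {"hadoop": 80, "hpc": 90, "cold-storage": 30}  # % PWM

def set_zone_duty(zone: int, duty_percent: int) -> None:
    """Push a fan duty setpoint for one fan zone (assumed endpoint)."""
    resp = requests.patch(
        f"{RMC}/redfish/v1/Chassis/Rack/Thermal/FanZones/{zone}",
        json={"DutyCyclePercent": duty_percent},
        auth=AUTH,
        verify=False,  # lab-only; verify TLS properly in production
        timeout=5,
    )
    resp.raise_for_status()

for zone, workload in ZONE_WORKLOADS.items():
    set_zone_duty(zone, DUTY_BY_WORKLOAD[workload])
```

The point of the sketch is the design idea, not the API details: hot Hadoop zones get high duty cycles, cold storage zones get low ones, and one controller drives the whole rack.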
So when data center managers consider building private versus public cloud, there are a few things to consider. For public cloud, the most critical metric is MTTR, the Mean Time To Repair. For private cloud, you're more concerned with MTBF, the Mean Time Between Failures. And usually public cloud is a scale-out build, while for private cloud you need scale-out as well as scale-up. So how do we come up with an infrastructure that can serve both types of demand? We came up with the InCloud Rack, for scale-out as well as scale-up deployments, for public and private cloud. We worked very closely with Intel on the Rack Scale Architecture and came out with the InCloud Rack, which is really optimized with a lot of the RAS features that the enterprise cloud needs: 99.999% reliability and availability.

Taking a closer look at the InCloud Rack: in the front of the rack are the server nodes and the power shelf, and the back of the rack has the networking as well as the cooling zone and the rack management. There is a similarity to the OCS design in that it's cable-free: there's a passive backplane in the middle of the rack that you plug everything into, so there are no cables. For the server node configuration options, we offer two-socket and single-socket systems for scale-out, and four-socket as well as eight-socket systems for scale-up. And for expansion modules, if you need more PCIe resources in the system, we can add a PCIe expansion module, or if you need more JBOD storage, you can add a storage module.

Here's a configuration example at rack scale. The difference between the SmartRack and the InCloud Rack is that in the InCloud Rack you can actually have a scale-up configuration with GPU compute, or with four-socket or eight-socket E7 systems, for VM virtualization or for memory-intensive applications like SAP HANA.

Now some case studies using rack scale. Alibaba, as you know: November 11 is very similar to Cyber Monday in the US. On November 11, a single day of e-commerce trading volume on the Alibaba platform is $14.5 billion. To support this kind of huge trading activity, you need a very reliable platform, so Alibaba built it out using the SmartRack ODC architecture to support that trading volume. And Baidu is the largest search engine in China, and they also use the SmartRack architecture to power their data centers. Before, when they were using a traditional server platform, they were only able to deploy around 300 to 500 server nodes per day. After they changed over to the SmartRack architecture, they are able to deploy over 5,000 servers a day. So the deployment rate really increased a lot: it is much faster to deploy this way, and to maintain this way too. And they can save a lot more energy and reduce power consumption: the PUE is reduced from 1.8 for standard servers down to 1.3 to 1.5. So the overall savings are huge for Baidu.
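As a quick sanity check on what that PUE improvement means, here is a minimal worked sketch. The PUE figures are the ones from the talk; the 10 MW IT load is purely an illustrative assumption.

```python
# PUE = total facility power / IT equipment power.
# PUE values (1.8 before, ~1.35 after, the midpoint of the quoted
# 1.3-1.5 range) come from the talk; the 10 MW IT load is assumed.

IT_LOAD_MW = 10.0
pue_before, pue_after = 1.8, 1.35

facility_before = IT_LOAD_MW * pue_before   # 18.0 MW total draw
facility_after = IT_LOAD_MW * pue_after     # 13.5 MW total draw

saved_mw = facility_before - facility_after
print(f"facility power saved: {saved_mw:.1f} MW "
      f"({saved_mw / facility_before:.0%} of the old total)")
# -> 4.5 MW saved, 25% of the previous facility draw
```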
So now I am going to turn it over to James to talk about the InCloud OS, which is developed by Inspur to support the cloud data center build-out. InCloud OS 1.0 was introduced a couple of years ago, when we introduced the open data center rack-scale platforms, and InCloud OS 4.0 we announced back in December 2015 and formally released to the market. The InCloud OS provides the infrastructure for the cloud data center build, and Inspur has also created a community to facilitate this kind of InCloud cloud computing build-out. James is going to take over from here.

Good morning, everyone. I'm James Zhu, and I'm a director at Inspur. As Dali introduced, we are a total solution company. Dali talked about the hardware solution, and I'm going to talk about the software solution. My presentation today is about an open, secure, converged cloud data center operating system.

To make sure our products can meet future requirements, let's review the IDC predictions. According to IDC, by 2017, 80% of enterprises will commit to hybrid cloud, and by 2018, 60% of enterprises will move their IT systems to the cloud. Also by 2017, 60% of enterprises will use cloud workload-centric management, and 60% of enterprises will embrace open APIs and open source. So it means everybody here working on OpenStack has a very good future, because it matches IDC's predictions. When we design products, we always look to the future, to see whether we can meet the customers' requirements and the direction the industry is developing in.

Here are the cloud data center development stages. For us, the focus is on creating the cloud data center operating system. Our philosophy has three elements: openness, ecosystem, and service. And our operating system has these features: open, converged, smart, secure, enterprise-class, and business-driven.

The InCloud OS has four parts: InCloud Manager, InCloud Sphere, InCloud Network, and InCloud Storage. InCloud Manager is customized from OpenStack. InCloud Sphere is our own hypervisor; it's similar to KVM or Xen, but it's our own, because we have our own operating system, certified by the UNIX community.

Our InCloud operating system, as I said, is inherited from and customized from OpenStack. OpenStack is a huge system, and some customers don't need so many parts, so we customize it and strip out some parts. And if some customers have special requirements, we add modules and functions for them. We also inherited the ecosystem from OpenStack, and we build our own ecosystem as well. The InCloud operating system supports virtualization management of heterogeneous hardware, meaning the operating system supports hardware from different vendors. And we provide an open interface for our partners to write their own code.

Here are our operating system's standard interfaces. At the InCloud OpenStack layer, we provide an interface that supports all the OpenStack APIs. Our InCloud Manager provides interfaces with APIs to connect to Amazon AWS and Microsoft Azure. We also provide an interface for our partners to write modules that communicate with our operating system. And most importantly, we provide SDK tools for our partners to develop their own modules. Our InCloud operating system supports different business models, different applications, and different cloud data centers. It supports private cloud and public cloud, and it supports different hardware from different vendors.
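Because InCloud Manager is said to support the standard OpenStack APIs, standard OpenStack client tooling should be able to drive it. Here is a minimal sketch using the openstacksdk Python library; the cloud entry name, image, flavor, and network IDs are all hypothetical placeholders, and nothing here is InCloud-specific beyond that compatibility claim.

```python
# Minimal sketch: booting a VM through the standard OpenStack API,
# which InCloud Manager is said to expose. Requires
# `pip install openstacksdk`; the cloud name, image, flavor, and
# network IDs below are hypothetical placeholders.
import openstack

# Reads auth details for a cloud named "incloud" from clouds.yaml.
conn = openstack.connect(cloud="incloud")

server = conn.compute.create_server(
    name="demo-vm",
    image_id="IMAGE_UUID",                 # placeholder
    flavor_id="FLAVOR_UUID",               # placeholder
    networks=[{"uuid": "NETWORK_UUID"}],   # placeholder
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the VM is up
```

The same pattern is what a partner SDK module would sit on top of: the OpenStack-compatible layer handles resource requests, and vendor extensions hang off it.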
Here is how our operating system works. The InCloud Manager takes the requests, including business requests and resource requests, from the customer, and then dispatches the requests to the private cloud and the public cloud. The private cloud and the public cloud take the requests and gather the resources to create and manage the virtual machines. Our operating system is designed for enterprise customers, so we call it business-driven. We also use a capability called smart configuration to configure all those resources. On the left side is a traditional IT data center; on the right side is a data center that has deployed our operating system.

The InCloud operating system is a secure system. We support operation protection, audit, and system security, communication security, access control, data security, and system reinforcement. At the access layer, we support web security and API security. At the virtual platform level, we support hypervisor security and resource security.

This operating system supports disaster recovery. From the data center point of view, we support multiple copies of the data, local data backup, two-data-center disaster recovery, and also hybrid cloud disaster recovery. From the end-to-end point of view, we support active-passive and mutual active disaster recovery. Shared means a few data centers share one recovery site; mutual active means two data centers work together, and the data can be recovered mutually.

We call the InCloud operating system an enterprise-class operating system, meaning we designed it for big enterprises to use. We have verified it, for one single data center, with 5,000 physical machines, 20,000 virtual machines, and 10,000 concurrent tasks. For software-defined computing, we verified how we virtualize the resources based on the task, and we also support memory preloading. For software-defined storage, we verified 1 million IOPS, and we support PB-level capacity, and the storage system supports disaster recovery. For software-defined networking, we support customized networking, SDN. We also support software-defined connections and bandwidth: after you connect all the data centers to the network, you can use software configuration to redefine the connections, including the bandwidth. Suppose you think 10G is not enough; then you can reconfigure it to 20G or 40G. We support VXLAN flow control, and we also support virtual machine migration.

Here is how the InCloud operating system looks. On the bottom is the heterogeneous hardware. Above the hardware is virtualization. Above the virtualization is OpenStack. Above OpenStack are the services, the scheduler, and operations. The top layer is the open API, which also supports the SDK tools. On the left side is security; on the right side are monitoring, workflow, and the billing system.

Here are our key technologies. The first key technology is software-defined infrastructure. For software-defined computing, we support virtual CPU binding and virtual CPU power control. For software-defined storage, we support IP SAN and FC SAN, virtual disk locking and sharing, and virtual disk live migration. For software-defined networking, we support the standard vSwitch and the distributed vSwitch, plus layer 2 to layer 7 load balancing. We isolate three networks: the management network, the operation network, and the data network. We also support link aggregation.
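Since InCloud Sphere is described as KVM-like, the virtual CPU binding mentioned above would typically go through libvirt on such a stack. Here is a minimal sketch using the libvirt Python bindings; the domain name "guest1" and the 8-core host layout are hypothetical, and this is generic KVM/libvirt usage, not InCloud Sphere's own interface.

```python
# Sketch of vCPU binding (pinning) via libvirt on a KVM-style host.
# Generic libvirt usage, not InCloud Sphere's actual API; the domain
# name and host CPU count are hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest1")   # hypothetical running VM

HOST_CPUS = 8

def pin_vcpu(vcpu: int, host_cpu: int) -> None:
    # cpumap is a tuple of booleans, one entry per host CPU.
    cpumap = tuple(i == host_cpu for i in range(HOST_CPUS))
    dom.pinVcpu(vcpu, cpumap)

# Pin vCPU 0 to host core 2 and vCPU 1 to host core 3, for example
# to keep the guest on a single NUMA node.
pin_vcpu(0, 2)
pin_vcpu(1, 3)

print(dom.vcpus()[1])  # per-vCPU pinning maps after the change
conn.close()
```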
For software-defined security, we support mandatory access control of virtual resources, virtual machine isolation, and communication control between virtual machines. We also provide security protection for the image files.

The second key technology we implemented is called elastic computing. For heterogeneous virtualization, we support large virtual machines and also lightweight containers, plus the management of heterogeneous virtualization. For microservices, we support HAProxy-based load balancing and failover, Pacemaker-based multi-node management, multicast-based hybrid communication, and distributed data management. For large-scale resource management, we support hybrid cloud cross-domain storage and multi-data-center management.

The third key technology is intelligent management. For self-adaptive intelligent operation, we support big data analytics, and also accurate prediction and auto-provisioning of IT resources. For large-scale distributed monitoring, we support dynamic scaling of components, a pluggable adapter framework, and intelligent warning. For context-aware service orchestration and scheduling, we support virtual application templates and both business-driven and resource-driven models, and we implemented multi-objective optimization: based on your application, we will assign different computing or storage resources.

Here is how we build our own ecosystem. We inherited the ecosystem from OpenStack, and we also build our own. We started a plan whose Chinese name is Yun Tu, which means cloud map. Under this plan, we will promote three kinds of partners: development partners, service partners, and solution partners. The final goal is to provide an open, secure, and complete solution for the customer.

Oh, yes. Yes. What is it for? It's similar to OpenStack Nova, and the dashboards are similar. We have all of those demos at our booth, and if you want to know the details, you can come to our booth and we can show you. Yes. When you come to our booth, we can provide the slides to you.

Here is our ecosystem's current status. We have built three R&D labs, seven certification centers, and 31 service centers. We also built an InCloud store, just like an Apple store. And here is our market performance. After we released InCloud OS 4.0, within three months we got 2,000 subscribers, and we installed 3,000 virtual switches and 10,000 virtual CPUs. We created two national standards and filed 200 patents, got 10 big customers, and received two awards for Chinese cloud operating systems and security systems. We got 19 big partners to join the ecosystem, and InCloud OS is certified against the Chinese national standard.

Here is a case study. A division of China Mobile called Hunan Mobile has deployed our system. Their situation is extremely complicated. They have 40 million subscribers, 11 large supporting systems, 50 non-x86 servers, 1,000 x86 servers, and 10,000 virtual machines. They have three kinds of virtualization software, six data centers, and five different kinds of business models. So their business models, data centers, and machines are all very complicated, and we use this case study to prove that our InCloud system can support this very complicated business.
It proves it's very easy to integrate with their supporting systems, and it also proves we can support non-x86 servers and x86 servers together, no problem. And we also support unified management: one management interface can manage all six data centers together. From the efficiency point of view, operation and management efficiency increased by 50% to 70%, and operating cost was reduced by 30%. That's my presentation. If you have questions, Dali and Mia can answer them. And you are also very welcome to visit our booth downstairs. Any questions?