Hello, good afternoon. My name is Aaron, from the embedded technology team. Today I'm going to present our idea and proof of concept for running Kubernetes on a microserver architecture. We have been using microservers in our Ceph cluster appliance for many years, and we came up with an idea: why not use this kind of server architecture for microservices? We found a limitation on Kubernetes nodes: no matter how powerful your machine is, by default you can only run a maximum of 110 pods per node. And because microservices don't need very powerful compute per service, there is no need to run them on a big server instead of an ARM-based microserver.

So first of all, let me introduce the architecture of the microserver. In one 1U chassis we have eight microservers, each a 64-bit ARM Cortex-A72 SoC with dual or quad cores. Every microserver in the chassis is independent of the others, and they connect through a built-in switch. Currently we use this hardware for Ceph storage, but with this architecture we think there is another advantage if we deploy Kubernetes with containers on such a server. The chassis also has dual switches inside, providing up to four 10 Gb/s network links.

With the limit of 110 pods per node, we are comfortable deploying 100 pods per node. So in total, in a single 1U microserver chassis, you can deploy up to 800 pods. That is amazing density for Kubernetes microservices. Besides the density, the chassis consumes only 105 watts for those 800 pods. And because each server module is a single SoC, there are none of the NUMA issues caused by a single node with multiple sockets.

Because of the time limitation, we cannot do a live demo, but we did a demo in-house and will use a screen capture to show you how we can do it. First of all, we deploy one microserver as the controller and the other seven as Kubernetes worker nodes.
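The density figures above work out as follows (a quick check using the 8-node, 100-pods-per-node, 105 W numbers from the talk):

```shell
# 8 microservers per 1U chassis, 100 pods scheduled per node
echo $(( 8 * 100 ))
# chassis power (105 W) divided across those 800 pods
awk 'BEGIN { printf "%.3f W per pod\n", 105 / 800 }'
```

That is roughly 0.13 W of chassis power per pod at full deployment.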
On every worker node we can run about 100 pod replicas. You can see the first node is the master and the other seven are the worker nodes for pods. We deploy 700 nginx replicas across all of the nodes, with the built-in load balancer distributing all of the service requests coming from outside. In this example, after about ten minutes, 600 pods are up and running with nginx services.

Then we use Apache Bench for a small stress test: 20 concurrent connections and 10,000 connections in total. It runs quite smoothly, distributed across all seven nodes in the single chassis. This is the output of Apache Bench: you can see the response times and how many connection requests completed. All of the requests are balanced across all of the microservers.

But we can also put an even higher stress on these seven nodes inside the 1U chassis: we increase the concurrency to hundreds of connections and test millions of requests. During the test, we switch to the Kubernetes dashboard and see all of the services up and running, and we can also check how the resources are utilized. You can see the processor is only 11% used on this node, which is one of the seven, and this CPU is dual core; in the future we will upgrade it to quad core so it can serve even more. Memory utilization is only 3.5%, which is very low for this stress test, and the pod allocation is not yet at the full capacity of 110. So in one small microserver chassis, such a test deploys and runs quite successfully.

Our idea is also not only to run Kubernetes, because we have a Ceph cluster on another chassis of the same kind of microserver. So we can integrate it with Ceph storage as the back end, and finally manage this whole infrastructure with one management interface. We call it container-converged infrastructure.
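A deployment like the one in this demo could be expressed roughly as the following manifest (a minimal sketch; the names, image, and NodePort service type are my assumptions, not taken from the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 700              # ~100 pods on each of the 7 worker nodes
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx         # hypothetical tag; an arm64-compatible image is required
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort             # exposes the pods so external requests are load-balanced
  selector:
    app: nginx
  ports:
  - port: 80
```

The small stress test from the demo would then look something like `ab -c 20 -n 10000 http://<node-ip>:<node-port>/`, using the 20-concurrent-connection, 10,000-request figures mentioned in the talk.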
For the back end, we use a Ceph configuration with 20 OSDs and three monitors. This YAML file is for the Apache test: we use Apache httpd to test the service, with the three Ceph monitors and CephFS as the back-end storage. We also wrote a small loop that connects to the Kubernetes cluster and accesses these 100 Apache HTTP servers, which use CephFS as their back-end storage. You can see the service responses come from different pods, because the load balancer distributes the load.

So this is a proof of concept at the beginning. To finally become a real product, there are still several things to do. First of all, we will integrate Kubernetes into a single portal with our current Ceph management user interface, to provide several features that the Kubernetes dashboard cannot provide, such as creating a standalone master or a high-availability controller, and adding or removing Kubernetes nodes. Besides that, we also plan to provide role-based access control for this kind of microserver architecture. And because it is ARM64, not all of the popular software is supported, so we will provide compatible Docker images for popular languages and databases, so that it can finally become a container-converged product. Users can then adopt this kind of appliance very easily, without starting from scratch.

We have a booth over in the marketplace. After this talk, if you have some bright ideas to discuss with us, you are welcome to visit our booth just over in that corner. Thank you.
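As a footnote to the CephFS setup described above: the httpd pods can mount CephFS through Kubernetes' in-tree `cephfs` volume type. This is a minimal sketch; the monitor addresses, secret name, and mount path are placeholders I made up, not the demo's actual values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 100
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd           # hypothetical; must be an arm64-compatible image
        volumeMounts:
        - name: web-content
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: web-content
        cephfs:
          monitors:            # the three Ceph monitors mentioned in the talk
          - 10.0.0.1:6789      # placeholder addresses
          - 10.0.0.2:6789
          - 10.0.0.3:6789
          user: admin
          secretRef:
            name: ceph-secret  # hypothetical Secret holding the Ceph client key
```

The small access loop from the demo could then be approximated with something like `while true; do curl -s http://<service-ip>/; done`, which shows responses coming back from different pods as the load balancer distributes the requests.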