Hello? Hello? Hi, guys. Thanks for coming to this presentation. My name is Peng, and I'm one of the maintainers and co-founders of Hyper. Co-founder Xu was supposed to present this, but I took it over from him and changed the title, so I've tried to make the best presentation I can, something interesting for you guys. So, Hypernetes: building a public container service without an infrastructure service.

Here is our agenda. I'm going to talk about how container services are built today, what the problems with that are, what Hyper is, and how we can use Hyper to build the next generation of container service. Below is a diagram of the current public cloud space. You see the platform services and the infrastructure services, but in the future, many believe that container services will become a bigger, major part of the market.

So what does it look like today? Most container services run on top of an infrastructure service. You have this infrastructure service, such as Nova, and on top of that, different tenants need to build their own pools of VMs as the isolation boundary between tenants. That introduces several problems. One is that you need to pre-create a VM pool, which means you need to do a lot of capacity planning: what's the instance type? What's the storage setup? What's my VPC and SDN setup? That alone doesn't make you any more competitive, and you need to define a lot of things before you actually know how much you're going to use. Another problem: suppose you have a database container and a web container. Maybe the database container requires more memory and more disk, but the web container asks for more CPU. So you probably set up different instance types for different containers, and then you already know the scheduling result before you actually schedule anything. So the question is: why do another round of scheduling if you already know where your containers will go?
That's the fundamental problem. And the third problem is that it results in a low utilization rate for your resources. If you do scheduling within a pool, that means there must be some idle resources sitting there, whatever your algorithm is, no matter how good it is.

Another major problem is that you have two layers to manage. One is the infrastructure: you still have the infrastructure service to manage, the VPC, the SDN, the storage, the VM instances. And you have the application, the containers, to manage: you need to schedule them, orchestrate them, do service discovery, and whatever else you need to do. Basically, this just replaces VM configuration management with cluster management. You need to manage the scheduler: Mesos, Kubernetes, Swarm, whatever. And you need to nest your container SDN, say libnetwork, inside your Neutron network: you've got the Neutron VPC out there, and then you run a libnetwork container SDN inside the Neutron VPC. This double layer makes things really hard to manage and really hard to debug. And also think about Ceph and the RBD stuff. AWS already has EBS volumes, so how do you do persistent storage for your containers? You use Flocker or some other persistent storage layer on top of your EBS volumes. It's still a double layer of complexity.

So what's Hyper? We built Hyper as an open-source alternative to the Docker runtime. It's a hypervisor-agnostic Docker runtime that allows you to create a VM from a Docker image in sub-second time. Technically, it's hypervisor plus Docker image: it allows you to run any Docker image you see here. Right now we support KVM, Xen, and VirtualBox, and we are working on VMware integration, so probably by the end of this year you can use Hyper to run Docker images on your VMware host. The screenshot shows how you can launch a Docker image with Hyper.
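The flow on that screenshot can be sketched as a short session. This is illustrative only: it assumes a host with Hyper installed, and the image name is a placeholder, not from the slide.

```shell
# Pull a Docker image the usual way...
docker pull nginx

# ...then launch it with Hyper instead of Docker. Rather than a
# shared-kernel Linux container, this boots a minimal VM (KVM, Xen,
# or VirtualBox underneath) in a few hundred milliseconds.
hyper run -d nginx
```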
So you do a docker pull and then a hyper run. But instead of launching a Linux container, you launch a VM instance in something like 200 to 300 milliseconds, with the isolation of a hypervisor: hardware-enforced isolation. Below are test results from two machines. On a typical server setup, it takes 350 milliseconds for a pod of images to launch. And on a lower configuration, an ultra-low-voltage i3 CPU, a machine about this big, we can still achieve half a second to launch a VM.

How does it work? A Hyper VM is still a VM. The white box is your physical host, and you put the Docker images on a physical drive, that's the blue box. The grey box is the VM instance. Inside the VM instance there is no guest OS; there's only a minimal kernel. This minimal kernel is not something like CoreOS, which is still a full OS. The kernel boots with something called hyperstart, a tiny init service. Hyperstart loads the Docker images from your physical host into the VM instance and launches the applications from there, using a concept we call the pod, which is a Kubernetes-native concept. We separate the images with mount namespaces, so the different Docker images won't see each other's root filesystem, to avoid potential conflicts.

By combining the hypervisor and the Docker image, we achieve this result. For native containers, isolation is weak: you share a kernel, and whatever kernel vulnerability there is, you might be exposed to the exploit. A VM, on the other hand, is very strong: it has an independent kernel, and Hyper lets you keep that independent kernel. You can actually bring your own kernel. On portability: Docker is portable, you can run a Docker image on any Linux distro. A VM, on the other hand, is not really portable: you cannot use a KVM image on VMware.
But because Hyper supports multiple hypervisors while using the Docker image, you can basically run Hyper on any hypervisor with the same Docker image. It's truly portable. On boot performance: you can create a Linux container in something like 100 to 200 milliseconds. A VM instance, on the other hand, takes seconds or minutes, like one minute or two minutes. Hyper lets you achieve 200 to 300 milliseconds, which is really close to Linux containers. On image size: a Docker image is tens of megabytes, maybe hundreds of megabytes, that's all, while a full-blown VM image is gigabytes, even tens of gigabytes. Hyper uses the Docker image, so it's still megabytes.

On immutability: a Linux container launched from a Docker image is immutable, but a full-blown VM suffers from configuration drift. You have to use Chef and Puppet or those kinds of tools to manage the configuration in the VM, and even if you use CoreOS, there may be some configuration drift over time. With Hyper, by removing the guest OS, we achieve the same immutability as Docker containers. Compatible means whether you can reuse your existing toolchain to manage the runtime. Docker containers are not really compatible: you have to build your own management tools for containers. Hyper lets you reuse some of your hypervisor management tools, so it sits in between the traditional VM and Docker. And because we use hypervisor technology underneath, it's relatively more mature than Linux containers right now. Bring your own kernel means that in a public multi-tenant environment, you have to allow different tenants to pick their own kernel versions and kernel modules. With Linux containers you cannot allow that, because all the containers from different tenants share the same kernel version and kernel modules.
Hyper, though, allows different tenants to run their own kernel modules and kernel versions. That's a major requirement for a public environment. And for return on investment: if you're using Linux containers, you probably need to ask, where does my existing virtual infrastructure go? Where does my existing OpenStack deployment go? Should I throw it away, or build another layer on top of it? With Hyper, because it uses the same Cinder and Neutron technologies and the same hypervisors, you just plug Hyper into your OpenStack deployment and turn it into a container service. So you achieve a better return on investment.

So that's what the current container services look like. With Hyper, I want to propose a new way of building a container service. Instead of letting the users do the orchestration on their own, we can move the orchestration engine into the cloud, into the base layer, to replace something like Nova. Imagine you use Kubernetes or Mesos as a replacement for Nova, with Hyper as the runtime. Then you can achieve this kind of deployment: the infrastructure is the application itself. There are not two layers, only one layer.

And this is what we built, called Hypernetes. It's a secure, multi-tenant Kubernetes distro. We still use Cinder for the storage and Neutron for the VPC as the SDN solution. But we don't use Linux containers; we use the Hyper VM as the runtime, for both isolation and ease of integration. And on top of that, we use Kubernetes as the orchestration engine instead of Nova, because Nova is designed more for long-running, stable workloads rather than fast-provisioning Docker workloads. That leaves you a really well-isolated, public, multi-tenant environment for developers. So next, I want to ask my colleague Xu to give you a demo of the Hypernetes walkthrough. I will show you how it works.
Different from upstream Kubernetes, we are a multi-tenant system, so there are three users in the system. The red tab is the administrator, the purple one is for our demo, and there is another one, the yellow-green one. For the convenience of the demo, I have pre-created some OpenStack projects, Neutron networks, and so on. Now, as the demo user, a normal user, he does not have the authorization to check all the projects. He can only see the networks that belong to him, and only himself as the tenant; no other tenant can be seen.

Inside this, we can launch some Kubernetes applications. I will do the following steps. First, I will create a Kubernetes namespace; it's the Kubernetes concept for resource isolation. Then I will create a pod and a replication controller inside the namespace, and create a service to access all the pods. Let's run it and see what happens. Yes, after the creation, we can see that one namespace is created and the first pod is created. After that, we use the replication controller to replicate it to three pods. Now you can see there are three pods running, only six seconds and three seconds old; they launched very, very fast. At the end, we created the service, and it shows that it has a public IP address. Where is my mouse? We can try to access it. Yes, it works, and it actually has three pods behind it.

Then I can show you how it works with Cinder. First, I will show you the existing Cinder volume, then create a pod with the volume, and then show you the volume. The value at the top is the volume ID, and we create the pod with the volume ID here; this is the volume ID we already created. And then, if we list again, we have one more pod.
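The demo steps just shown, the namespace, the replication controller, the service, and the Cinder-backed pod, might look roughly like the manifests below. This is a sketch, not the actual demo files: the names, images, and volume ID are placeholders.

```yaml
# Namespace: the Kubernetes unit of resource isolation
# (backed by a Neutron network in Hypernetes).
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns
---
# Replication controller keeping three copies of the pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
  namespace: demo-ns
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
# Service exposing the pods behind one IP address.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: demo-ns
spec:
  selector:
    app: web
  ports:
  - port: 80
---
# Pod mounting a pre-created Cinder volume by its volume ID.
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
  namespace: demo-ns
spec:
  containers:
  - name: db
    image: mysql
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    cinder:
      volumeID: 00000000-0000-0000-0000-000000000000  # placeholder ID
      fsType: ext4
```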
After it runs, we can use df to see that the volume is mounted there as we expected. So it's all created. And from the other user, you can try to list the pods: there are no pods, and since it's another user, there is no such namespace. You can see this user can only go through the public network, can only see his own tenant, and cannot see the others. That's all, thank you.

So as a sum-up of this, I think what we are trying to do here is more of an immutable infrastructure service. Imagine the diagram is a graph representation of your CloudFormation or Heat template; we've basically got a Hadoop deployment script here. Now imagine you could replace the VM images with Docker images in this script, and you end up with Hypernetes. What's the difference? The new script takes sub-seconds to provision and deploy: you can deploy Hadoop or Spark in milliseconds. And you've got a lighter image to manage. It's not a qcow2 VM image taking gigabytes or tens of gigabytes; you've got layered Docker images of hundreds of megabytes, and the typical Git-like workflow of Docker to manage your images. So it's lighter. And it's immutable: you don't have long-running VM instances suffering from configuration drift, you have immutable instances, and if you want to replace an instance to do CI/CD, it just takes milliseconds to launch a new one. And it requires minimal ops, because it's immutable, and Kubernetes offers you service discovery, replication, failover; all the scaling is really built into Kubernetes. So it's kind of like a bit of both PaaS and infrastructure service.

And the last one is just enough cloud. With something like the current model, you have to pre-create a VM cluster, so it's not just enough; you may need to pay for more than you actually use. But with something like this, you just need to declare how much resource you want for your application, and that's all.
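Declaring "just enough" resources might look like the hypothetical pod fragment below: the tenant states only what the application needs, with no VM pool or instance type chosen up front. Names and numbers are placeholders.

```yaml
# Hypothetical pod spec: declare the resources, nothing else.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        cpu: "1"        # one CPU core
        memory: 512Mi   # half a gigabyte of memory
```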
Let the cloud provide the resources for you. So it's just enough cloud. Overall, we have been discussing whether we should call it a container service or immutable infrastructure as a service. And all of this is open source. We have our website here and our documentation here. And Hypernetes, the new Kubernetes distro, we are actually open-sourcing today. Our plan is to build a public container service based on Hypernetes by the end of this year, in both New York and California. We plan to announce it sometime in December. The GitHub repos are listed here. So that's all of my presentation. Any questions? Go ahead.

I cannot hear you. Oh, yeah. The question was why we chose Kubernetes instead of Mesos or Swarm, right? We are just fans of Kubernetes, I think. Kubernetes provides more facilities to help developers model their applications, compared with Mesos, which is basically a resource scheduler. So it helps developers more than the other frameworks, and I think that's the reason. Besides, it's Google stuff, so yeah, we are fans of that.

Excuse me? Yeah, that's what Kubernetes does. Kubernetes is basically an orchestration engine for your container cluster, right? My point is that we can replace the Linux container runtime with Hyper, using hypervisor technology. You end up with the same runtime and the same behavior, so you can run Kubernetes on top of the Hyper cluster to manage your Hyper VMs. Any more questions? Yeah.

The question is whether you should continue to use your Docker images or also manage your VM images, right? Is that your question? OK, so your question is whether we need to manage only the Docker image or two images. That's the point of Hypernetes: you only need your Docker image. You can build your Docker image on your laptop and push it to the Docker registry, and that's all. You don't need a CentOS or CoreOS VM image here. Completely not.
Because when hyperstart launches, there is only a minimal kernel. There's no VM image, no CentOS or Red Hat or CoreOS; there's only a standard minimal Linux kernel in the Hyper VM. Makes sense? The next question is how many networks Hypernetes supports. Yeah, as a user, as a tenant, you can create as many namespaces as you wish, and each namespace is a Neutron VPC underneath. Right? Thank you.