So, hello everyone and welcome to my talk. Today I would like to talk about how to use OpenStack and Kubernetes with HyperContainer to build a truly multi-tenant but secure container platform to run your NFV workloads. OK, so, I'm Harry; you can find me through this GitHub ID and Twitter. I'm a coder, author, and speaker. I'm a member of Hyper, and also a feature maintainer and project member of Kubernetes, mainly focused on scheduling and SIG Node, and I also maintain the Kubernetes frakti project.
OK, so let's begin. When talking about NFV, the so-called network functions virtualization, I think it actually originated from a kind of crisis of the telecom operators. According to a survey from Gartner, the traditional business of telecom operators has barely grown in the past five years. Compared to that, the non-traditional business has climbed to 8% of the whole revenue, and for some operators it has even reached 15 to 20%. All of this so-called non-traditional business covers about four business models: entertainment and media, M2M, cloud computing, and IT services. So if you go to the marketplace today and check what the telecom operators are doing, you'll notice they are all doing this new business.
But Gartner also found that new problems come with this new business model. One of the most severe is the so-called pain of the telecom network. It is mainly about the fact that the network of a telecom operator is always composed of specific equipment and devices; they have to follow strict protocols, which are quite different from what we are using today, and they require high reliability and high performance, but they also come with high operation cost. This kind of problem results in some bad things, including long deployment times for your new business applications, much more complex operation processes for them, and multiple hardware devices coexisting in your data center because you have to handle all of them. What's worse is a closed ecosystem, because nobody can play easily in this field.
So if you put this problem in one sentence, it is that your new business model requires new network functioning, and that's why we need NFV. The core idea of NFV is that we try to use software to replace the hardware network elements in your data center, and this software should be able to run on any kind of COTS computer, I mean the normal computers, the PCs and servers we use today, and these computers should be able to be hosted in any data center in the world. That also reminds us that the functionality of this kind of NFV should, first, be able to be located anywhere it is most effective or inexpensive, which is what we are doing with normal applications today. And second, it should be able to be speedily combined, deployed, relocated, and upgraded, which is also what we talk about with microservices today. So you can see that NFV is actually a modern kind of software, quite different from the traditional network software of the past. It should be more agile, easy to deploy, and fast to upgrade. That's why we say NFV can bring us benefits like speeding up time to market, which is very important for the non-traditional business.
It's not like the old days, when you could just ship a big machine with system software to a customer; that doesn't work today. It also saves the cost of maintaining the software, and what's more important, it helps encourage innovation. It's not a closed ecosystem anymore, because everything is now software, hosted on GitHub, and that's good.
I will give you a concrete NFV case today. It's very popular, and I think many people know about this project, which is named Project Clearwater. It is actually an open source implementation of the IMS (IP Multimedia Subsystem). In the past, this kind of IMS was always very complicated: it is composed of multiple layers, including a control layer, an application layer, and a media layer, which are built from different kinds of physical equipment. But in NFV, this kind of IMS has to be transformed into pure software, written as code and hosted on GitHub. So the Clearwater project is composed of ten components on GitHub, fully open source. Anyone sitting here can use Clearwater to build an IMS, to make a phone call or a video call, very easily. I will give you a small demo of the Clearwater project later.
So now we have the software. The software in NFV is what we call the VNF, the virtualized network function. Now that we have VNFs, the problem is how we can ship the VNF to a cloud, so we can enjoy the benefits of the massive cloud computing systems we are using today. You can see that VNFs love the cloud, but we need to make a decision here: what kind of cloud am I talking about? Today we have two kinds of cloud runtime, the virtual machine and the container, and I think a lot of people know the difference between them. The virtual machine, which is built by a hypervisor, has its own independent guest kernel with a full operating system. Compared to that, the container only has very thin layers, including just your software binary and libraries, that's all, because it shares the kernel of the host operating system. So that's why we need to make a decision here: the container may look attractive, but there are security problems with containers, as you can see here. So today I will give you a comparative analysis based on six dimensions, including service agility and so on.
OK, so let's begin with the first one. Service agility is about speed. We all know that provisioning a virtual machine takes much longer than a container, but why? It is mostly caused by the hypervisor configuration and the guest OS boot, plus the init system, I mean the process management services and startup scripts; they all need extra time to deal with. Compared to that, provisioning a container is super fast, because it's only about starting a process inside the right namespaces and cgroups. That's all, no other overhead. And that's what you can see in this picture, which is a test result from an Intel paper. It says that the average start time of a KVM virtual machine is about 25 seconds, while a container which contains exactly the same application only costs about 0.38 seconds. It's super fast. OK, that's the agility part, where the container clearly plays much better. So the second part is network performance.
I think people may assume that the virtual machine has poorer performance compared to the container, but today I'd like to talk specifically about network performance, because for VNFs, network performance plays a much more important role. For example, for throughput, according to the same paper from Intel, it's very interesting: the packets per second that a VNF is able to push through the system are stable and similar in all three runtimes, I mean on the host machine, in the container, and in the virtual machine. There's no big difference. This test was done using a direct forwarding test, a layer 2 forwarding test, and a layer 3 forwarding test, and they all show the same result. It is very interesting, and it's the same with network latency: in direct forwarding, for example, there is still no big difference. On the other hand, the virtual machine is a little bit unstable here, which according to the analysis is caused by the extra time spent processing periodic interrupts. The layer 2 and layer 3 forwarding tests show the same results. In these tests, the container even shows some extra latency, which we think is caused by extra kernel code execution that is not needed in the virtual machine. And again, the virtual machine is a little bit unstable here: you can notice in the graph that the maximum latency of the virtual machine is not stable and can reach a very high level. OK, so for network performance, you can see there is not much difference between the virtual machine, the container, and the host machine. It's very interesting.
The third part is the resource footprint, which is actually about the density at which you can deploy your containers. For virtual machines, according to the paper from Intel, KVM requires on average about 125 MB of memory when booted. Compared to that, the container only requires about 17 MB of memory. That's because the amount of code loaded into memory for a container is significantly smaller, since it does not have to run a guest operating system. And that's why we say the container can give you a much higher deployment density: density is limited by the incompressible resources, that is, memory and disk. A container does not even need to provision a disk, so the density here is only limited by the memory footprint. That's why the container plays better here.
OK, the next part is portability. If you want to move a virtual machine from one host to another, the simplest way is to use the virtual machine disk image, which is a provisioned disk containing the full operating system, and the final disk image is usually measured in GB. What's worse is that you also need some extra steps for porting a virtual machine, including hypervisor reconfiguration and, again, the process management services. Compared to that, the container image we're talking about, for example the Docker image or the OCI image specification, is quite lightweight, because it can be very small, just your application binary plus a few MB more. For example, you can just use a BusyBox image or an Alpine image; that's enough. And Docker now even gives you a new feature, the so-called multi-stage build, for you to create a very small deployment image that contains just your application binary and a very, very small base. OK.
So that's why we can see the container image is very small, and it brings us better portability than the virtual machine.
OK, the next part is configurability. I think a lot of people have experienced the configurability of the virtual machine, which I have to say is very complicated. There's actually no obvious method to pass configuration to the application running inside the virtual machine. You may know some alternative methods, like shared folders, port mapping, or passing environment variables, but you can see there's no easy way to do it, no user-friendly tool. That is where containers do very well, especially Docker, rkt, and things like that. They all give us a very user-friendly control tool for this. For example, mounting a volume is very simple, passing environment variables to the application inside your Docker container is very simple, and they provide more and more command-line tools and arguments for us to manage the processes running inside a sandbox. It's quite different from the virtual machine. I think that's also part of why Docker became so successful.
The last but most important part is security and isolation. Well, I have to say the virtual machine wins here, because it brings you hardware-level virtualization with an independent guest kernel. Compared to that, the container is quite vulnerable, because its isolation is only implemented by namespaces and cgroups. It shares the kernel of your host machine, which gives more possibilities to be attacked, like a jailbreak. There are ways to harden containers in these aspects, like capabilities, seccomp, and SELinux, but if you do it by yourself, you will find that it is very hard to make these decisions. For example, which capabilities do you think are needed or unneeded for a specific user container? It's very tricky to decide, because you don't know what the user is running inside; you don't know the user's requirements. That's why I say, and I believe, that no cloud provider today allows users to run containers without wrapping them inside a full-blown virtual machine. That's the truth. Although I know there is a lot of hardening you can do, I think nobody fully trusts it. OK, so that's the situation today.
And that's also another problem: we just want to stay cloud native. I'm a member of the CNCF community, so I know it's very important to stay cloud native. We just don't want to tolerate the slow startup time of virtual machines. But at the same time, especially for NFV deployments, we really care about security, because it's always a multi-tenant environment, and we always worry about the noisy neighbors around us. So it's really tricky to make this decision.
And that's why we have Hyper. The basic idea of Hyper is to stay secure while staying cloud native, because we are trying to make the virtual machine more like a container, instead of the other way around. The idea of Hyper actually originated from the container we use today; I will use Docker as the example here. It is composed of two parts. The first part is the container runtime, which is the dynamic view of your boundary and your running processes. In this example, it's a hello-world program, and it is running inside namespaces and cgroups.
And the second part is the container image, which is the static view of your program: your data, dependencies, files, and directories. So in a Docker container you will have a read-write layer, an init layer, and a group of read-only layers, which are union-mounted together to form a unified view of your file system. That's the Docker image we're talking about. So this picture makes us wonder: what happens if we replace the container runtime part here with a hypervisor, while still keeping the Docker image? Yeah, that's a better idea, I think.
So that's the HyperContainer project, which is fully open source and composed of three components. The first component is runV; it is an OCI-compatible, hypervisor-based runtime implementation. There is also a control daemon, hyperd, and an init service called hyperstart. So you can see why I say Hyper is a better implementation of the pod: first it boots a lightweight virtual machine as the pod, and then it starts your user containers inside that pod. And we don't need to install a Docker daemon inside, because hyperstart is responsible for managing everything. It uses the standard Docker image, and we also support the OCI image specification.
So you can see some nice traits of HyperContainer here, like service agility: it can be started within 500 milliseconds. It has good network performance, and a very small resource footprint, because we are not using a full operating system inside; we use a minimal, customized guest kernel, which is very, very small. It has the same portability as Docker, and it has exactly the same configurability, because we are using the same kind of API and command line as Docker. And the most important thing is that HyperContainer gives you the ability of hardware virtualization and an independent kernel. That's very important, just like a virtual machine.
I'd like to give you a very quick demo to prove that I'm not lying here. OK, so this is a physical machine hosted on Packet.net. It's nothing special, a very normal machine. We can use hyperctl to see the status of the containers and hypercontainers, which are actually lightweight virtual machines. And we can just run a hypercontainer like this; it's very like Docker, and its API is very user-friendly. OK, so it took about 600 milliseconds. That's very fast for a virtual machine, please remember that. We can also execute a command inside this pod, actually a lightweight virtual machine, just like with Docker. That's why I say the configurability is very good for HyperContainer. OK, we are inside. We can see the file system, and we can see the kernel version here. It's very different from the host machine, right? It's the hyper kernel, a customized, special kernel we are using. We can also check the top command. You can see it's totally different from the host machine, because it is fully isolated; it has its own proc file system. So that's HyperContainer.
And we can also do something, I think, dangerous here, like a fork bomb. You can run a fork bomb inside this hypercontainer, and your host machine will still keep working. I will give you the example. But please note, you should not do this in any kind of Linux container, like Docker or rkt, because it will damage your host machine. So let's run a fork bomb inside this container, OK? So now the container is dead, because the fork bomb has eaten all the resources of the container.
But if you did this in Docker, it would actually eat all the resources of your host machine. Here, on the other hand, the host machine is still alive. We can also check the status of this dead container; it is still here, and we can just use our command line to delete it. OK, so that's all. That's HyperContainer.
OK, let's move to the next slide. You can see here the benefits of using HyperContainer to run your applications. The most important thing is that it has independent kernel features; that's very important, because that's why you can run your legacy applications inside. The startup time is very fast, on average about 500 to 600 milliseconds. It's portable, like Docker. It has a small memory footprint, because it uses a very small, minimal guest kernel. It has very good configurability and good network performance, and it gives you backward compatibility. And what's more important, it gives you the ability to bring mature, virtual-machine-level security and isolation to your system, OK?
So the next part is Hypernetes, which answers the question: how can we run your NFV workloads in HyperContainer with Kubernetes? That's the Hypernetes project, which is actually an upstream Kubernetes version with HyperContainer as its runtime. Please note that Hyper has been an official container runtime since Kubernetes 1.6; it is integrated through the frakti project. And on the other hand, we are using OpenStack to provide multi-tenant networking and persistent volumes. We are using standalone Keystone, Neutron, and Cinder here. We don't need to install any kind of full OpenStack; we don't need to do that, because it's too complicated. Just the standalone components are enough, OK?
So let's begin with the container runtime. It's very simple. When talking about the container runtime, we need to talk about the pod, which is the most important concept in Kubernetes. Actually, the pod is trying to help you get rid of some bad habits. For example, some people like to use supervisord to manage multiple processes inside one container, but we all know that the container is designed to run one application per container; that's the best model. So please use a pod in this case. Some people also try to ensure container start order by using hacky scripts, which is also wrong; you need to use a pod and init containers to do that, and I will give you an example of that later. Some people also try to copy files from one container to another to share those files, which is also wrong; please use a pod to solve that problem. And some people try to connect to a peer container through the full network stack of the system, which is also wrong: any containers that need to frequently communicate with each other over localhost can just be put inside one pod. Because a pod is a group of super-affinity containers. Containers inside the same pod can communicate directly over localhost, because they share the same network namespace, and they also share the same volumes, so you don't need to copy a file from one to another; they just share the same volume. No need to do that. A pod is just like a process group in your container cloud. And that's how HyperContainer matches the Kubernetes philosophy.
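To make the pod concept concrete, here is a minimal sketch of such a pod; the names, images, and commands are illustrative, not taken from the talk. The two containers share one network namespace, so one can reach the other over localhost, and they share files through a common volume instead of copying them between containers.

```yaml
# A hypothetical pod "foo" with two super-affinity containers:
# they share one network namespace (localhost) and one volume.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: shared-data            # shared volume, so no copying files between containers
    emptyDir: {}
  containers:
  - name: web                    # container A: serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar                # container B: writes content and talks to A over localhost
    image: busybox
    command:
    - sh
    - -c
    - |
      echo "hello from the sidecar" > /data/index.html
      while true; do wget -q -O- http://localhost/; sleep 10; done
    volumeMounts:
    - name: shared-data
      mountPath: /data
```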
For example, if you want to create a pod named foo with two containers inside, then according to the CRI workflow, the runtime will first run a pod sandbox named foo, then create and start container A, and then create and start container B. If you're using the Docker runtime, it will create three containers for you, because there will be an extra container named foo, the infra container, which holds the network namespace for you; so you end up with three containers on the node. But if you are using Hyper, it will create a lightweight virtual machine named foo as the pod, and then create and start the user containers inside it. So you can see you have a real, isolated pod running on your machine. Yeah, that's how Hyper matches the Kubernetes philosophy.
OK, the next part is the multi-tenant network. The goal of this part of the work is to leverage a tenant-aware Neutron network in Kubernetes without breaking its model. And the Kubernetes network model is very easy to understand. First of all, all pods in Kubernetes should be able to reach each other directly without NAT. Second, every node should also be able to reach every pod directly, again without NAT. And all of this communication should be based on the IPs the pods see themselves. OK, that's the Kubernetes network model.
In order to implement this, we define a Network object, which is a top-level resource object in Kubernetes, and it maps to Kubernetes namespaces one-to-N. The key point is that each tenant created through Keystone will have its own network, and there is also a standalone network controller to manage the lifecycle of the Network objects. So the Network here maps to a Neutron network. OK. As long as we have a network, the next thing is how to assign a pod to a network. Pods belonging to the same network can reach each other directly by IP, and this is actually done by kubelet. In the kubelet workflow, when you start a pod, kubelet calls the SetUpPod method of the network plugin, which sends a gRPC request to a small daemon named kubestack. Kubestack is responsible for translating that request into the standard Neutron API we are using, and the responses from those API calls are used to configure the pod. So it's quite simple, and that's why we need kubestack.
But it is also responsible for handling the multi-tenant service proxy. The reason I say so is that the default iptables-based kube-proxy is not tenant-aware: for example, your pods and nodes can be isolated into different networks in a multi-tenant environment. That's why in Hypernetes we have a built-in IPVS as the service load balancer in the pod, and it handles all the service updates in the same namespace, so network isolation is not a problem here. We also follow the standard OnServiceUpdate and OnEndpointsUpdate workflow of kube-proxy, to make sure that all the services and endpoints still stay up to date. We also support external load balancers: for example, when you create a service of the LoadBalancer type, the network provider will call the Neutron API to create an OpenStack load balancer for you to use. So this is the network part.
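Before moving on, here is an illustration of what the tenant-aware Network object described above, and its binding to a namespace, might look like. The field names and values are from my reading of the Hypernetes documentation and should be treated as assumptions; check the Hypernetes repo for the real schema.

```yaml
# Hypothetical Network object: one Neutron-backed network per Keystone tenant.
apiVersion: v1
kind: Network
metadata:
  name: net1
spec:
  tenantID: 065f210a2ca9442aad898ab129426350   # Keystone tenant UUID (example value)
  subnets:
    subnet1:
      cidr: 192.168.0.0/24
      gateway: 192.168.0.1
---
# A namespace is then bound to that network, so every pod in the
# namespace gets a port on the tenant's Neutron network.
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
spec:
  network: net1
```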
The last part is persistent volumes, which are handled a little differently from what we are using today. In Kubernetes, if you create a volume, there are two steps: the first step is to attach the volume from your volume provider to the host machine, and then it is bind-mounted from that host path into your container. That's all. That's the Kubernetes volume model. But in Hyper, we don't need to do that, because the pod here is actually a virtual machine based on a hypervisor, so you can just directly attach a block device to the pod. No need to attach to the host first and then bind-mount. We support that too, of course, but you don't need it, because attaching a block device directly gives you better performance. That's also why we don't need to install a full version of OpenStack: we don't need Nova, and we don't need to find which node to attach the volume to. We just mount the block device directly into the pod. That's all.
OK, so if you create a persistent volume in Hypernetes, it's exactly the same as what you do with a normal volume. Just remember to create a Cinder volume beforehand and note its ID, and then claim that Cinder volume in your pod spec. That's all. Then our enhanced Cinder volume plugin is responsible for all the work to mount the block device into your pod.
OK, so this is the full topology of the Hypernetes project. It's quite simple: it is just a standard Kubernetes cluster running with HyperContainer as the runtime. You also need to install standalone Keystone, Neutron, and Cinder, with Ceph RBD for example. You also have the Neutron L2 agent running on every node, and kubestack running on every node. And on the master machine you will have the extra network controller. That's all. The next goal of the Hypernetes project is to make things more modular. For example, we will use a ThirdPartyResource to manage the Network object we introduced here, we will turn kubestack into a standard CNI plugin, and we will remove the so-called enhanced Cinder plugin, because we can just write a special plugin for block devices that is specific to Hyper; in other cases, you can use any existing volume plugin supported in Kubernetes. OK, so if you're interested in this project, please keep an eye on the Hypernetes repo. We have a roadmap there, and we are updating the code.
So let's get back to the Clearwater demo. As I said, Clearwater is composed of ten components, which is quite complicated, as you can see here. What's more important is that every component in Clearwater communicates with the other components by DNS name. That's why we need to create a headless service for every component in Clearwater, to make sure each of them gets its own stable DNS name.
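A headless service in Kubernetes is simply a Service with clusterIP set to None, which gives the component a DNS name that resolves directly to its pod IPs. A hedged sketch for one Clearwater component might look like this; the labels and port number are illustrative, not copied from the actual clearwater-docker manifests.

```yaml
# Hypothetical headless service for the "sprout" component:
# clusterIP: None means the DNS name resolves straight to the pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: sprout
spec:
  clusterIP: None
  selector:
    service: sprout        # matches the pods of the sprout deployment
  ports:
  - name: sip
    port: 5052             # illustrative port number
```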
Although it looks a little bit complicated, deploying it to Hypernetes is super easy, just one command. I'll show you how it works; it's all based on the Hypernetes project and the Clearwater Docker deployment. So I have a small demo to walk through it. OK, so just imagine you have already installed a Kubernetes cluster with standalone Keystone and Neutron; you can check the scripts in the Hypernetes project for how to do that, I won't demo it here. As long as you have your Hypernetes cluster running, you can just clone the Clearwater project and run kubectl create for the deployments. That's all. After that, all the services and deployments are created, and you can check the pod status.
You can see all the pods are being created. For example, if we define two instances for one component's deployment, you will have two pods of that component created here; it's very easy to understand why. But there's still one thing I need to mention: the liveness probe. We define a liveness probe and a readiness probe in every deployment in Clearwater, because Clearwater uses supervisord to manage its processes. So only when the expected ports we define here become available will the pod become running. We need to make sure of that, so we need to wait for a few minutes. OK, so now let's check the status of the pods to see if they're running. OK, it's ten minutes later, and they have become running. That also means all the applications inside the containers are available. So we can, for example, check the status of a homestead pod to see if it really works. Let's execute a command inside this pod, inside this container; it's very easy, just like in Docker. OK, so we are inside the container, and we can use netstat to check whether the expected ports are listening. OK, it works. We can also verify that this is running inside a lightweight virtual machine: for example, let's check the kernel version. Yeah, it's a hypercontainer. And we can also check the output of the top command, as I showed you before. Yeah, it works. So that means this NFV deployment is fully running inside small, lightweight virtual machines built by HyperContainer.
So let's go back to the Clearwater project. Now we can try to consume our Clearwater deployment by using its public IP; please note that we need to assign a public IP to the Clearwater service. OK, it works. Now we sign up a user with your favorite username and password. OK, we'll skip this part. When we log into Clearwater, it creates an account for us. It's a SIP account; you can use it to make a phone call. So now we need to configure this account in a client. We are using X-Lite as the demo client here; you can try other ones. OK, let's configure the username and password inside the X-Lite software; it's very simple. And we need to wait a few seconds for the account to become available. OK, it's available now. So that means the Clearwater deployment is functioning well, and we can use it to make phone calls.
In order to do that, we need to create another account, so we can establish a call between the two accounts. That's why we need another machine. On this machine we will again visit the public portal of the Clearwater project, and we can see the existing account is here, right? Then we create another account by clicking this button. OK, so now we have two accounts, and we configure this second account in the client on this machine; it's also the X-Lite software. OK, done. Let's wait for a few seconds. Come on, guys. Bad network. OK, our new account has become available. Now we have two accounts running on two machines, right? So let's try to make a phone call, actually a video call, between these two accounts. OK, let's wait a few seconds for the connection to be established; this actually depends on your network environment, and I did this demo in China. OK, so now we have received a call from the other machine, from the other peer. Let's answer it with video. Let's wait for the video to be established. Come on, guys.
Yeah, so the video is established. That's us using the Clearwater project as an IMS, establishing a video call between two machines by consuming our Clearwater deployment running on Kubernetes, on the Hypernetes project actually. It's fully open source; you can try it out if you like.
OK, so that's the main content of my demo. During the process of deploying Clearwater to Hypernetes, we also found some things worth learning from this workflow. The most important one is that we should never use things like supervisord or systemd to manage multiple applications inside one container. It's wrong; don't do that. You can use a pod to do that, and I'll give you a simple example here. In the Clearwater project, for example, a pod has some auxiliary containers that should become running before all the user containers. You can just define them as so-called init containers inside the pod, as the example here shows, for example the Clearwater infrastructure container and the Clearwater SNMP container. You define them here, and they will be executed one by one, in exactly the right order, before all the user containers defined below. So that's how we use init containers inside a pod, and that's how we use the pod to handle a complicated deployment. That's the core concept of microservices. Microservices is not about putting everything together into one container; that's wrong. You put each thing into its own container, and you combine them into the same pod if necessary. That's microservices.
OK, the next thing we learned is: do not abuse DNS names. The Clearwater project actually used some very weird DNS names, which are illegal in Kubernetes. So if you are using DNS names, please remember to follow the standard DNS naming rules. This one here was wrong, for example, and luckily it has been fixed in Clearwater upstream.
And the next thing I need to mention is that if you are using Kubernetes, it is very important to remember that the liveness and readiness checks are really useful. Clearwater is one of the best examples, because it uses supervisord to manage all the application processes inside a container. But supervisord itself is always running, so you have no way to figure out whether your application is actually running or not. That's the problem, and the same thing happens with Java applications: Tomcat may always be running, but your application may be dead. That's why we need the liveness probe. In this example, I defined a very simple check: it periodically checks whether the expected ports are available, and only when the ports can be connected over TCP does the pod become running, become available. So that's the key point of the liveness check in Kubernetes. And it's the same with the readiness probe: it decides whether this pod can be used as a service backend or not. So that's the key point of using these checks in Kubernetes.
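Putting these lessons together, here is a hedged sketch of what such a pod could look like, with init containers that run in order before the main container and TCP-based liveness and readiness probes. The container names, images, and port are illustrative, not the exact Clearwater manifests.

```yaml
# Hypothetical pod for one Clearwater-style component.
apiVersion: v1
kind: Pod
metadata:
  name: homestead
spec:
  initContainers:                 # run one by one, in order, before the user containers
  - name: clearwater-infrastructure
    image: example/clearwater-infrastructure   # illustrative image name
    command: ["sh", "-c", "echo configuring shared infrastructure"]
  - name: clearwater-snmp
    image: example/clearwater-snmp              # illustrative image name
    command: ["sh", "-c", "echo preparing snmp setup"]
  containers:
  - name: homestead
    image: example/homestead                    # illustrative image name
    ports:
    - containerPort: 8888                       # illustrative port
    livenessProbe:                # restart the container if the port stops answering
      tcpSocket:
        port: 8888
      initialDelaySeconds: 60
      periodSeconds: 10
    readinessProbe:               # only send service traffic once the port accepts connections
      tcpSocket:
        port: 8888
      periodSeconds: 5
```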
OK, so this is the end of my talk: we talked about Kubernetes, about how to use HyperContainer as the runtime in that project, and about how to install Keystone, Neutron, and Cinder in standalone mode to provide multi-tenant and secure functions in the Hypernetes project. And I also have good news to share: we have proposed the Stackube project, which is a new OpenStack Foundation project fully originated from the Hypernetes project I'm talking about today. So this is the end of my talk. Thank you very much. We still have time for some Q&A. Does anyone have a question?
Hello, I'm Fuqiao from China Mobile. You know that when we are deploying multiple VNFs, a problem that actually stops us from using containers is that the container is closely bound to the host OS. When we are really deploying these VNFs, we use a common host OS, and we will have multiple VNFs coming from different vendors. Since the container is closely bound to the host OS, all the different vendors have to build their VNFs on the same host OS, and if any one of them wants to upgrade their VNF, they have to negotiate with the other VNFs that are running on the same host. That is actually what we consider when we talk about containers. So I'm wondering whether these fancy things you proposed here could solve this problem.
Yeah, I think so. That is what I was talking about with legacy applications, actually: if you are using HyperContainer, it has an independent guest kernel, just like a normal virtual machine. In this example we are using our default kernel, named the hyper kernel, but you can even bring your own kernel, a full kernel; that also works. So it depends on you: if you find that your application can only run on, for example, an old version of the kernel, you can just use that kernel in HyperContainer. It's totally free; HyperContainer supports that. I think that solves your problem.
Okay, sure. I will definitely follow this project and see. Yeah, thank you. Thank you.
So I was just wondering, how does HyperContainer compare with unikernels? Because there you also try to have a minimal kernel, and you try to package your application along with that minimalistic kernel and make it work in an isolated environment. So how does it compare with unikernels?
Yeah, that's a good question. Actually, unikernels and HyperContainer are two different things, because the core idea of unikernels is how to package your application together with the operating system. In order to run a unikernel, you still need a hypervisor underneath; that's the problem here. For HyperContainer, we actually did a lot of work to optimize the hypervisor part; that's why it can be started very fast and why it has a very small memory footprint. This part of our work cannot be done in unikernels. So that's the first thing. The second thing is, please remember that the biggest obstacle for people using unikernels is that you need to learn how to package a unikernel. It's not easy; it's not like a Docker image. HyperContainer, however, is much more like a container, using Docker images, for example. That's also why I think the unikernel guys proposed the LinuxKit project. LinuxKit is something similar to a unikernel, but it actually uses a normal way to build a customized kernel and OS image, so I think it has a much better user experience than unikernels.
Okay, so thank you. So the follow-up is: in that case, can I use LinuxKit instead? Yes. Or how does HyperContainer compare with, let's say, a runtime or application packaged with LinuxKit?
Yeah, I think you have mentioned exactly what we are doing today: we are actually integrating LinuxKit as the way to build the minimal kernel inside HyperContainer. So you will have HyperContainer running, but you have the ability to build your own kernel by using the LinuxKit toolkit. That's what we are doing today.
And then you'll have support for Moby also?
Yeah, of course. That's what we are doing.
Yeah, then it makes it portable. Okay, thank you.
So I think it's time. Thank you very much for your interest in our project. Thank you.