Great work. So first of all, who am I: I'm Gerard, I work for Red Hat. I have been living in China for many years, that's why I speak a little bit of Chinese. I'm a principal software engineer in the unit called Developer Tools. Before Red Hat, I worked on large-scale OpenStack deployments, and I developed a lot of code for agile projects. That probably also explains a little bit why I care about what I'm doing here. Currently I'm working on container technology: everything targeting Docker, Kubernetes, Podman, OpenShift, and specifically developer tools. The tools that I develop all utilize virtualization in some way to simplify a problem that we're having; that can be libvirt/KVM, Hyper-V, HyperKit, and something else we are doing, machine drivers. Part of my work is that I worked on Minikube. Probably people have used that before; it's almost essential if you want to learn what Kubernetes is. The project I worked on after that is called Minishift, and now I'm working on CodeReady Containers.

So, Developer Tools. Developer Tools is, within Red Hat, a division that focuses on tooling for developing container-based applications. Container-based applications are a completely different way of looking at your application, and for a lot of people the question was how to start. Within Red Hat they saw that there were already a lot of tools in the market (Mesosphere, for instance, and Kubernetes was not even a thing yet), and they thought, well, maybe we need something. They called it ADB, the Atomic Developer Bundle. It provided different tools to get you going on developing container-based applications. Over time that developed, just as Kubernetes became a thing, into: well, we should have something that specifically targets how to develop applications for that. And that became Minishift.

Minishift started out as a prototype tool from, if I'm not mistaken, Jimmi Dyson, who worked a lot on Kubernetes, and for Red Hat specifically on OpenShift. He took the source code from Minikube, forked it, and changed some things, making it work for our purposes, which was good: it deployed a local OpenShift 3 cluster, all in one, using a method called oc cluster up. Everybody was happy because it worked. But that tool, being a prototype, was not perfect. So we took up the work, trying to make it stable and to get in the things we wanted to see. Because people eventually use this tool as a means to learn about Kubernetes, to learn what OpenShift is, and, very importantly, to develop and deploy applications in a reliable way on a cluster they have available at that moment. And especially here: it is not easy to get Kubernetes up, and it's especially not easy to get OpenShift up and running. If people tell you, oh, you should run it on AWS, that's not an answer you want to hear. That's certainly not something most developers can do, let alone that everyone has access to AWS.

So that's why we worked on this, and we noticed a lot. Most of what we learned comes down to these lessons learned. And I'm pretty sure there's someone sitting here from Minikube, our colleagues in a way. What we noticed is that networking always complicates deployment. If you understand how Minikube works, or how Docker for Windows or for Mac works: it's a virtual machine.
And a virtual machine is not your native machine. There's networking involved, a virtual networking layer. So as soon as a company deployed this kind of tool, we ran into issues like: oh, but I need to use a VPN to access my company resources. Well, that should not be an issue, right? But my VPN tool routes everything, so all traffic will go through the VPN. That's painful, because it means that access to that VM will all of a sudden also go over the VPN, and at that moment it is not local anymore. So we ran into a lot of issues here.

The other thing we noticed is that there were firewalls and proxies involved, especially if you're doing a Windows deployment. Tools like your virus scanner, McAfee, Symantec Endpoint Protection, whatever tool it may be, might inspect your packets and decide: hey, this is a network segment we don't know, and block the access. Or proxies, and especially here those would be company proxies: our resources need to be accessed from a different place, or via a proxy, to actually get to them. So how do you get that to work properly inside a VM? On the local machine it works, but why doesn't it work in Minikube or Minishift? And especially when images were pulled from the internet. We all know that most of the Kubernetes images, and also those for OpenShift, live on a registry, and these registries are maybe operated by Google, on Google Cloud Storage or the Google Container Registry. Especially if you're here in China, most of my deployments would fail. And if I explained to my colleagues why it failed, they would look at me and shrug: but it works for us. Yeah, but I think some of these images are hosted in a location that is not possible for me to pull from, right? So how do you deal with proxies in that case, or even registry mirrors? A lot of what we noticed is that things fail mostly around networking. We worked a lot with the Minikube team trying to resolve many of these issues, but some of them are really complicated; they were not easy to deal with.

Besides that, we were also working on libmachine. Do any of you know what libmachine is? Okay, good, one or two fingers go up. It's a library that the people at Docker made specifically for setting up Docker runtimes on platforms in a self-contained way. Even on Linux, for instance, you would spin up a very simple VM, and within that VM there would be a Docker container runtime environment. That's great, because it means you can replicate it on other platforms: you're using a VM on Windows and a VM on macOS, and it would all be similar in a way. However, this Docker library depends on the resources Docker puts into it at that moment and on the community effort, and Docker decided at some point: well, this is not a library we will be using anymore. But we and Minikube were relying on it. So what to do then? We could try to support it ourselves, and that's kind of what we did: we forked off several of these drivers and created machine drivers. But now it's in our maintenance, together with Minikube, trying to get that stability, and it's hard because we are a small team. We're looking into what we can do there. The other problem we still have is that we are forked from Minikube. So we're not Minikube, and we have talked many times with several of their team members about what we can do there.
We might be able to integrate post-start and pre-start hooks, so we could actually get something in there to run our own commands for deploying our tooling. A lot of these ideas did formalize, but not on the schedule we were developing against. One that was of course integrated is kubeadm. For our case we would have liked to have had that earlier, but yeah, unfortunately. So what we did instead is fix a lot of these issues ourselves: SSH issues when trying to get access to the machine, stability issues with networking, and even in libmachine we had many issues revolving around localization and internationalization. But we noticed that, unfortunately for us, it's very hard to deal with a forked project. It would have been better if we had done that differently from the beginning, but when we started on the tool it was a prototype, it was not our project. The worst part of it was that it used a non-standard installation method, so it was not like OpenShift in production. And that is a very bad thing, because you expect to be able to run your application on this environment: it worked on your local desktop, you push it to the cluster, and then all of a sudden it fails, which is usually the other way around, "it works on my machine". So this was an issue. How were we going to solve that? Unfortunately this was also not something they were able to solve for us, because hey, you want a single-node cluster, right?

So now I'm going to talk a little bit about what OpenShift 4, CodeReady Containers, and the Cluster API are. OpenShift 4 is a new version, yay. Just like the previous version of OpenShift it provides everything you need to go from your source to a running application, what we call Source-to-Image. But it provides a lot more with that. It becomes a trusted enterprise Kubernetes solution: highly available (which is actually something we don't need), installer-provisioned infrastructure, auto-updates, operators, and it is deployed on Red Hat CoreOS. But yeah, we want to provide a developer tool that you can run on your local machine.

And they even came with a new installer. This new installer is also a product of the lessons they learned with the installers they had before: there were Ansible scripts and various other installation methods, and it didn't work for us. So they looked at that and decided: we'll have a new installer, and the new installer will work much like what we will do with, for instance, the Cluster API. It targets, for example, the cloud providers. When the first release came out, they targeted AWS specifically, but over time it will support Azure and will also support bringing your own hardware. What it does is maintain the nodes in the cluster; this is what we call installer-provisioned infrastructure. With that it creates nodes that run Red Hat CoreOS. To do that, it creates minimal bootstrap and master nodes to form the control plane, all much as the Cluster API would expect. After that, worker nodes are created using the Cluster API.

So what does that look like? This is, for instance, your cloud provider, see it as your AWS or something. It will create an initial bootstrap VM. This bootstrap VM encompasses, kind of, an initial etcd; it's an initial start for your cluster. It's an ephemeral VM.
That means this VM will not stay around; after the cluster has been created, it is not needed anymore. It starts up and begins the installation according to what you've provided in the configuration files. It uses Ignition for that. At that same moment, with Terraform, it has created several masters to form your highly available environment. The master nodes then wait around until the bootstrap VM is completely ready and serves up the configuration that is needed to deploy them. So you never have one special master node that is started first and has all the resources to create the other ones. No, all the masters are provisioned, started, and created at the same time with the information from the bootstrap VM. After that is done, you get rid of the bootstrap VM.

After that, the Cluster API comes into play. The Cluster API is a way to deal with the creation of nodes in a very general way. For instance, if you want to create nodes on a platform like Amazon, you don't want to have to know the Amazon API specifically and talk to it from your cluster to create those nodes. So for day one there's the installation of the cluster, and for day-two operations it deals with management: how to scale out, how to maintain the nodes. This is what it looks like then. After the masters are created, the cluster is ready in a way, but you can't run a workload: the masters, the control plane, are marked as tainted, "I will not run your workload". So at that moment you just instruct it: I want worker nodes now. And these work like any other resource, kind of like a ReplicaSet or a Deployment. I tell it I want several of these, and the workers are created, following the same procedure in a way: with Ignition it makes a request, hey, I want something that looks like a worker Ignition file, it talks to the control plane, retrieves that definition, and those nodes are created.

Great, but this mostly targets cloud providers, so for us that wouldn't work. How do you develop this in a developer environment that's local? How are you going to test that something for the Cluster API actually works as expected? One of the things we did in our teams is work on the libvirt Cluster API provider, specifically a provider for the Cluster API that deals with local libvirt environments. In this repository you can find more information about that; I will show a short video demo of it later. For now, in our situation, it still creates the bootstrap using Terraform. It looks like this: there is an operating system with a virtual machine monitor, in this case libvirt; a bootstrap gets created, and then the same masters. Exactly the same situation, so for the libvirt provider there is no difference in how we deploy. And then again, we instruct the Cluster API to create the worker nodes.

But that's not what we want. If you want a development tool locally, we don't want to create a highly available environment. We want a tool that provides a local OpenShift 4 cluster. Luckily, we could have a fresh start: we have a new tool, and the opportunity to do it right. But hey, it needs to be familiar. It needs to be simple to use. It needs to do start, stop, delete, and setup, and I emphasize setup here. So we wanted to stay as close as possible to what Minikube did.
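As a rough illustration of that start/stop/delete/setup surface, here is a minimal Cobra-style command skeleton in Go. This is only a hedged sketch, not the actual crc source; the command bodies are placeholders and the descriptions are assumptions.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	// Hypothetical skeleton mirroring the setup/start/stop/delete lifecycle
	// described in the talk; the real crc binary is structured differently.
	root := &cobra.Command{
		Use:   "crc",
		Short: "Run a local single-node OpenShift 4 cluster",
	}

	for _, name := range []string{"setup", "start", "stop", "delete"} {
		name := name // capture the loop variable for the closure below
		root.AddCommand(&cobra.Command{
			Use:   name,
			Short: fmt.Sprintf("%s the local cluster environment", name),
			RunE: func(cmd *cobra.Command, args []string) error {
				// Placeholder: setup would prepare the host, start would
				// unpack and boot the bundled VM, and so on.
				fmt.Printf("running %q\n", name)
				return nil
			},
		})
	}

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```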
That command set is what we have done before; it's the same as we did with Minishift. So we want to do that again, because it's what people expect. Although we do introduce a new command, setup, we want to take away the burden of getting it up and running easily. But then it needs to be configured as a single-node cluster, so no high availability. It needs to target Linux, Windows, and macOS. And it needs to be optimized for use with hypervisors. This is actually where the problem starts for us: how do we do that?

So let me see. We created a single-node cluster, according to this repo, where we prefer to deploy master and worker on a developer machine in a single VM instance. That means we needed to disable certain things; we removed the taint from the node so it will run workloads. And what we now do at the end is actually create a VM image. Very simple: we bundle it up. That took away the packaging problem we had before. Because previously, with Minishift, when we did an installation, people would start it, it would pull images from the network, we would touch remote registries, and those might not work because there was no network connectivity or it was restricted. Now you download a whole VM bundle, and when you do start, it unpacks it, starts the VM, and everything is there. The only problems you can still encounter come after the cluster is up; if the network is unavailable at that point, that's a different situation. At least you have your cluster, yeah?

People said, but hey, then I have a download of 1.9G or more. Well, if you do a stop, start, delete, start, delete, stop, in whatever order: as soon as you deleted something before, with Minishift or Minikube, that whole process of re-pulling happens again, and you're pulling in roughly that same amount of data anyway. So wouldn't it be better in that case to use a VM bundle? At least what we get is a reliable way; for our situation this works perfectly. If we want to do nightly releases, for instance of OpenShift or even Kubernetes, it's slightly more difficult, because you're not going to create these images for every nightly. Although we're now considering maybe actually doing that, yeah?

But then there's another thing: people still want to be able to scale out. They ask us: now I have that single-node VM, the single-node cluster, is it possible to actually use the Cluster API? Yes, it is. We are still developing it at the moment, it's kind of a work in progress; some changes are needed because you are missing some images on your local machine. But it is actually possible to talk to that same environment and say: scale out. I have a single-node cluster, but now I want several workers. Is that possible? Yeah, sure: change the replicas and it will create new workers for you, because at that moment it is able to talk to your local libvirt daemon. That looks like this. At the moment there is a little handwork you need to do: you need to enable libvirt to listen, and you need to be able to talk to that endpoint. It has to be very specific, because it's currently hard-coded in the libvirt provider, but that's what we always deploy anyway. So this should work: we change the replicas from zero to what you actually want, and it should be able to spin up your worker nodes. Actually it says zero here on the slide, but it should be two.
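To make that "change the replicas" step concrete, here is a minimal sketch using client-go's dynamic client to patch a worker MachineSet on the local cluster. It assumes the OpenShift 4 machine-api conventions (the machine.openshift.io/v1beta1 group and the openshift-machine-api namespace); the kubeconfig path and the MachineSet name "crc-worker" are hypothetical and not taken from the talk.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path to the kubeconfig of the local single-node cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// MachineSets live under the machine.openshift.io API group in OpenShift 4.
	gvr := schema.GroupVersionResource{
		Group:    "machine.openshift.io",
		Version:  "v1beta1",
		Resource: "machinesets",
	}

	// Scale the (hypothetical) worker MachineSet from 0 to 2 replicas,
	// asking the libvirt provider to create two worker VMs.
	patch := []byte(`{"spec":{"replicas":2}}`)
	ms, err := client.Resource(gvr).
		Namespace("openshift-machine-api").
		Patch(context.TODO(), "crc-worker", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched", ms.GetName())
}
```

This is just the programmatic equivalent of editing the MachineSet and setting spec.replicas, which is what the slide shows.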
So now, the current state of CodeReady Containers. We are able to deploy on libvirt/KVM as the original target, and we've had a lot of good feedback about that. We are still working on the HyperKit solution; we bumped into some issues, but they were related to driver issues, we have resolved them now, and they will be proposed upstream. And we're now looking at Hyper-V; I'm just beginning with that, and I'm actually working on Windows for this at the moment. We expect to have Hyper-V support, plus VirtualBox, fairly soon.

I see that time is quite short. The future: as I said, we want to work on Hyper-V, but this needs specific support. If you have a cluster, you need to be able to use DNS, and OpenShift specifically relies on intra-cluster communication based on domain names, so we need to provide a DNS solution. We've been testing that now for VirtualBox and it works perfectly, so we'll see whether it will also work for Hyper-V. Our next step after that: we really anticipate releasing an upstream project for this. Minishift was always upstream-first, but unfortunately, because of how OpenShift 4 got released, this time we have to do it the other way around. We're now looking at whether Fedora CoreOS will have an image for that, or CentOS, to provide the basis. Other things we're also looking into: besides that, we could integrate tools like Podman and Buildah. We have worked before with the people who did the boot2podman project, and we're trying to see if we can help harmonize some of those images and that tooling. We're also looking at the machine drivers again; especially for Minikube this will be very helpful. We've solved a lot of networking-related issues, like static IP addressing, and we'd like to see how we can help get that up and running for them. And hopefully more cooperation with them in the future.

So, how about the rest? I've talked about possible multi-node clusters on libvirt, which is similar to getting it onto a cloud provider, because there is a Cluster API provider for that. But how about Hyper-V? At the moment there is no Cluster API provider targeting Hyper-V specifically. There is work ongoing now for Azure, and we hope to leverage a lot of the same things, but the way communication is handled is slightly different. So we're looking at whether it's possible to have an intermediate on your local machine that understands the WinRM management protocol to actually start local machines on Hyper-V. This is very preliminary; we haven't made any decisions. We have kind of the same problem for HyperKit: HyperKit itself has no management tooling whatsoever, so there are no tools to easily deal with its lifecycle. Is that something we might have to create if we really want a local Cluster API provider for it? These are questions we have, and we would seriously like to hear people's thoughts about them.

So I will stop here. I would like to thank you. That's my WeChat if you want to sync up and talk to me. And if you want to send me an email, particularly about Minishift or CodeReady Containers, please send it to my Red Hat address.
But if you have questions about the community in general, because a lot of people know me from there, you can also use my personal address. Okay, so I want to open the floor for questions and answers. Always people in the first row.

Hi, thank you for your presentation. I have a question. Actually, I think I misunderstood the term desktop virtualization, because what I was trying to do is put GUI applications inside a container, managed by Kubernetes. We have already succeeded in putting Linux-based GUI applications inside a container. My question is: is it possible to run Windows-based GUI applications inside a container and manage them with Kubernetes? Thank you.

Well, that's not the purpose of our project, but I know people have been looking at that, specifically for Windows containers and such. I wouldn't be able to answer that for you; I could ask around about what we can do there. One of the things we, and also Minikube, have talked about many times is that people want to be able to run a local cluster, maybe from their VM, that talks to Windows services in their environment, or to virtual machines that run those. At the moment we don't have a good solution for that, I have to be honest about that. Thank you.

I use CRC and it is already a much better experience compared to running it on AWS before. So my question is this: in Minishift there is the profile command.

Right, right, right.

Okay, so in Minishift and in Minikube you can have multiple instances on your machine, which, if you're working on different projects or different streams, can be very helpful.

I understand that. For Minishift and Minikube this was possible because the resources each instance used on a machine were quite minimal; it wasn't that much. At this moment, since we're quite early with CodeReady Containers, we have decided to have a single profile, a single instance. But in the future it is possible for us to very easily provide support for multiple instances. We have everything in our code base to handle this already: there are multiple ways to talk to the VM, we have names properly coded for that, and we could even have multiple virtual machines within an instance, for example a master and a worker. So we already have support for a multi-node cluster. We will support this in the future, but at this point in time it's not something we have focused on. We've only had the first alpha release out for, I think, three or four weeks? So it's very early on, but I can definitely see people requesting it. So yes, it's an option, it's possible; I just can't tell you when.

Okay, well, I would like to thank everybody then. If there are more questions, you can find me afterwards. Okay, thank you.