Hello everyone, thanks for coming. My name is Ranjan; I work for Red Hat, out of the Bangalore office, and today I'm going to talk about CodeReady Containers. My goal in this presentation is to introduce CodeReady Containers, talk about how we build it, and finish up with a simple demo.

So first of all, what is CRC? CodeReady Containers is a local OpenShift cluster. Earlier we had Minishift, but it only worked for OpenShift 3.x. Then OpenShift changed its architecture and moved to an operator-based, self-managing cluster, so we needed to build this new tool to do the same thing: give developers a local cluster on their own machine, so they can try out their code and test how it would behave in a cluster.

But first, why do we need CRC at all? openshift-install, which is the supported installer for OpenShift 4, only targets cloud environments and big clusters. Developers don't need that; they want something local on their machine so they can iterate on their code quickly. So the advantage of CodeReady Containers is that you don't need a public cloud. openshift-install does have libvirt support for doing a local installation, but it creates three VMs, which is heavy, and it only works on Linux. And many developers who want to deploy their applications to OpenShift don't develop on Linux; they mostly use Mac, and Windows. So with CodeReady Containers we wanted a local OpenShift 4 cluster that also works on macOS and Windows, with the same CLI options and commands as Minishift.

Internally, CodeReady Containers creates a VM, derived from openshift-install, with a minimum of 8 GB of RAM, and we use the native hypervisor that the operating system provides: on macOS we use HyperKit, on Linux we use libvirt, and on Windows we use Hyper-V. And we use libmachine, which is Docker Machine's internal library, to manage the VM's lifecycle. So CodeReady Containers is really just two things: a disk image, which is the VM image with OpenShift installed in it, and a tool. But as a user you don't see that split; you get one single CLI tool to manage the whole cluster. We put everything into the same binary so that you don't have to download multiple things: you download one binary and you're ready to deploy your code and test it locally.

Now I'm going to talk about how we create the disk image, because that's really the main part of it; the CLI tool you get basically just manages that VM. The disk image is generated with openshift-install, and all the scripts we use to generate it are available at that link, inside our code-ready namespace on GitHub. Like I told you, openshift-install has libvirt support, but it doesn't create a single-node cluster; it creates three VMs on your machine, so you need a really beefy machine to try it out. We wanted something more lightweight than that, so we create a single-node cluster: there is just the master, where the OpenShift API server is running, and all the workloads are scheduled onto that master as well.
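To make that concrete, here is the generic OpenShift 4 knob for letting the master run regular workloads. This is a sketch of the mechanism, not necessarily the exact step our scripts perform:

```bash
# Mark the master schedulable so ordinary workloads can land on it.
# schedulers.config.openshift.io/cluster is the standard OpenShift 4 resource;
# the snc scripts may achieve the same effect differently.
oc patch schedulers.config.openshift.io cluster \
  --type merge --patch '{"spec":{"mastersSchedulable":true}}'
```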
We use openshift-install's libvirt provider to actually create the cluster, and we change the configuration so that it doesn't create a worker node and the master is able to run the workloads. Previously, openshift-install used to taint the master so that it wouldn't allow workloads to be scheduled on it. But later on, I think in the last month, they removed that, and now we don't have to make any changes to that configuration; the master is able to run those workloads directly.

We also scale down several of the operators, because in a development environment you don't actually need, say, monitoring. If you have a special use case where you want it, you can of course enable it yourself, but by default we scale down the monitoring operator, and we scale down the cluster-version operator, which is used for upgrading the cluster. And we do some housekeeping to reduce the disk space: we delete the logs, we stop journald from logging everything, and we delete some pods whose work is already done.

So what the snc.sh shell script does is: it uses openshift-install with the libvirt provider to create the cluster, and it sets the worker count to zero in the install config so that no worker VM is created. We also create some PVs, persistent volumes, so that later on you can create PVCs to use those PVs in your code. And we expose the internal registry so that you can push your images directly to it and then use those images in the cluster to deploy your app.

After snc.sh has run, we have a VM, which is a single-node OpenShift cluster. Now we have to ship it so that developers can actually use it. For that we run another script in our CI, createdisk.sh. Before that, we also do some other things so that a developer's work becomes easier later on. We add a developer user with an htpasswd identity provider: the username is developer, the password is developer, and with those credentials you can get into the cluster.

Also, to use openshift-install you need a pull secret, to pull the images from quay.io; the installer asks you for it. When we create the disk, we use our own pull secret, but we can't ship that to everyone, because then everyone would be pulling from the registry with our credentials. So we remove that pull secret, and we remove the cluster ID; when you get the CodeReady Containers binary, you pass those in as arguments during crc start.

Then, after all of that, we extract the disk image from the CRC VM. We have the qcow2 image, which has OpenShift 4 in it, and we convert it to the VHDX format so that it can run on the Hyper-V hypervisor, which is Microsoft's default hypervisor. On Mac, HyperKit can run the qcow2 image directly, but to run the image in HyperKit we also need some additional information, like the kernel command line and the path to the kernel. And there's more information that's needed to actually bring the cluster up, so we collect all of that in a nice little JSON file, so that later on the CLI tool can pull it out and start the VM the way it's supposed to. And we package all of it together in a tar archive.
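Roughly, the tail end of that pipeline looks like the following. This is an illustrative sketch; the real logic lives in snc.sh and createdisk.sh in the code-ready/snc repository on GitHub, and the file names here are placeholders:

```bash
# Scale down operators a local development cluster doesn't need:
oc scale --replicas=0 deployment/cluster-monitoring-operator -n openshift-monitoring
oc scale --replicas=0 deployment/cluster-version-operator -n openshift-cluster-version

# Convert the libvirt qcow2 disk for Hyper-V (HyperKit boots the qcow2 directly):
qemu-img convert -f qcow2 -O vhdx crc.qcow2 crc.vhdx

# Pack the disk, the kernel bits HyperKit needs, and the metadata JSON into a bundle:
tar cf crc_libvirt_4.2.13.crcbundle crc.qcow2 crc-bundle-info.json vmlinuz initramfs.img
```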
And we changed the extension to .crcbundle, because initially people started extracting the disk image and feeding just the disk image to CRC, but CRC was not expecting a disk image; it was expecting a bundle. So we changed the extension to .crcbundle.

So now we have the disk image with OpenShift in it. Next we need something that can start the OpenShift cluster, stop it, and, if you want, delete it. That's the CRC CLI tool, which is Go code that manages all of those things. We wanted a very simple interface for it: you basically need to know two commands, crc setup and crc start, to have your OpenShift cluster running locally.

What crc setup does is check whether your machine is able to run a VM and whether it has all the networking configuration needed to route DNS requests to the cluster. If crc setup finds that anything is missing, it does the setup for you automatically. Some of these are privileged operations; for example, you cannot edit /etc/resolver as a normal user, so crc setup will prompt you to enter your password. Don't be afraid to give it; it only changes those files to set up your DNS. The way the DNS is set up is that we add a configuration on the host to forward the DNS requests that are meant for the cluster to a dnsmasq container running inside the same VM. That way we don't actually have to run much on the host; we just reroute the requests to the VM.

After all of that you have a cluster, and you need an easy way to access it. So we have additional commands: crc oc-env, which configures your oc to talk to the CRC cluster, and crc console, which launches the web console. The console URL is somewhat cryptic, but you don't have to remember it; you just run crc console and the console opens up. To use the cluster, you can use the usual tools being developed around OpenShift: odo works, oc works, the web console already works, and there is also the OpenShift Connector for VS Code, which was shown in the previous presentation. All these supporting tools work with CRC out of the box.

These are the things we want to do in the near future. There is still no support for OKD, because for OpenShift 4 the VM we create is based on Red Hat CoreOS, and you actually need a Red Hat developer account to use that. It would be really nice to have OKD with Fedora CoreOS in CRC, so that it can be used by anyone who doesn't have a Red Hat developer account. Once that support lands in openshift-install, we will have it in CRC as well.

Podman support is another one. On Mac and Windows today, if you want to use Podman, there's one way: a tool called boot2podman. It also creates a Linux VM, and your Podman client tool talks to the Podman service running in that VM. Since we already have a VM with Podman support inside it, we just need to expose it, so that with CRC alone you can use Podman. That's something being worked on.

Another thing: right now, since this is pretty much a full-blown OpenShift 4 cluster, it takes a lot of resources. You need a minimum of 8 GB of RAM by default to run it, whereas with Minishift we could do it with 4 GB of RAM.
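If the defaults don't fit your machine, you can tune the VM size before starting. A quick sketch; option names and accepted values can differ between CRC releases:

```bash
# Give the CRC VM more (or less) memory and CPUs; memory is in MiB,
# and 8 GiB is the default floor for an OpenShift 4 cluster.
crc config set memory 12288
crc config set cpus 6
crc start
```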
So there's a lot of effort going into minimizing the resource consumption. Then another thing: crc start, crc stop, crc delete are very common operations, and if someone is working in an IDE like VS Code, they don't want to open up a terminal to do those things. So we wanted something that sits in your menu bar on Mac, or the system tray on Windows, from where you can perform those actions with a few clicks. That is also being worked on; it's what I was working on recently, and it's supposed to come in the next release of CRC. And there's the continuous upgrade to the latest OCP version: our release cycle is four weeks, so every four weeks we push out a new version whose disk image is based on the latest OCP release.

Okay, these are the resources: the source code for CodeReady Containers and for snc. If you encounter any issues, you can go there and open a GitHub issue, or look at the code and send a patch if you want something done. You can download CRC from this link; I think the slides will be available later. And on IRC we're in #codeready; you can talk to the developers there.

I'll now try to quickly show a demo of CRC: start the cluster and try to deploy a sample application. Let's do crc setup first. I already started the cluster once before, because it takes some time and I wanted to be ready, so everything is set up already, and now I just need to do crc start. And yep, it is going to start OpenShift 4.2.13, which is what the 1.4 release of CRC shipped. Once this finishes, I'll be able to access the console, and I'll try to deploy a sample application just for completeness' sake.

One more important thing: with earlier releases of CRC, the certificates in the cluster used to expire after 30 days, but that issue has been fixed now. So if you want to stick to an older version of OpenShift, you can do that now; the cluster will automatically regenerate the certificates, and they have a longer validity period.

[Audience question] On cloud.redhat.com you'll get a link to download CRC. Yes, you need a Red Hat developer account, because the builder images that the cluster needs are pulled from quay.io, and you need access to that registry; those credentials are used to give you that access.

[Audience question] Yes; the next release is supposed to have 4.3, which is supposed to come out tomorrow.

[Audience question about CI] I know some teams inside Red Hat that are doing that with CRC. We don't test it ourselves, but there are teams who have had success with it. You can do that: run it on a CI to test your application, delete it, then run it again on the next run.

[Audience question] Yes, I do; just put the machine in airplane mode.

"When OKD comes to life, will you still need a Red Hat developer account to actually use it?" Hopefully not. "So how do you then pull any images, if you want to base something on RHEL?" Then you'll use the quay.io builder images. Today those are based on RHEL; for OKD they would be based on CentOS or Fedora, and they would be available in a public registry.

Okay, so we have the cluster. I'll do crc console. You can check the status of the cluster with the crc status command; it says running, so we should have the console. Okay, let's try with oc. crc oc-env is the command that configures your oc to talk to the cluster; it's the same as in Minishift, if you've used that before.
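Condensed, the whole demo flow is just these few commands. The pull-secret flag is from the 1.x CLI and may differ by release:

```bash
crc setup                         # one-time host checks and configuration
crc start -p ~/pull-secret.txt    # boot the single-node OpenShift cluster
crc status                        # confirm the cluster is running
eval $(crc oc-env)                # put the bundled oc on PATH, pointed at CRC
crc console                       # open the web console in the browser
```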
Okay, the cluster seems to be up. Hmm, no, it's not that. It's a new cluster, so no, it did check that. Yeah, maybe when I changed networks the IP changed. Sorry, live demos don't usually work; I should have had a recorded version of it. But oc is talking to the cluster, so I think the network config is fine; I think it's just the web console. Let me check the pods in the openshift-console namespace. The console pod is running. Let's try to deploy a sample application, a sample Node.js application. You would usually have your own code for this and use odo or something else to deploy it. But yeah, time is up. Okay, thanks, everyone.
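For completeness, here is roughly what that sample deployment would have looked like against a running CRC cluster. The project name is illustrative; nodejs-ex is the standard OpenShift sample repository, and the hostnames are the CRC defaults:

```bash
# Log in as the built-in developer user on the CRC API endpoint:
oc login -u developer -p developer https://api.crc.testing:6443

# Build and deploy the sample Node.js app from source, then expose it:
oc new-project demo
oc new-app nodejs~https://github.com/sclorg/nodejs-ex
oc expose service/nodejs-ex
oc get route nodejs-ex   # the app comes up under *.apps-crc.testing
```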