Hello everyone and thank you for taking part in this Minikube session. My name is Predrag Rogic and I am one of the contributors to the Minikube project. This session has two parts. The first part is an introduction to the Minikube project, where we will cover why the Minikube project was created, what it is and what it's not, where we are now, and what the plans for the future are. We'll also talk about how you can join the community and start contributing to the project. The second part will be a deep dive into local Kubernetes learning environments, where Anders will introduce the three different environments that Minikube supports: hypervisor, container, and bare metal. There will be details about each of them and some suggestions on when you would want to choose one or the other. And since this is a recorded session, we'll leave some time for answering your questions at the end. The session time is limited to about 30 minutes, so let's jump straight in and talk a little bit about the background, and explain which problems this project was meant to solve. This year marks a special anniversary for Minikube, which started five years ago, offering a solution that would improve the existing single-node local cluster experience in Kubernetes. As you can see here, the original proposal was not to deal with multi-node clusters nor to be used for production workloads. And as we will see later on, this original vision changed a little bit, but for the better. The primary goal of Minikube is to make it simple and easy for people to run Kubernetes locally for learning and day-to-day development, including testing and debugging. The main guiding principles are to be inclusive and community-driven, user-friendly, to support all Kubernetes features, and to be cross-platform, reliable, high-performance, and developer-focused. Here are some specific design goals derived from those principles: a simple UX that allows setup and tear-down of the cluster with a single command, as easy as minikube start and minikube delete; a unified UX across operating systems; support for local storage, networking, auto-scaling, DNS, load balancing, and so on; minimal dependencies on third-party software as much as possible; and minimal resource overhead, which we'll touch on in more detail later. You can also visit the link to the original design proposal document if you're interested to learn more. We're now going to see how Minikube development reflects these principles and goals. This is example output of the minikube start command. That is all it takes to bring up a local Kubernetes cluster, in less than 20 seconds. Also, you don't need to separately install the Kubernetes tooling needed to operate your cluster, like kubectl, as you can use the one conveniently provided by Minikube for your operating system and architecture.
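As a concrete illustration of that single-command UX, here is a minimal sketch; it assumes a recent Minikube release, with the bundled kubectl invoked through the pass-through command:

    # bring up a local cluster with auto-detected driver and default CPU/memory settings
    minikube start

    # use the kubectl bundled with Minikube (no separate install needed)
    minikube kubectl -- get pods -A

    # and when you are done, tear the cluster down again with a single command
    minikube delete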
Now, there are other great tools as well that would allow you to achieve similar things, that is, to run a Kubernetes cluster locally, and they are usually specialized and focused on specific operating systems or architectures, container technologies, or virtual machines. Minikube, on the other hand, aims to support as many configurations as our users commonly use. And what you see here is not a definitive list of supported components, rather those that are being actively developed and thoroughly tested. So, starting from the lowest layers, Minikube runs on AMD64 as well as ARM64 CPU architectures. In terms of operating systems, Linux, macOS, and Windows are supported. Next, it can use a wide variety of hypervisors and drivers, like KVM, HyperKit, Hyper-V, and VirtualBox, then Docker and Podman, and you can also use an existing operating system, where the none driver would allow you to use your local OS and the SSH driver a remote one. For the bootstrapper, we use kubeadm, and for container runtimes, we support Docker, containerd, and CRI-O. Finally, while Kubernetes officially supports just the last three minor releases, we recognize that some users are, for various reasons, still using older versions, and so we support at least the last six minor versions, which are continuously tested. Similarly, this is not a definitive list of all the Minikube features, more of an overview of the most important or recent ones. Currently, there are 28 add-ons you can use to easily install Kubernetes applications, and two of them are installed by default: the default storage class and the storage provisioner. You can see all supported add-ons with the minikube addons list command, as shown here on the right. To install additional add-ons to the cluster, you can use the minikube addons enable command, and you can also disable and configure add-ons, and so on. Similarly to other commands, simply run minikube addons to get detailed usage help. For the advanced Kubernetes features, Minikube supports LoadBalancer, NodePort, filesystem mounts, feature gates, and so on. And there are now eight ways to push images, and some of them allow fast loading and building. To support the next billion users initiative and make Minikube more inclusive, there is a continuous effort to support non-English speaking people by allowing them to report problems in their native language via GitHub issues, and currently there are seven languages supported. Users can then have a conversation in their native language with the project maintainers and contributors, who would usually use the help of Google Translate. So sometimes it may sound a little bit funny, but you know, we're trying. And Minikube itself currently supports seven non-English languages: German, Spanish, French, Japanese, Korean, Polish, and Chinese. There are also plans to improve this further and add more languages, and here we need help from your side. If you remember from the beginning of the presentation, initially Minikube was envisioned not to support multi-node. But it turned out to be the most wanted feature users asked for, so that changed the original assumptions and adding multi-node support came into focus. It's one of the examples of how the community is actually steering this project. So after a couple of iterations and going through alpha and beta phases, multi-node finally became GA in December 2020. Now, while adding more features, more tests are also added to retain reliability, but they also require more resources. And since maintaining high performance is also one of the guiding principles, it has a continuous focus. That resulted in improvements of 86% faster start time and 20% less CPU usage. These metrics are measured with each PR, so that we know if something would negatively impact Minikube performance. Here on the right-hand side, I hope you can see an example of PR metrics, where with the Docker driver the start time is around 18 seconds. For details, I would recommend that you visit the two great presentations held by Medya and Priya at QCon last year. Also, ARM64 support was added for the Docker and Podman drivers, as well as for the Apple M1 chip.
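To make the add-ons workflow and the multi-node support mentioned above a bit more concrete, here is a short sketch; the add-on name is just one example, and the exact flags assume a recent Minikube release:

    # on a running cluster: list available add-ons and enable one of them
    minikube addons list
    minikube addons enable metrics-server

    # start a separate two-node cluster on the Docker driver, under its own profile
    minikube start --nodes 2 --driver=docker -p multinode-demo
    minikube -p multinode-demo kubectl -- get nodes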
There are new commands introduced, like image load and image build, pause and unpause, scheduled stop, ssh-host, etc. Amongst the latest add-ons that were added are auto-pause, the CSI hostpath driver, volume snapshots, and so on. To make Minikube more user-friendly and simpler, automation plays an important role, and hence the improvements in automating driver and CNI selection, setting default CPU and memory parameters appropriate for your machine, the auto-pause add-on, and so on. There's a monthly cadence for new releases, so many new features and improvements are delivered regularly. Okay, so what's ahead of us? The plan for the next release is captured in the current GitHub milestone, accessible through the link given here. The development plan for this year, amongst other things, includes removal of the dependency on libmachine, which is deprecated and no longer maintained (the last commit was some two years ago), documentation improvements, using the container runtime interface and container network interface by default, a graphical user interface for start, stop, and potentially other commands, and further improvements to resource monitoring and alerts. Currently, as you saw, it takes about 18 seconds to start up the cluster, and the goal is to go under 15 seconds, so we are close, but not there yet. Kernel-assisted mounts like CIFS and NFS are also in the plans. Other things will be prioritized and added to the development plan going forward. The Minikube community is continuously growing, and by the time you see this video stream, the numbers will likely be higher. There are more than 600 contributors at the moment, and the number of non-Google commits rose by almost 75% in the past couple of months. The GitHub project has more than 20,000 stars and more than 3,000 forks, and there are almost 5,000 Minikube Slack channel members. How can you become a part of the Minikube community? You can join the Slack channel and the mailing lists for users and/or developers. And since this is a community-driven project, you can also take part in the open biweekly office hours meetings, as well as triage parties. A triage party is a weekly meeting where maintainers and contributors discuss open issues, prioritize them, and make sure they are addressed in a timely manner. Last but not least, as seen in the example with the multi-node feature, anyone can impact the project by sharing how they're using Minikube and what they think could be improved or added. Feedback is also collected through an always-open quick five-question survey, and the results are regularly assessed. Please share your experience and thoughts with us, it really matters. If you would like to contribute to Minikube, here is some guidance on how you can start. The getting started and how to contribute guides are on the website. Those GitHub issues that are considered a good starting point are marked with the good first issue label, so look for those and don't be afraid. You will get further guidance and any help needed from maintainers and other contributors. We also need lots of help with documentation and translation, as well as specific help with expertise in networking, testing, CI/CD automation, containers, and so on. Due to time constraints, we could not go into much more detail here, but there are links in the presentation that can point you in the right direction, should you want to learn more. I hope this was informative for you, and I want to thank you for your time and attention.
Now, let's move on to the Minikube deep dive with Anders. Thank you. Hello, my name is Anders Björklund and I've been working as a Minikube maintainer for two and a half years now. Today, I will be talking about three different local Kubernetes environments: the hypervisor, the container, and the bare metal environment. The hypervisor environment creates a virtual machine running a separate kernel. This is also called hardware-assisted virtualization. In the container environment, we create a system privileged container using the same kernel as the host. This is also called OS-level virtualization. And in the bare metal environment, we run the Kubernetes components directly on the host. This host may in turn be a virtual machine, and then we will be using hardware-assisted virtualization under the hood. In addition to these environments, we also have Minikube running on different operating systems, in the traditional meaning, not in the Linux distribution meaning. So, when we run on macOS or Windows, we will need to run the Kubernetes control plane components in a virtual machine, whether it's visible to us or hidden from us. In addition to this complexity, we also have different architectures. Currently, we are trying to support two out of the five Kubernetes architectures: AMD64, the traditional x86-64 PC architecture, and also the new ARM64 architecture, like the Apple M1 or the Raspberry Pi or similar devices. For the Raspberry Pi, there is also 32-bit ARM, which is not used for the virtualization, but it can be run in native mode with Minikube. So, when you go to install Kubernetes on a machine, the first thing you need to do is to install a server running Linux, since the control plane only runs on Linux at this time. In Minikube, this is done by the driver. On this machine or server, we then need to install the container runtime. Traditionally, this was Docker, but you have other options to choose from, as we will see later. In Minikube, this is done by the provisioner. The third and final step is to install the Kubernetes control plane components. In Minikube, this is done by the bootstrapper. This will install the software and configure and initialize the control plane. In this presentation, we will not cover running multi-node or multi-master, but after you have set up the control plane, you can, for instance, join some worker nodes to the control plane node. You can install the Kubernetes client, kubectl, on the host, and finally go on to deploy your applications. When you run a single-node cluster with Minikube, we remove the taint from the control plane node, using the kubectl taint command, so that you can also run apps on that single node. Worth noting is that Minikube not only provides Kubernetes access, but we also provide SSH access to the machine, and we even expose the container runtime to the user, for instance, for loading and building images. Traditionally, this was done by a command called docker-env that gave you access to the Docker daemon running on the machine, but in later versions, we have bundled this into a minikube image command to make it agnostic to the different container runtimes, so there is an image load and an image build command.
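As a rough sketch of the SSH and image access just described, again assuming a recent Minikube release (the image name is only an example):

    # open a shell on the Minikube machine
    minikube ssh

    # the traditional way: point the local docker client at the cluster's Docker daemon
    eval $(minikube docker-env)

    # the runtime-agnostic way: load a locally built image, or build directly on the cluster
    minikube image load myapp:dev
    minikube image build -t myapp:dev .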
These three environments can be further broken down into six different environments. For the hypervisor, we have the traditional Oracle VirtualBox external installation, but you can also run with the so-called native virtualization of each operating system. For the container environment, we have both the Docker engine running as processes on Linux, and the Docker Desktop environment, which runs using virtual machines on macOS or Windows. And for the bare metal, finally, you can either run it locally, as we have done traditionally, or, in the latest version of Minikube, we can even run a remote cluster, so that the actual cluster can reside on a different physical machine than your current laptop, and then we use SSH to connect to this remote machine. You can then break this down even further by looking at different vendors of the different solutions. So in addition to VirtualBox, you might also select VMware or Parallels for your external virtualization, and the operating system virtualization comes in different flavors. For instance, on Windows, you would use Hyper-V, and on Mac, you would use something xhyve-based, such as Docker's HyperKit. And for Linux, we're using the libvirt daemon to access KVM, the kernel virtual machine. For the container environment, we also support Podman as an alternative to Docker when running natively on Linux, and you have Docker Desktop running on Mac or on Windows. There is also the new podman machine that you can use as an alternative when using Podman on Mac. The bare metal, as we mentioned earlier, can be a regular physical server or a virtual machine server. It can be run locally, connected to your local network, or it could be run as a virtual machine, or it could even run remotely in a private cloud. Here we can look at the differences between the approaches. In the simpler, common solution of having a physical server and just installing the container runtime directly on it, we have one Linux kernel running directly on the server, and this container runtime will start the pods for Kubernetes. When we have a virtual server, or when we're running Minikube in a virtual machine, we have a hypervisor running the Linux kernel on top of the native kernel of the host, which could be a Linux kernel, or it could be a Windows or Mac kernel. And in this virtual machine, we will then run the container runtime, like we did on the physical server. When running Minikube with the Docker driver or the Podman driver, we create a system container that emulates a node, and it works something like an actual virtual machine, except that it is running on the same native kernel as the host, which gives it a lower overhead. Inside this driver runtime, we run the Kubernetes container runtime. So here we show Docker in Docker, but you can also run containerd in Docker or CRI-O in Podman. And finally, the most complex solution is where we have the Docker Desktop driver: there we have Docker running in a virtual machine with a Linux kernel on the native system, and inside this virtual machine, we have the same setup as with the Docker engine driver. So there are two different kernels involved here, but there are also two different container runtimes involved. If we look further at those three install steps, we can subdivide them into more detailed approaches. The first thing is that you need an image in order to start your operating system. We do bundle a Minikube ISO for use with the virtual machine, which was originally based on Boot2Docker. When running in the container environment, we use Ubuntu 20.04 as the base. Then you will use the machine provider to install the server with Linux using this image, be it VirtualBox or Docker running the image.
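Picking between these machine providers is done with the --driver flag of minikube start; here is a sketch with commonly used driver names, where availability of course depends on your platform and on what you have installed:

    # hardware-assisted virtualization
    minikube start --driver=virtualbox   # external hypervisor, cross-platform
    minikube start --driver=kvm2         # libvirt/KVM on Linux
    minikube start --driver=hyperkit     # xhyve-based, on macOS
    minikube start --driver=hyperv       # on Windows

    # OS-level virtualization (system container)
    minikube start --driver=docker
    minikube start --driver=podman

    # bare metal: the local host itself, or a remote machine over SSH
    minikube start --driver=none
    minikube start --driver=ssh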
Then we have the second step, where we provision the container runtime. We configure and install Docker, containerd, or CRI-O, and we also make sure to make the Kubernetes images available in this runtime. We call this the preload step, where we preload all the needed containers from a tarball into the environment, or from a cache directory. And finally, the third step was to bootstrap the Kubernetes control plane and also to install the Kubernetes networking, if we're running in a multi-node scenario or if we just want to replace the networking on the local single-node device. For the operating system, we had the Boot2Docker option, the same as Docker Machine, which is based on Tiny Core Linux. This is now deprecated in favor of a custom Buildroot environment, which was originally based on CoreOS. This image is used for the virtual machine; it's traditionally called the ISO, referring to the CD image. For the container environment, called KIC for Kubernetes in Container, we are using Ubuntu, which is the same as used in the kind Kubernetes product. And then we install the software using regular system packages inside this OS. You can also bring your own supported Linux distribution, and we will try to configure and install Kubernetes based on the OS that you provide. This is not fully supported, and we will need help from users to support the more alternative installations here. For the container runtime, we have traditionally used Docker as the container runtime in Kubernetes, and Minikube has supported CRI-O for many years as an alternative. For building the images for CRI-O, or for loading them, we use the podman command line tool running in the virtual machine. The latest addition to this family is to run containerd, and when using containerd, we use BuildKit and the BuildKit daemon as the means of building images on the Minikube machine. And since the latest version of Kubernetes, the dockershim has been deprecated, which means that we need to provide a CRI also for Docker. CRI stands for container runtime interface. Traditionally, Minikube shipped all the Kubernetes components in a single binary, but since this was a maintenance burden and also meant it differed a bit from a regular Kubernetes installation, we are now using the standard kubeadm installer. This is the same binary you would use to install a production cluster. And as I mentioned, we download the required binaries and the images for this kubeadm installer in the preload step. When it comes to the CNI, or container network interface, we have traditionally used Docker, but now, since the dockershim is deprecated, we are making a CNI the default for all the container runtimes, either using the standard single-node bridge or, perhaps in the container environment, the kindnet ptp alternative. But when it comes to multi-node, you would install something like Flannel or some other option for connecting multiple machines.
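The container runtime and CNI choices described here are also exposed as flags on minikube start; a sketch, assuming a recent release where these flags and values are available:

    # pick the container runtime for the cluster
    minikube start --container-runtime=containerd
    minikube start --container-runtime=cri-o

    # pick a CNI explicitly, for example Flannel for a multi-node cluster
    minikube start --nodes 2 --cni=flannel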
So with all these alternatives, when would you choose to use a specific driver? Here I will give three examples of when you would use a driver. The first one is the so-called traditional setup, where you would run Minikube in a virtual machine, the way it's been done for years now in Minikube, traditionally using VirtualBox, but there are also the native virtualization alternatives. The second is to run Minikube in a system container, using the new Docker driver. And the third option is running Minikube in a VM provided by the user, in a localhost shell on the control plane itself. So for the first use case, you would run minikube start and select the VirtualBox driver. You can also run minikube start --vm and it will pick a suitable native driver. The pros of this approach are that it is running the OS provided by Minikube, which is then compatible with the Kubernetes version being installed, and this setup has been tried and tested. The cons are, however, that it has quite a high overhead compared to some of the other options, and you will need a platform with support for virtualization, or in this case for VirtualBox, which does not support the ARM64 architecture. If you have taken the Kubernetes introduction course, I believe it uses VirtualBox. The second driver we will look at here is the new Docker driver. This will start a system container with Minikube and everything in it. Since it's a container and not a full virtual machine, this gives a lower startup time, assuming that, if you're on a platform which needs a virtual machine in order to run containers, this virtual machine was already running beforehand. As for resource usage, when running natively on the host you will not have to spin up a second kernel, and you can share memory with the other processes on the host. The cons with this Docker driver are that the kernel compatibility is limited to what the host can offer; you cannot use this Minikube machine to run all the different kernel-based deployments when it comes to, for instance, storage systems or other uses of Minikube. And finally, the networking when running in a virtual machine is somewhat limited, which means that we need to tunnel all the networking into this virtual machine running the actual container, as opposed to running the virtual machine directly, where we have access to the network interface of the machine. Running Kubernetes in Docker is something that kind has been doing for Kubernetes testing, and it has been adapted for Minikube use, with the container building and the SSH access. And finally, we have the so-called none driver, which means that Minikube will run all the kubeadm commands and all the containers directly on the host. The pros of this approach are that it has low overhead, since we don't have any virtualization or any containerization, and it's also quite simple in terms of which processes and which virtualization layers are running. The cons, however, are that since we are logged into the control plane, we don't have all the niceties of running a regular desktop, and there is no isolation, which means that all processes started will run directly on the machine. So this is not something that is recommended to run on your laptop, but it can be used when running in a virtual machine that you have set up, or, for instance, when running in CI. If you have done the Hello Minikube tutorial with Katacoda from the Kubernetes website, this uses the none driver in a virtual machine accessible from the web console. And that was it for the presentations. If you have further questions, you can always reach us on Slack, or on GitHub when it comes to issues. And we will now be available in the chat to answer any other questions you have. Thank you for listening.