Good morning, or good afternoon? It's 11:55, so I think good morning is still appropriate; five minutes from now it would be good afternoon. KubeCon! It's so great to be here in person in Valencia, Spain. I'm so happy to see you all in person again, and I'm here to talk about Minikube. First of all, Sharif could not make it, so I'm presenting for both of us. Who am I? I'm a tech lead manager at Google, and I've been a Minikube maintainer since 2019, and I've worked on a number of other open source projects, including several of Minikube's side projects, over the past four years. Sharif is also a software engineer; he's been part of the Google Container Tools team since 2016 and a Minikube maintainer since 2019. Minikube started in 2016. It was actually started at Google by the same team that creates container tools, for example Skaffold, Kaniko, Jib, kpt, and Tekton. That same team created Minikube six years ago, and the original proposal was simply a tool that gets you started with Kubernetes without pain, for somebody learning Kubernetes who just wants to get their hands on it. Sometimes it's hard to remember that this project has been alive and working for you for the past six years; some people remind us of that on Twitter, and I took a screenshot. But why is Google supporting Minikube? Google has been the main sponsor of Minikube in terms of head count, with full-time software engineers working on it. Google contributes to a lot of open source projects. And, I mean, Kelsey. I love Kelsey, so give it up for Kelsey. I think he's such an inspiring person, such a great person. But he dissed Minikube. So Kelsey, you're not perfect. So Google has been contributing to Minikube as the main sponsor, but we have maintainers across the world.
At one point we dreamed of having maintainers from four continents. Unfortunately we only have three; we're missing Africa, but we'd love to have maintainers from all over the world. But why Google? Google cares about the development experience across the ecosystem, especially for Kubernetes. That's the main reason Google has continued supporting this, and I'm grateful Google pays me to work on it. The primary goals of Minikube have been laid out, and they're very clear on the website: inclusive, community-driven, user-friendly, support for all Kubernetes features, cross-platform, reliable, high-performance, and developer-focused. These are our goals and principles. Here is a fun chart I generated: Minikube's lines of code. Since we started in 2016, we've grown to about 1.5 million lines of code. It looks a little scary, but don't be scared; come contribute. Who is behind Minikube's emojis? By show of hands, who has used Minikube here? Oh my God, almost everybody. You know Minikube's emojis, right? People tell me, oh, I like the emojis. So who are the people behind the emojis? 729 contributors have contributed to Minikube, and we actually built a tool in Minikube called pullsheet, which we open sourced as well. That tool visualizes the contributions all our contributors make, including triage contributions: if you label issues for us, organize our issues, or help other users, we recognize that. Actually, go to the Minikube website and you will see yearly dashboards of all the contributors to Minikube and exactly what they did. We have some fun categories, like the most helpful PR reviewer, so check it out. The tool we open sourced is called pullsheet; I'll have a slide later with links to all of these.
One thing I want you to know about Minikube is that testing Minikube is different, very different. I think no other project in our space does testing like Minikube. What do I mean? Minikube, as you might be aware, uses many virtualization technologies to start Kubernetes for you: a VM driver, a container driver, or no driver at all. But you cannot test that inside a container, and you cannot test it on a normal Linux VM; you need hardware that supports nested virtualization. This was actually our first integration test rig, which we built in Google's San Francisco office. We basically bought a bunch of Windows, Mac, and Linux machines with GPUs, hooked them together, and made a Jenkins cluster out of them. Unfortunately, this test lab died because of COVID, because we could not go into the office to unplug it or restart it. But that was our first integration test lab, because Minikube testing is hard and different: we need nested virtualization, and you cannot get that for macOS or Windows in the cloud. Minikube supports a huge range of virtualization technologies: all the major operating systems, and the two main CPU architectures, x86 and ARM. By the way, while I'm up here: please say x86 instead of AMD64. I squint every time I have to work out whether something says ARM64 or AMD64; it's so hard. Just say x86, please. Let's all agree on this. x86, yes. Thank you. So Minikube supports many architectures and many runtimes. What is a runtime? Raise your hand if you don't know what a container runtime is in Kubernetes. Okay, it seems like everybody knows. So Minikube supports three different container runtimes for your engine, which is separate from the driver, and also different CNIs. And we test all of that: all the green ones on this slide, we actually test.
The grey ones we don't have integration tests for yet. And we have a beast of engineering behind this; we've built a huge amount of automation for testing. Minikube's testing is very, very comprehensive. We have 46 VMs testing Minikube across different clouds, each giving us a different type of testing: GCP, AWS, Equinix Metal, Azure, MacStadium, and GitHub Actions. We also have 296 integration test cases, plus unit tests of course, and a detailed list of the integration tests. Why do we emphasize integration tests so much? Because we are a small team maintaining a large project, and it's very hard not to break things. We break things all the time; I break things all the time. And I want to save myself from myself by adding tons of integration tests. For example: a user starts Minikube, deploys an app, enables a CNI, stops Minikube, starts it again, and expects all of that to still be there. So let's add an integration test for that, and for 296 other scenarios. There's a list on our website of all the scenarios Minikube tests. That makes my life easier: if I accept a PR, I know it's not going to break things. But, but, but, this is something I want to talk about. When you have so many integration tests across so many VMs and so many clouds, you review a PR and you see five failures out of 300. We all have flaky tests. Who here has flaky tests at their company, a test that fails sometimes but it's not a real failure? Okay. I was a little terrified that nobody was going to raise their hand. For the people who could not see, I saw about 12 hands. So we have tests that fail maybe 10% of the time, but those are innocent failures, not real test failures.
But when you have five or six of them on one PR, you don't know which one is a flake and which one is a real failure caused by that PR, and we have been burned by that in the past. So we built a system we call the flake rate system. It comments on the PR and tells you, with a visualization: this test that failed on this PR has never failed on master before, so it's most probably because of this PR. We built this flake rate system for Minikube on top of gopogh. And I'm going to talk about gopogh; I was adding this picture about 20 seconds ago while Googling it, so you can see I didn't do a great job cropping the Google search. Anyway, what is gopogh? We have a lot of integration tests, and if you have seen Go's raw integration test output, Minikube's test logs are very verbose. It's good that they're verbose, because when something fails we know exactly what's going on, but a failed Minikube test log can be 10,000 to 20,000 lines, and that is really hard to read if you're looking for a specific test. So I built a tool called gopogh that converts raw Go integration test results to HTML. Let me show you an example. This is what raw results look like for, say, 290 tests; it's very hard to tell what is what, right? Convert that to HTML and it looks like this: you get a summary, you can fold and unfold tests, here are the failures, here are the durations, you can sort them, and you can jump to every single one and open each in a separate window. This helps us squint less when we review PRs for Minikube. And, by the way, we built the flake rate system on top of gopogh.
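To give a feel for the flake-rate idea, here is a toy sketch, not Minikube's actual implementation: given each test's pass/fail history on master, compute a per-test historical failure rate. A failure in a test whose historical rate is near zero is probably a real failure introduced by the PR; the test names and data below are made up.

```shell
# Columns: test name, result on a master run.
cat > history.csv <<'EOF'
TestStart,pass
TestStart,pass
TestStart,fail
TestPause,pass
TestPause,pass
EOF

# Count runs and failures per test, then print each test's flake rate.
awk -F, '{n[$1]++; if ($2=="fail") f[$1]++}
         END {for (t in n) printf "%s %.0f%%\n", t, 100*f[t]/n[t]}' history.csv
```

With this history, TestStart shows a 33% flake rate (it fails on master sometimes, so a failure on a PR is weak evidence), while TestPause shows 0% (a failure there deserves a close look).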
So if you work in Go, you could use gopogh to get a more user-friendly way of looking at test logs. This is a diagram of our infrastructure situation: we hooked up many clouds to one main Jenkins instance. Minikube also speaks your language. We have very enthusiastic translators who have added translations to Minikube, so you can run Minikube in your own language. We added the framework in 2019, and you can check it out in the Minikube repo if you want to add translation to your own Go app. Currently we have English, German, Spanish, Chinese, French, Japanese, Korean, and Polish, and it's very easy to add your own language. If you're enthusiastic about a language and want to add it to Minikube, just go to the Minikube website and search for translation. There's basically a JSON file that you fill out, and honestly you don't have to be a software engineer to do any of that; as long as you have the language skills, you can contribute more languages to Minikube. Here's the slide I promised, with the side projects of Minikube, if you want to take a look or screenshot it. I'll go over them quickly. slowjam is a tool we built to visualize the stack traces of a Go app: if you want to see what is doing what and what is taking how long, visualized, use slowjam. Triage Party is another tool that graduated out of Minikube. I call these Minikube side projects: we first built them for Minikube and then gave them to the world and said everybody can use them. Triage Party helps you triage issues in a crowdsourced manner. Minikube has 12,000 issues on GitHub. Can you believe that?
And again, there are not that many of us maintaining Minikube; it would be very hard to triage all of it by ourselves. So we built this tool so that you can crowdsource issue triage. We also have a weekly meeting called Triage Party, Wednesdays at 11 a.m. Pacific time; you're welcome to join our party on Wednesdays. gopogh I already talked about. time-to-k8s is another tool I really like. We built it for Minikube, but it's available to everybody. We built it when we were trying to make Minikube fast, because Minikube used to be very slow, by the way; if you're a long-time Minikube user from three or four years ago, you'll remember it used to take three or four minutes to start. We invested a lot, and Minikube is not so slow anymore. time-to-k8s measures exactly how fast a Kubernetes cluster is ready to be used. It tells you, in a visualized format, that this Kubernetes cluster is ready in 30 seconds or 60 seconds, DNS is answering in 75 seconds, the API server and etcd are up, and so on: measurements that matter for Kubernetes. That way you can compare Minikube against other similar tools, say k3d or MicroK8s or whatever. It's a great tool. Minikube CI examples: a lot of people ask me, can I use Minikube in Prow, in GitHub Actions, in Cloud Build? You can see detailed examples of that. And pullsheet is the one we use to generate the contributor graphs: who contributed what and how much, including the triage contributions. Now, a slide I really want to talk about: Kubernetes 1.24. Kubernetes 1.24 is a big one, a really big one for Minikube. Because Kubernetes, as you probably all know, removed support for dockershim. That means Kubernetes is no longer maintaining that code.
That code has been handed over to Mirantis, who will continue maintaining it. But it means Kubernetes, by default, will not work with the Docker runtime anymore. If I go back to the previous slide, we had three runtimes: Docker, containerd, and CRI-O. Docker would no longer work. And that is a really, really bad thing. Why? Because the Docker runtime matters a lot for local developers. When you build a Docker image, what do you build it with? docker build, right? And if you then want to move that image into your cluster in Minikube, just copying the image takes a long time: if you build with Docker and import it into containerd, it takes a while. We actually generated a chart of that; it's 36 times slower going through containerd. So Minikube decided we really care: our main goal has always been developers. So we continue to support the Docker runtime with Kubernetes 1.24, even though we now do it through the third-party open source tool maintained by Mirantis (cri-dockerd). Another Kubernetes 1.24 story we're struggling with is cgroup v2. cgroup v2 is causing some headaches. Anybody else getting that headache? Okay, one person? Come talk to me afterwards. So we're working on it: the beta release of Minikube supports cgroup v2 with Kubernetes 1.24, with a gotcha, with a gotcha, that we're working on. We decided not to leave behind our users who really care about building images fast. You know the minikube docker-env command; it's one of the most popular ways to build images, and it's very fast. Minikube has eight ways of building images if you are a developer; minikube docker-env is the fastest one, and we'll keep supporting it for you. It stays 36 times faster. Now I want to talk about global warming, a completely different topic. Why do I want to talk about global warming?
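Before that, to make the docker-env workflow I just mentioned concrete, here is roughly what it looks like; the image and app names are illustrative:

```shell
# Point the local docker CLI at the Docker daemon inside the minikube VM or
# container, so images build directly into the cluster with no copy step.
eval $(minikube docker-env)

# Build as usual; the image is immediately visible to the cluster's runtime.
docker build -t my-app:dev .

# Use the locally built image; Never pull, since it already exists in-cluster.
kubectl run my-app --image=my-app:dev --image-pull-policy=Never

# Point the docker CLI back at the host daemon when you are done.
eval $(minikube docker-env --unset)
```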
Minikube used to burn people's legs. We actually used to joke about that; this is from Minikube's meeting in 2016. We used to be really bad: when you turned Minikube on, your laptop's fan noise would spin up, and there were so many issues where people complained about it. We fixed all of that, and I want to mention how. We generated flame graphs for every function in the stack trace and figured out exactly which ones were using so much CPU. I don't want to go deep on that here; there was another talk two KubeCons ago, and I'll link to it. We fixed the CPU issues: 50% less CPU usage, saving energy. But we have two newer things that not many people use yet, and you could use them if you want to save energy. There's a command called minikube pause that pauses the Kubernetes API server but does not pause your applications. What does that mean? Say you apply an app to your Kubernetes cluster: you run kubectl apply, and that deploys the app. Now if you run minikube pause, what happens? It pauses the Kubernetes API server, basically pausing Kubernetes itself, but your app is still running inside Kubernetes. So you can pause Minikube whenever you want, unpause it whenever you want to apply a new YAML, and pause it right after, because you don't need the Kubernetes control plane in between. We also developed an add-on for Minikube called auto-pause that automatically pauses Minikube for you when you haven't used it for five minutes; you can enable it with minikube addons enable auto-pause. So if you want to be a good citizen and save energy for our planet, use that. I care about this: I heard in a KubeCon talk that it's projected that 8% of the world's electricity will be used by data centers and software.
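Concretely, the pause workflow I described boils down to a few commands; the manifest names here are illustrative:

```shell
minikube start
kubectl apply -f my-app.yaml     # deploy while the control plane is up

minikube pause                   # pause the API server; the app keeps running

# Later, when you need to apply more YAML:
minikube unpause
kubectl apply -f more.yaml
minikube pause

# Or let minikube pause itself after it has been idle for a while:
minikube addons enable auto-pause
```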
So we in the software world can do a little something, not much, to save energy. Minikube loves benchmarking. You've already seen some benchmarks on my slides, but we have a dedicated section of our website for Minikube benchmarks. If you go to the Minikube website now, under the benchmarking section there's a section for CPU usage, a section for image builds, and a section for time-to-k8s, and we run them weekly, daily, and per release. We also benchmark Minikube against similar tools like kind, k3d, and MicroK8s, so you can see how Minikube is doing against similar tools at any time. If you like benchmarking like I do and want to see more of it, go to the Minikube website's benchmarking section; we have tons of things for you to look at. Now let's talk about a whole new topic: Minikube's base images. Who here uses Minikube's VM drivers as opposed to the Docker driver? Not that many. And who uses the Docker driver? Okay, more people use the Docker driver. So we have two base images in Minikube. One of them is for Docker; it's basically a Dockerfile based on Ubuntu. The other is an ISO we built for the VM drivers, and this ISO is six years old. We basically built our own Linux; we are maintaining a Linux distro for Minikube, a just-enough Linux for Kubernetes. We're planning to graduate this project out of Minikube, just like the many other projects that graduated out of Minikube, so it will be an ISO for the whole world with just enough kernel modules for Kubernetes. We handcrafted this ISO, and it's very small compared to similar ISOs I've seen; some of those are 800 megabytes, some a couple of gigabytes. We first started from a CoreOS-based build, but we diverged so much that you can no longer see any similarity between CoreOS and Minikube's ISO.
There's also an advantage here: our benchmarking showed that Minikube's VM driver actually uses the least CPU, which surprised even me, compared to the Docker driver. Minikube's VM driver uses less CPU, and that's because we handcrafted this ISO. I want to pivot to a chart based on our surveys; we collect surveys. We have three types of Minikube users: they mostly use Minikube to learn Kubernetes, to develop on Kubernetes, or in test and CI. But I want to talk about a new category of you out there; you're sending survey responses and writing blog posts, and we hear you. I think it was last year that Docker Desktop announced they would charge companies with more than 250 employees or more than $10 million in revenue, I believe, for using Docker Desktop; Docker Desktop is a paid commercial product. Initially, a few users posted blog posts saying "I am using Minikube as a Docker Desktop replacement," and I thought, oh my God, this is a new type of user. And it turns out the demand is really high: I did a quick analysis of the survey and there's a huge amount of interest in this, and you can see some screenshots of it. A lot of people are very happy replacing Docker Desktop with Minikube. For that reason, we added a feature that is very, very ironic: you can start Minikube without Kubernetes. I never thought I would see this. I've been maintaining Minikube a long time, and I developed a feature that starts Minikube without Kubernetes. It's just a Minikube VM with a container runtime inside: Docker, containerd, or CRI-O. People who use Minikube as a Docker Desktop replacement use this flag, and I just wanted to show it to you. Who else here uses one of the tools similar to Minikube, like kind, k3d, or MicroK8s? Okay, hands up.
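As a quick recap, the Docker Desktop replacement setup I just described looks like this; the flag exists in recent minikube releases, and the image below is only an example:

```shell
# Start a minikube VM (or container) with just a container runtime inside,
# no Kubernetes control plane at all.
minikube start --no-kubernetes

# Point the free, open source docker CLI at that runtime.
eval $(minikube docker-env)

# Plain docker usage, no Kubernetes involved.
docker run --rm hello-world
```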
There are many, many nitty-gritty differences between all of them, but if you ask me for the main difference, I would say it's that Minikube supports multiple container runtimes. All the other tools I mentioned use only the containerd runtime; Minikube has Docker, containerd, and CRI-O. Minikube is more diverse in that sense, and the Docker part is very important for fast image builds. By the way, let me pivot back to people using Minikube as a Docker Desktop replacement. As you know, Docker is two parts: one is the open source Moby project, the open source engine, and the other is Docker Desktop. Docker Desktop is a commercial product, but the Docker container runtime itself is free. So when you use Minikube as a Docker Desktop replacement, you can basically install the Docker CLI and still run docker build, in an open source, free way, against Minikube's Docker daemon. That's how people are using it. It can get confusing. So the main difference with Minikube, I would say, is the container runtimes: we support all of them. And the second difference is fast image builds: 36 times faster, according to our benchmarks, than the other tools. If you're an app developer who wants to develop apps on Kubernetes on your laptop, I think Minikube is the only answer for you. Also, our integration testing is comprehensive; in our space, I don't know any other project doing this level of massive integration testing on physical machines. What's the advantage of VM drivers? There was a time, I think early 2020 or maybe late 2019, when a lot of people asked me whether there was even a need for Minikube to keep supporting the VM drivers. I'm kind of a stubborn guy, and I said yes, I want to continue supporting VM drivers.
I never thought I would see this level of interest in VM drivers again, but I myself love the VM drivers more than the Docker driver. I don't know why; I just love them. With my stubbornness I kept working on them, and now there's a huge amount of interest, and a lot of people are using VM drivers again, because Docker Desktop is a VM driver too: you're never going to run a Linux container natively on Mac or Windows. You need Linux; you need something to virtualize that for you. And I like Minikube's ISO; it's very small, which is one of the reasons for my stubbornness about it. The VM drivers clearly, clearly use significantly less CPU than the Docker driver. And one of my favorite things about the VM drivers is that you can hit the IP directly. If you have a service deployed on a host port in Kubernetes, say port 80, you can hit Minikube's IP on port 80 directly. With container drivers like Docker or Podman, you have to translate that port to a random port, and it looks kind of ugly, to be honest: instead of Minikube's IP on port 80, it's port 32-something, some random port. It's an extra complication that I personally don't like. Now, two pieces of exciting news. I saw a lot on Twitter, and we also have survey responses: about 32 responses, I think, explicitly said, when we asked what Minikube could do better, that they want VM drivers on M1. M1, as you probably know, is Apple's new hardware; it's ARM64-based, and people want a VM driver that works on it. You could use the Docker driver, but. So I have exciting news for you: we have the QEMU driver working, and I tried it yesterday myself. A huge amount of work went into this; it took us a long time, and you were patient with us. Thank you for your patience; it was hard to deliver this. We have a brand new driver, the QEMU driver.
So this means that even on ARM64 and Apple M1, you can start Minikube with the QEMU driver. I have a personal fondness for QEMU; I think it's a great open source project, and my dream is to make QEMU the unified driver everywhere, on any platform. You would have QEMU on Windows, Linux, and Mac, all the same VM driver. But I need some help: if you're an expert on virtualization and ISOs, come talk to me, and I might actually be hiring as well. You can try the QEMU driver today. Basically, brew install qemu on your Mac, then install the beta version of Minikube, not the stable one, and run minikube start --driver qemu2. And if you want to download the Minikube beta, it's easy: go to the Minikube website and click on your platform, for example macOS ARM64 beta. Make sure you choose the beta; that way you can try the QEMU driver. It was very hard to deliver this, to be honest; we've been working on it for a few months. We basically had to rebuild our just-enough Linux for Kubernetes for ARM64: every package had to be done again, every kernel module. Then we had issues with AppArmor. AppArmor did not like our EFI setup, because on an ARM64 machine you cannot use a legacy BIOS anymore; you need an EFI bootloader. We put a tremendous amount of energy into making EFI work, and then AppArmor was unhappy: it did not like EFI. It was a lot of work. Kudos to Sharif Elgamal; he's not here today, he's in California and couldn't make it, and he really did amazing work on this, and also to another of our maintainers, who guided us through it. I'm very grateful to have such an amazing team and amazing maintainers on Minikube. So try it; try the QEMU driver. Another piece of exciting news: we have a GUI for Minikube. Finally. This was something people had been asking us for, and I was always on the fence about it, thinking, a GUI, come on, guys.
Minikube does not need a GUI, I thought. But now we have a GUI; you convinced us. Okay? It's built in Qt. I can show you a little bit of it. So this is the Minikube GUI, with a little tray icon like this. You can see it starting Kubernetes; I can see there's a QEMU instance running, and you can create another one like that. This is in early development. If you want to give it a try, go to the Minikube website and search for Minikube GUI; we have instructions on how to install it. Cool. New contributors are always welcome. Check out our office hours, Mondays at 11 a.m. California time, and look out for good first issues. We are very friendly, but we also want experts, and I am hiring. So DM me on Twitter if you want to work on low-level Linux stuff, or if you have GUI skills like Qt and C++, or hypervisor technologies and Buildroot; Buildroot is the tool we use to build our own Linux. That was the end of my slides, and I just remembered I forgot to put Minikube's Twitter handle on a slide: it's minikube_dev. Follow us; whenever we release, we share the release on Twitter. And I'm available for any questions you might have, so thank you very much. If you have any questions, let me know and I'll get the mic to you.

Hi, thank you a lot for your talk. I'm interested in the VM driver, and I would like to know several things about it. First of all, you just said that you will move it to a dedicated repo, so what are your plans regarding the kernel version? Because, if I remember correctly, the kernel version of the image is quite dated, and I would like to know if you plan to bump it.

Your question is about... I didn't quite understand. The versioning of the ISO?

No, the version of the kernel inside the VM driver.

Ah, yes. Currently we have kernel 4.9, which is a shame, I know. We have been wanting to go to kernel 5.10.
However, we waited until we figured out the bootloader situation. The bootloader was very difficult to get working, and we didn't want to introduce two big changes at the same time. Now that we have the bootloader under control, we're definitely going to invest in kernel 5.10. I think that's the right thing to do, especially with cgroup v2 being supported mostly on kernel 5, yeah.

Okay, and then regarding Buildroot: I see in your repository that you have a dedicated directory of .mk files, basically the Buildroot recipes. Do you have some plan to upstream them? It would be far easier for you.

I like your question; it shows you've actually looked at this. We do not plan to include the file system overlay, which is what you're referring to, in the graduated project. The way we're going to build it is that anybody who wants to use this ISO gets the ISO and adds their own files on top of it. Minikube will be one user of that model: the overlay will be built as part of the Minikube build, but on top of the same generic ISO.

Okay, and then one more remark, maybe, because I'm not sure about it. You said you're using a UEFI BIOS. What about using U-Boot, particularly for the ARM port, to boot Linux? Because U-Boot with Linux on ARM64 is just perfect, and about UEFI I'm not so sure. Do you have some plan to maybe use U-Boot? I'm almost sure that within Buildroot you can just tick one option in make menuconfig and you have U-Boot.

I don't think we're going to have that option. It would be slightly beyond our scope to keep that option configured. If we have a contributor who wants to take ownership of it, we would consider it, but one of the things I focus on as a maintainer of Minikube is keeping it maintainable for myself and my team, because we're a small team.
And that option seems like something that would be challenging to support. If we promised that option, I would want to see the implementer of it; that would give me assurance that it's going to be easy to maintain. But I do think EFI is the future of bootloaders, for Minikube at least. So we have delivered that for ARM64, and for x86 we continue with the BIOS bootloader for now. Yeah, good question. Thank you. Go for it.

Thanks, everyone, for coming. We're a couple of minutes over, so we'll stop there, but if you have additional questions, I'm sure Medya will stick around a little bit.

Sounds good. Thank you very much, everybody. Thank you.