Hi, this session is presented by the Kubernetes VMware User Group. This group is about running all forms of Kubernetes on the VMware infrastructure platforms, from a user's perspective. I'm Steve Wong, co-chair of the user group. I work on Kubernetes and a few other open source projects as an employee of VMware. Today I'm joined by Miles, the other co-chair. Miles, I'll let you introduce yourself and take over.

Okay, thanks for that, Steve. Before we get into it, let's have a look through the agenda. The first thing we're going to do is give an overview of the vSphere cloud provider and the related storage plugins: that is, the out-of-tree vSphere cloud provider and the CSI driver that we launched about a year and a half ago now. We're going to cover recent changes and features, as well as the current architecture of the CSI driver and how we're planning to handle migration from the old in-tree VCP to the new CSI driver. Then I'm going to hand it over to Steve, and Steve is going to talk about the new Kubernetes features that are built into VMware's desktop hypervisors. He's going to close with how you can get involved with the VMware user group and meet other users of VMware technologies who are running containers and Kubernetes, to share experiences and advice with each other.

So let's start off with the vSphere cloud provider and what it does. The vSphere cloud provider is what makes Kubernetes aware of the cloud underneath it. It's the glue layer, if you want to think about it that way, between Kubernetes and the infrastructure. That means Kubernetes doesn't have to be opinionated or have built-in logic about how to interact with the underlying platform; it just knows how to do that via these cloud provider plugins.

There were two different models to begin with. First there were the in-tree cloud providers, which means the code for supporting a new cloud was added to the Kubernetes code base itself. It wasn't really a plugin; it was a plugin from the API perspective, but it lived directly in the core code. So if you installed Kubernetes, you had that cloud provider installed by default, because it was part of the core code. That led to a lot of challenges, particularly around code bloat in the core Kubernetes repo and security vulnerabilities that weren't getting patched. If there was a bug in one of those cloud provider plugins, you had to upgrade your entire Kubernetes distribution to patch it; you couldn't just patch the plugin, because it was part of the core code. So there were a lot of drawbacks to embedding them in Kubernetes.

The community came together and decided to move all of this out of tree into discrete plugins that you have to explicitly install. So we built the vSphere cloud provider, which is the out-of-tree version of that. It supports things like telling Kubernetes how to interact with the vSphere infrastructure underneath, how to tag particular VMs, and how to figure out which availability zone a given host is in on the vSphere side and how that correlates to Kubernetes topology. It also provides a lot of the infrastructure that we reuse in the CSI driver around how the plugin talks to vCenter itself.
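As a concrete illustration of that topology piece (a minimal sketch, not something shown in the session), once the out-of-tree CPI has initialized the nodes you can check the zone and region labels it applied. The exact label keys depend on your Kubernetes and cloud provider versions; older clusters use the failure-domain.beta.kubernetes.io/* keys instead.

# Nodes started with --cloud-provider=external carry the
# node.cloudprovider.kubernetes.io/uninitialized taint until a cloud provider
# initializes them, so checking labels and taints is a quick CPI health check.
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
kubectl describe node <node-name> | grep -i taint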
There are a bunch of primitives supported by the Kubernetes cloud provider spec that we don't support out of the box with the vSphere cloud provider, things like load balancers, routes, and that kind of thing, because we have a separate CNI that handles those. If you want more details, go and have a look at the Cloud Provider vSphere SIG site there on the slide.

I'm going to talk to you now about storage on vSphere as it pertains to Kubernetes. We're going to look at the VCP, which is the old in-tree way of doing things. We talked about the CPI just now; by the way, CPI is what we're calling the cloud provider interface, or the vSphere cloud provider, and CSI is the Container Storage Interface, which is what we call our new out-of-tree version of the storage driver.

The VCP, to give you an architectural look at things, was what I was just talking about: the thing that was built into Kubernetes natively. You can see that here; it's just a core part of Kubernetes. You install Kubernetes, you get the VCP. It enabled dynamic provisioning of volumes in Kubernetes, which is pretty much a must-have whenever you're trying to deploy stateful apps on top of Kubernetes; you need something that does dynamic provisioning of storage. And it offered data service granularity at an SPBM policy level. So if you created a storage class within Kubernetes, it mapped one-to-one to a storage policy in vCenter. A storage policy is a vSphere concept that is essentially identical to a storage class, so it's very convenient; we just mapped one to the other.

But we talked about the drawbacks before. If there was a bug in the VCP or a problem with it, you had to upgrade your entire Kubernetes distribution. Say you're on 1.11 (the VCP is quite old now) and there was a bug in the VCP that we fixed in 1.12 and pushed into the upstream repo. You would have to move your entire Kubernetes distribution from 1.11 to 1.12, a whole release jump, rather than just being able to update the VCP itself. So there were some drawbacks there.

With vSphere 6.7 U3, we introduced two things: the CNS control plane inside of vSphere, and a CSI driver as well. The CSI driver took the logic that was in the VCP and moved it out of tree. That means you can rev the CSI driver without having to upgrade your Kubernetes. Again, if there's a new feature or a bug fix in a new version of the CSI driver, you just update CSI; you don't have to update your Kubernetes version. You can see that here; I've tried to highlight it by showing that CSI is just another pod. It's another workload that runs inside your Kubernetes cluster, and it uses this new CNS control plane that we built into vSphere.

The old way of doing things with the VCP was kind of hacky if you look at how it actually provisioned volumes. Basically, if you requested a 100 gig volume, for example, it would create a VM with no resources attached to it, but with a 100 gig VMDK attached. The reason it did that was because there was no primitive to just deploy a disk. So it would deploy a VM with a disk attached, detach the disk, throw away the VM, and then re-attach the disk wherever it was needed. Obviously, that's a lot of overhead that really didn't need to happen. So we fixed all of that with CNS and CSI.
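Before moving on, to make that one-to-one storage class to SPBM policy mapping concrete, here's a minimal sketch of what an in-tree VCP storage class looked like, assuming an SPBM policy named "gold" already exists in vCenter; the policy and datastore names are illustrative.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-gold
provisioner: kubernetes.io/vsphere-volume   # the in-tree VCP provisioner
parameters:
  storagePolicyName: "gold"                 # maps one-to-one to an SPBM policy in vCenter
  datastore: "vsanDatastore"                # optional; illustrative datastore name

A PVC that references a class like this is what triggered the dummy-VM provisioning dance just described, which is exactly the overhead CNS was built to remove.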
So when we actually made this a product in 6.7 U3, the new CNS control plane used a backend storage provisioning type in vCenter called first class disks. You can go and Google first class disks; there's a good blog by Cormac Hogan on that. It's essentially a global catalog for disks within vCenter, and it allowed individual creation of disks without having to have a VM that managed or looked after them. So we introduced that, and it got rid of a lot of the technical debt that we had in the VCP. It's essentially a ground-up rewrite of the VCP from a storage perspective, and it does the same stuff: it allows you to do dynamic provisioning of Kubernetes volumes, which is, again, very important whenever you're deploying stateful workloads on top of Kubernetes.

But it also added some really nice UI elements in vCenter for cloud native storage. There's a whole new UI in vSphere for cloud native storage, so for any of your Kubernetes clusters and any of your applications that are deployed on Kubernetes on top of vSphere, because they're provisioned via CNS, we expose all that information straight into the UI. In the example you can see that there is a MongoDB primary node mounting this particular volume, and all that information is directly accessible to your vSphere administrators. They get all of that rich context from the application layer down at the infrastructure layer to more effectively help you troubleshoot things. For example, we use the same PVC names as you have in Kubernetes to identify the volumes in vSphere; anything we could do to lower the boundary between the two teams so that they speak the same sort of language.

That was 6.7 U3. We've had two big releases since then, 7.0 and 7.0 U1, and we added a whole bunch of stuff to CNS and CSI in those. The first thing is that we now offer file-based persistent volumes if you're using vSAN as your backend storage. So if you need ReadWriteMany volume types for Kubernetes, you can do that now if you have vSAN storage; it will automatically provision NFS file shares and mount them into the appropriate containers for you. We also added support for all types of vSphere storage for block volumes. So if you only need ReadWriteOnce, you can now use that with vSAN, vVols, VMFS, and NFS datastores, in other words any storage type supported by vSphere.

Additionally, we added a few other bits and pieces. We have offline volume resize support. There is support for snapshotting disks via Velero with our VADP plugin, so we can do atomic snapshots at the infrastructure layer of volumes that are being backed up by Velero. And there's per-persistent-volume encryption as well: you can assign a storage policy that has encryption tied to it, and it will encrypt every disk by default for you. We also added support for exporting metrics from ESXi and vSAN into Prometheus. So if you're using Prometheus for monitoring, you can scrape the metrics from the ESXi hosts directly into Prometheus, do autoscaling or whatever it is you want to do there, or maybe just build fancy Grafana dashboards.
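As a rough sketch of what consuming this looks like from the Kubernetes side (hedged, since parameter names can vary slightly between CSI driver releases), here's a storage class pointing at the vSphere CSI driver plus a ReadWriteMany claim that would be satisfied by an automatically provisioned vSAN file share; the policy name is illustrative.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file
provisioner: csi.vsphere.vmware.com                  # the out-of-tree vSphere CSI driver
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # illustrative SPBM policy name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # file volume, backed by an NFS share on vSAN
  storageClassName: vsan-file
  resources:
    requests:
      storage: 10Gi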
So I think we all sort of know how this works, but we're going to go through it here, because this is the important bit when it comes to CNS and CSI: how volume provisioning works automatically. Everything in orange here is vSphere; everything in blue is Kubernetes. The first thing is that you or your administrator creates a storage policy within vCenter that defines things like RAID level, quality of service, all that kind of stuff. Then in Kubernetes you create a storage class that points to that SPBM policy; it's just a name mapping, really simple. And then you've got CSI, which actually ties the two things together. So you come along and deploy your Cassandra app. It requests a volume from a particular storage class. The storage class names the vSphere CSI driver as its provisioner, so the request is handed to our CSI driver. The driver looks up the vCenter connection information and connects to the correct vCenter CNS instance. CNS looks at the request and says, okay, they've asked for a 100 gig volume with a RAID 1 policy applied to it, hands that off to the storage layer, which creates two copies of it for RAID 1, and then passes that back up to the appropriate worker node, which gives it a file system and mounts it into the container. So it's fully automated end to end, once you have created your SPBM policy and storage class, which you only need to do once.

Now, the thing most of you are probably wondering about is: if I have this old VCP driver, how do I migrate to the new CSI driver? That's a very good question. This is something we've been actively working on in the background. It's not fully baked yet, but we do have beta support for it as of Kubernetes 1.19 and vSphere 7.0 U1. So if you're on an infrastructure level of vSphere 7.0 U1 or above, whenever you're watching this, and you have Kubernetes 1.19, you can enable migration from the VCP to CSI. There is documentation up on the GitHub site, so if you want more info on how this actually works or how to do it, go have a look there.

Essentially, whenever you enable this at the kubelet level on each of your Kubernetes nodes, it creates a webhook for any calls that would go to the VCP and redirects them to the CSI driver. The CSI driver then takes those calls and just translates them into CNS calls. It does that for any newly created volumes. In addition, you can add an annotation to your PVCs that says: migrate these volumes, or this particular volume. In the background, because, like I said, we added this new first class disk (FCD) concept, we need to do a disk migration at the vSphere layer. Whenever you add that annotation, it will go and unschedule the pod that's attached to the volume, move the VMDK from being a standard VMDK to being a first class disk, and then mount it back into the pod when it spins back up again. So it does the backend data migration for you as well. Like I say, this will also require a newer CSI driver, because the current CSI driver that's out there doesn't have this fully baked in, but if you want to try it in beta, it's in a patch release.
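For context on what "enable this at the kubelet level" typically means, here's a hedged sketch using the upstream feature gates; the exact, supported procedure for your versions is in the driver's migration documentation, so treat this as illustrative only.

# Set on each node's kubelet and on the kube-controller-manager, then restart them.
--feature-gates=CSIMigration=true,CSIMigrationvSphere=true

# Illustrative check that a node now advertises the vSphere CSI driver:
kubectl get csinode <node-name> -o yaml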
The projected cutoff date for the in-tree cloud provider is supposed to be Kubernetes 1.21. It was said to be 1.16, then 1.18; the goalposts keep moving. But at this point we have a built migration path that is in beta, so I would be fairly confident that 1.21 is a good prediction for when the VCP driver will be removed from the core Kubernetes code base. I would imagine that's sort of our cutoff date there. The out-of-tree driver, the CSI driver, is already recommended for all new installations. There is no reason you should be using the VCP, other than it being the only thing your distribution supports. But you should be able to install custom CSI drivers into whatever distribution of Kubernetes you're using, so if you can use the CSI driver from the get-go, it will just make your life a lot easier. As for older vSphere versions: the VCP supported 6.5 onward, but because of the new core code changes we had to make at the infrastructure layer, that support is going away, and you'll have to be on 6.7 U3 or above to use the CSI driver. So that's sort of that covered.

I'm going to have a little look now at the recent changes and what they were. There wasn't a whole lot since the last time we did this at KubeCon Europe, so there are just two bug fixes listed here. We fixed a bug when discovering the VM IP address by subnet or by network name, and we fixed a bug where the node may be prematurely deleted if VMware Tools is slow to report its hostname. Aside from that, not a whole lot, because we are still developing some of the big major functionality that we're going to push out later in the year. So at this point, I'm going to hand it back to Steve.

Thanks, Miles. I'm going to move on from the vSphere hypervisor to using desktop hypervisors to host Kubernetes clusters. Why run Kubernetes on your laptop? Well, yes, for production workloads, hosting in a public cloud or an on-premises data center is recommended, but a local dev cluster is useful in some circumstances. When you want a learning environment and you already have a computer, it may be cheaper than paying for cloud hosting. When your internet connection is limited, it might be your only option. When you want rapid turnaround, as in the early stages of development workflows, a local dev cluster can be useful, but you need to be careful about how this fits with your CI/CD tooling.

These are the options for Kubernetes in a desktop hypervisor: first, kind; second, minikube; and third, running a lightweight distribution, which could be K3s, MicroK8s, upstream kubeadm, or something else, inside a VM. A desktop hypervisor, whether it's Workstation, Fusion, VirtualBox, or something else, can support all of these variants. And if you're running Linux or a Docker platform, maybe you can avoid using a general purpose hypervisor altogether.

kind stands for Kubernetes in Docker, and like the name implies, it runs the various Kubernetes subsystems inside Docker. It is lighter weight than the other options, at the expense of being less like the clusters you would deploy in a cloud. kind's lighter weight tends to lead to a lifecycle of very short-lived clusters: it's easier to tear them down and rebuild them as you need them. That might be more reliable for a local dev cluster use case, because you aren't tempted to postpone patches as much as you are with some of the other variants, and maybe you can afford to test against multiple versions of Kubernetes.

Today I'm going to demo kind in the Workstation hypervisor, but see the KubeCon Europe recording, which is linked here, for instructions on running minikube in a desktop hypervisor as an alternative. I'm intentionally doing this demo on Windows for two reasons: first, it's a little less straightforward, and second, if you have a Windows laptop, I think you might be a little more in need of an assist from what you might learn from a demo.
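Since the demo is going to use kind, here's a minimal sketch of an optional kind cluster config in case you want to customize yours when following along; kind create cluster with no config at all works fine, so this is only useful for things like a multi-node local cluster, and the names here are illustrative.

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker            # add more workers for a bigger local cluster

# then:
#   kind create cluster --name dev --config kind-config.yaml
#   kind delete cluster --name dev     # tear it down when you're done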
This same thing will work on Linux or Mac, although a few of the install steps will vary a bit; they're close enough that I'm pretty confident you'd figure it out. About a month ago, as I'm recording this, VMware released a new version of the Workstation and Fusion hypervisors for Windows, Linux, and Mac. The key new feature that was added for a Kubernetes audience is built-in support for running containers and Kubernetes clusters with a new CLI called vctl. Now, you've always been able to run Kubernetes in Linux VMs, and minikube has been supported for years, but now you can directly deploy a kind cluster using the CLI, and I'm about to show you how this works on Windows.

I've pre-staged some of the necessary steps, because you've seen installs before, but obviously you start by downloading the installer binary for the desktop hypervisor and installing it. The kubectl CLI is not bundled in with the hypervisor, so you need to install that independently; it's just a standard Kubernetes component that's well documented in the Kubernetes docs. If you're on Windows and a Kubernetes newbie, I'd recommend the option of using Chocolatey as an easy way to do it, with a one-line cut-and-paste into an admin command line, and you can see that command line right here. The other thing I've pre-staged is the installation of the Tekton CLI, and the reason I've done this is because I'm going to host the Tekton CI/CD platform on the Kubernetes cluster that I'm about to deploy.

So I'm going to cut to a demo screen. If you want to try this at home later, the published deck includes slides that detail all of these steps, with the exact entry that you need to type on your command line. So download the deck, and you can cut and paste commands from it later.

The new container and kind feature is based on technology similar to what went into vSphere 7 for hosting containers within a lightweight, rapidly staged VM. kind is a binary produced by the Kubernetes project for deploying Kubernetes components into a Docker runtime. With Workstation 16, what you've got is a version of kind that thinks it's talking to a standard container runtime interface like the one used by Docker, but it is hooked up to the desktop hypervisor. The desktop hypervisors ship with a CLI, vctl, that is intentionally similar to the Docker CLI you're already familiar with.

Let me show you this by typing in vctl with no parameters. You get usage guidance, and what you see is the familiar command options of build, push, pull, run, tag, etc., which I'm sure you're familiar with from Docker. Let me prove it by doing a Docker-style, or rather a vctl, pull of the classic hello-world container. I know what you're saying: wow, Steve messed up the demo already. But trust me, I did this on purpose. The error tells you what to do to fix the issue: the container engine needs to be started. This is intentional, so that when you don't need the container runtime it can be completely shut down, something you might appreciate when you need the most out of your laptop's memory, CPU, or battery.

So now I'll type in that vctl system start that the error message suggested. After a little while, we're going to see that a container runtime is available. We can type in vctl system info, and this shows what is available to the container runtime if it's needed. This is configurable, and you can see the output explains what the current settings are and where the config file and the log live. Now that the container runtime is available, let's try that Docker hello-world exercise again. First I'm going to pull the image, and then I'll run it.
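For reference when trying this at home, here's a hedged recap of the commands from this part of the demo; exact flags and output vary by vctl version, so check the built-in help if anything differs.

vctl system start                    # start the container runtime that backs vctl
vctl system info                     # show current settings, config file, and log locations
vctl pull hello-world                # pull the classic hello-world image
vctl run --name hello hello-world    # the --name flag is an assumption; see 'vctl run --help'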
What we want is full Kubernetes, not just Docker, so let's deploy a kind cluster. The instruction for that is simply vctl kind. A new window popped up, and this window is set up to use kind. I'm going to make it a little bigger here. After maximizing this window, I'm going to do a kind create cluster. This is going to take a few minutes, probably more than one minute and less than five. The time depends on the speed of your system, as well as the speed of your network connection. In the interest of time on this recorded demo, I'm going to cut the elapsed time down, but expect it to take between one and five minutes.

Now that that's finished, we should have a cluster, but let's prove it by trying a few standard kind, Docker, and Kubernetes commands. First we'll try vctl images, the equivalent of docker images. You can see that we have that hello-world image we pulled, as well as a new one in the list associated with kind. Next we'll try kubectl version, and we'll see the version of the Kubernetes cluster as well as the kubectl client. You'll notice in the last decimal point they're a little off, but they're both version 1.18, so that's fine. Next let's use kubectl to look at what's lurking in the kube-system namespace, and we see a lot of Kubernetes components that are self-hosted in the cluster. We can also do a kubectl cluster-info to take a look at what we've got. The kind binary will also show us what it thinks it has deployed as far as clusters: we have one cluster with the default name of kind. We'll use kubectl get nodes. We can use vctl ps to look at running containers. And finally we'll use the CLI for the hypervisor to see what has happened with regard to running VMs: we've got one VM running.

So at this point we have a standard Kubernetes cluster with standard functionality. Nice, but for that developer workflow I talked about, it's a foundation, not yet a complete toolkit. What about CI/CD? Well, Tekton is a popular open source framework for driving CI/CD by hosting steps and pipelines in a portable way that takes advantage of Kubernetes. The goal is that you can build, test, and deploy anywhere, and your CI/CD pipeline can't tell the difference between being run on my laptop and being run in a public cloud. The result should be the same anywhere, with no snowflake build steps going on. We can use our kind cluster to host the additional tools we need for a Tekton framework.

This isn't going to be a talk on Tekton, so I'm just going to get started, but let me at least show you how you can use a desktop hypervisor as part of your CI/CD process. We'll start by installing Tekton into our Kubernetes cluster with kubectl; I'm going to download and apply the YAML from the Tekton open source project. So there we have kubectl applying the YAML downloaded from the Tekton project, and you can see a bunch of stuff is going on. Let's take a look: kubectl get pods, all namespaces, and I'm going to predict we're going to see a couple of Tekton pods. They're not in the ready state yet, but they've been deployed, so we'll wait for those to start up. So now we've got Tekton running. Tekton models CI/CD steps as something it calls tasks; it's a framework for running a CI/CD process on your choice of Kubernetes. Rather than run a full build-test-deploy cycle, in the interest of time I'm going to run a Tekton hello world example that pretends that emitting hello world is a step that is part of your build process.
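Along those lines, here's a minimal hedged sketch of a Tekton Task and TaskRun that just echoes hello world; the names are illustrative, and the install URL pattern in the comment should be checked against the current Tekton release docs.

# Install Tekton Pipelines (illustrative URL; see the Tekton docs for the current release):
#   kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: ubuntu          # any image with a shell will do
      script: |
        echo "hello world"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-world-run
spec:
  taskRef:
    name: echo-hello-world

# Watch it with the Tekton CLI:
#   tkn taskrun describe echo-hello-world-run
#   tkn taskrun logs echo-hello-world-run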
I'm using a blog post from Brian McLean that I'm going to link in the deck. So let me deploy from the YAML in Brian's GitHub. And we've deployed a Tekton step that emulates a build step. I'm going to use the Tekton CLI describe command to examine that build step, and you can see that it started and has a status of running. So I'm going to try again to look at the state of that Tekton step, and you can see now that it had a duration of two minutes and a status of succeeded. And I'll see what's in the logs, and you can see the logs show that hello world was echoed.

So this content is brought to you by a Kubernetes user group, and if you're using Kubernetes on vSphere, I invite you to formally join the group. We have meetings each month where we present tutorials and best practices. The agenda is user driven, so members are encouraged to nominate presentation subjects and discussion topics, including discussions of feature requests. We have two user tech leads who aren't here today, Bryson Shepherd and Joe Cersey, who helped us get this group started, and we're looking to grow this group into a diverse set of worldwide users. The group also runs a Slack channel, which is a great place to ask questions. The Kubernetes users channel might be a better place for generic Kubernetes questions, but if you have a vSphere-specific topic to discuss, the VMware users channel is a great place to find other users and also to reach code and documentation contributors.

So the next user group meeting will be December 3 in the North American time zone. You can go to the Kubernetes community calendar to get a conversion to your local time zone and add it to your calendar; use the link in this slide. You become a group member by joining the mailing list using the link shown here. And then finally, there's a link to the group's Slack channel. Miles and I are going to hang around for Q&A. Here's a link that will get you this deck, and these are our GitHub IDs; you can also get in touch with us and the broader user community from inside the group's Slack channel. Thank you, and I hope to see you in a future meeting. At this point, I'm going to turn it back to the CNCF conference moderators.