Kubernetes is not just Kubernetes. There's a lot more going on to make a complete picture. Kubernetes is sort of like raising a kid. We've caught ourselves saying, as soon as they're out of diapers, things will get better. Then we catch ourselves saying, as soon as they're in school, things will get better. The same thing can be said about Kubernetes: as soon as I figure out how to create a Kubernetes cluster, then I can work on networking. And after networking, I can tackle application deployment. And after that, I'll figure out how to back it up and restore it. You get it. It's a constant evolution of hurdles and milestones. So what we're going to do is look at Kubernetes and everything connected to it, and map it to open source projects that VMware is actively contributing to and using. VMware is also contributing back to the core of Kubernetes itself. We're all in this together to create a better experience for everyone.

All right. So you've got your first Kubernetes cluster up and running. Now your manager is saying, we need you to do that 20 more times, one for each developer. What's your next move? Well, you could start sweating and cancel any beach vacation plans you had for the next few weekends, or you can look at Cluster API. It's the upstream standard for declarative lifecycle management of Kubernetes clusters. Why do things the hard way when there's a much easier way? Cluster API uses a desired-state configuration to determine what a cluster should look like. Here's an example. Say that I want a Kubernetes cluster for my own development purposes that has three control plane nodes and three workers. I put this into a specification that Cluster API knows how to read, and I give that to my management cluster. Now, I didn't talk about what a management cluster does, because we only have 15 minutes here. Just know that the management cluster is responsible for managing and deploying all other Kubernetes clusters. Yes, you heard that right.
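As a rough sketch, a specification for that cluster, three control plane nodes and three workers, might look something like the following. The API versions, names, and vSphere infrastructure provider are illustrative, and required fields are trimmed for brevity; this is not a complete, applyable manifest.

```yaml
# Illustrative Cluster API objects: a Cluster, a kubeadm-based control
# plane with three replicas, and a MachineDeployment with three workers.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: dev-cluster
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: dev-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: dev-cluster
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dev-cluster-control-plane
spec:
  replicas: 3                 # three control plane nodes
  # machineTemplate and kubeadmConfigSpec omitted for brevity
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: dev-cluster-workers
spec:
  clusterName: dev-cluster
  replicas: 3                 # three worker nodes
  # selector and machine template omitted for brevity
```

Hand objects like these to the management cluster, and the provisioning automation takes it from there.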
We can do this again and again and again. So I give the specification to my management cluster, and it will take over from there. New virtual machines will be created, and before I know it, the cluster is up and running and ready for the next instructions. All this happened because I specified one file describing what I want the cluster to look like, and from there, a bunch of cool automation talked to my infrastructure to create new machines. Oh yeah, this is also the tool that will take care of cluster upgrades. That's a pretty important feature we can't forget.

So what does that actually mean to VMware? Well, I'll show you this fancy table so you can see exactly what's happening in open source and how we're using those projects. Cluster API is the building block for how VMware provisions Kubernetes clusters as part of Tanzu Kubernetes Grid. It's all the benefits of Cluster API, but wrapped in a tighter bundle with more abstraction to make deployments a breeze. Let's see a demo. Here's an example of a Kubernetes cluster that has one control plane node and three worker nodes. I have control over the size of the virtual machines as well as resource guarantees. The great thing is that all of the hard stuff has been abstracted away, and I can focus on applications instead of spending time building infrastructure. I apply this YAML spec as I would a Kubernetes deployment, and Cluster API is responsible for provisioning the machines and using kubeadm to join them into a cluster. At the end, I'm left with a cluster that is pure upstream Kubernetes, and I can fire away with anything I want to deploy.

Now that the cluster is running... or is it actually running? One of the most critical pieces of a Kubernetes cluster is how pods talk to each other, and this is accomplished through the Container Network Interface (CNI). And you guessed it, VMware has an open source solution for that too.
Antrea was built as a Kubernetes-native project that provides network connectivity and security for pods. It has some standout features, such as a comprehensive, enterprise-grade network policy model, visibility and insights into the network, and improved network performance, not just through Open vSwitch (OVS), but also through the enablement of SmartNIC hardware offload. By default, every Tanzu Kubernetes cluster is automatically configured with Antrea, and we can see its running pods in the kube-system namespace. No extra administration steps, just a new cluster ready to start accepting application deployments. And when clusters are lightly loaded and running smoothly, life is good. But what happens when contention occurs and resources get constrained? Antrea delivers new functionality by providing insights into network flows in terms of Kubernetes objects and namespaces. Here's an Antrea flow capture labeled with source and destination pod names and namespaces, in other words, in terms of Kubernetes objects and not just IP addresses. And we can move on to comparative flow stats in terms of services and consumers, once again from a Kubernetes object perspective. Back to the fancy table: Antrea is the default CNI deployed with Tanzu Kubernetes Grid. There is no post-cluster-creation deployment necessary.

All right, now we're ready for our first application deployment. Do you know where the images you're deploying originate? Like when you say use image nginx, it actually pulls down from a public registry. And you're probably moving beyond public images at this point, and you want your own custom or modified apps. It's time to manage your own container registry. This way you can take care of authentication and authorization, as well as add more layers of security. Well, say hello to Harbor. Harbor is the only container registry in the Cloud Native Computing Foundation (CNCF) that has reached graduated, stable status.
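To make that concrete, pointing a workload at your own registry is mostly a matter of the image reference plus a pull secret. Here's a minimal sketch, where the Harbor hostname, project, and secret name are all made up for illustration:

```yaml
# Pulling from a private Harbor registry instead of a public one:
# change the image reference and supply registry credentials.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: harbor.example.com/team-a/nginx:1.19   # your registry, not docker.io
      imagePullSecrets:
      - name: harbor-creds   # docker-registry secret holding Harbor credentials
```

The secret would be created ahead of time, for example with `kubectl create secret docker-registry harbor-creds ...` pointed at your registry.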
Over the years, we have listened to what you want in managing your own container registry. It's the most feature-rich and widely used container registry within the CNCF landscape, with features like multi-tenant isolation, role-based access control, image vulnerability scanning, and an extensible API. It's no wonder thousands of users trust Harbor with their container images, and why this is a graduated project within the CNCF. Here we see the administration interface with lots of knobs to turn. Not only is there image vulnerability scanning, but Harbor has the ability to add third-party scanners. This will show the health of the images we have running in our clusters right now. We can take this a step further and replicate from Harbor not just to other Harbor instances, but to different registries. Replication rules can be created that determine whether Harbor will push or pull images, and we can specify whether it's only a single image, one repository, or the whole thing. We can also set project quotas that enforce the amount of space that can be used by images. There are tag retention policies, online garbage collection, and much more. Check it out after this video at goharbor.io. So yep, VMware's got you covered there as well.

All right, now it's time to deploy some apps. We've got a container registry, but what if we want even more apps? How about a bunch of open source applications? Kubeapps is a web-based UI for deploying and managing applications in Kubernetes clusters. Browse Helm charts and Kubernetes operators from public and private registries, while also performing day-two operations like upgrades or rollbacks. VMware wraps this all up with enterprise support in the VMware Tanzu Application Catalog. So check this out, the process is super simple. The first thing we want to do is design our catalog, which maps to a private repository for our organization. Then we select whether we want a container or Helm chart repository.
We choose our base OS, then we link this to our image registry, such as Project Harbor. Last, we decide what apps we want. It's as simple as ticking checkboxes for different open source projects and then hitting submit. From there, all the images will be pushed to your registry, and it doesn't just stop there. Not only can these images be inspected, but any time there's a new version upstream, it will be synced and tagged appropriately. No more guessing whether you have the latest version. Apps, who would have thought? VMware does that as well.

Now that our Kubernetes deployments are getting bigger, we need to think about how we can continue to scale without burning up more IP addresses. This is where we need a Layer 7 load balancer. This operates at the application layer and deals with the content of a message, versus the transport layer (Layer 4), which only sees things like addresses and ports. Layer 7 makes smarter load balancing decisions and can even increase your application's performance. So let's take a look at another open source project from VMware called Contour. This is a high-performance ingress controller for Kubernetes. It functions as a control plane for Envoy, which is another graduated and widely trusted CNCF project. With features like dynamic reconfiguration and secure ingress delegation, it's your answer to having a feature-rich Kubernetes ingress controller. Contour is all about routing based on the application endpoint. In this example, we have delegated the root domain to the namespace called team A, meaning that any path, such as /demo or /example, will default to the root IngressRoute, which has an application running on port 1112. After that IngressRoute is applied, the root domain becomes active and valid. Now, navigating to this load balancer IP address gets me to the desired application endpoint, which maps back to the container running on port 1112.
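A sketch of what that root object might look like, using Contour's original IngressRoute CRD; the domain, service names, and delegation target are illustrative, not taken from the demo:

```yaml
# Root IngressRoute owned by the team-a namespace. Unmatched paths fall
# through to the root route, which fronts the app listening on port 1112;
# /demo is delegated to a separate route in the same namespace.
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: root
  namespace: team-a
spec:
  virtualhost:
    fqdn: demo.example.com      # the "root domain" from the demo
  routes:
  - match: /
    services:
    - name: demo-app
      port: 1112
  - match: /demo
    delegate:
      name: demo-route          # a child IngressRoute handles this path
      namespace: team-a
```

Delegation is the interesting bit: an administrator can own the virtual host while handing individual paths off to routes that teams manage themselves. In newer Contour releases the same idea lives on in the HTTPProxy CRD with inclusion.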
Taking this a step further, I can do easy blue-green testing by creating two pods and assigning them the labels blue and green. Within the root IngressRoute, I specify the route match to be either blue or green, apply the YAML, and reload the page to see version 2 of my application. Project Contour is in the incubating stage within the Cloud Native Computing Foundation, so pay real close attention to this one.

All right, now we're cooking. We've got our applications running, scaling, and servicing lots of users. But as a developer, I may want to see my application dependencies outside of the command line. Sure, it's nice to run a few commands to see my services linked to pods that are linked to persistent volumes, but how can I see this visually? Welcome Project Octant to the party. Octant is an open source, developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications. Octant provides a visual interface for managing Kubernetes that complements and extends existing tools like kubectl and Kustomize. Use Octant to debug your application or filter what's happening in the cluster using labels. Octant also provides plug-in extensibility to make it look like whatever you want. So here's Octant. It shows everything running in a cluster. The first thing to do is select the Kubernetes namespace, which will populate the deployments, pods, services, config maps, I mean, you name it. Everything in a namespace. So we click on deployments to get a deeper view, and we can see the labels, but we can also scale replicas directly from the UI. Here are the corresponding pods and their template, and we can even edit the version of the image on the fly. By doing that, the event log kicks off and a rolling upgrade begins. All right, if you're not impressed yet, what about seeing relationship mapping?
This allows me as a developer to see how my deployment is connected to pods, load balancers, ingress routes, persistent volumes, secrets, and everything else. It's an easy visualization tool run directly from my desktop. Cool, another VMware open source project.

But what's the one thing we always forget about until it's too late? Womp, womp, it's backup and restore. When containers first became popular, no one cared about backing them up because they were stateless. But things progressed, and now we have databases and other applications that require persistent volumes. Containers still rock in situations like these, because you get all the benefits of a container, but the storage is decoupled, making it super portable. Lights, camera, Velero. Velero is an open source project from VMware that backs up and restores, performs disaster recovery, and migrates Kubernetes cluster resources and persistent volumes. It's so simple. Point Velero at a pod, a deployment, or an entire cluster, set a schedule for when backups should be taken, and rest easy knowing that if something happens to your cluster, a restoration is only a single command away. Do you want to get notified in Slack whenever a backup is complete? You can do that too with the new Velero webhooks. For this example, let's use WordPress, the most widely used blogging platform on the internet. It uses a MySQL backend, and both services require persistent volumes. We can see that Steve's awesome blog is running and we've got our first post published. Let's create a backup of everything in the default namespace. Depending on the storage environment, different processes will occur. If you're on AWS, EBS snapshots are created. If you're on vSphere, the volumes will be copied to an S3-compatible object store. Again, it depends on your environment. And now let's replicate a scenario that none of us ever wants to experience: we will delete the deployment, the services, and all the persistent volumes. Poof. It's gone.
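For reference, the backup we just took can also be expressed declaratively as Velero's Backup resource, roughly the equivalent of running `velero backup create wordpress-backup --include-namespaces default`. The backup name here is made up:

```yaml
# Back up everything in the default namespace, including volume
# snapshots for the WordPress and MySQL persistent volumes.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: wordpress-backup
  namespace: velero          # Backup objects live in Velero's namespace
spec:
  includedNamespaces:
  - default
  snapshotVolumes: true      # EBS snapshots on AWS, copied volumes on vSphere
```

How the volume data is actually captured depends on the storage plugin configured for your environment, as described above.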
Well, don't start sweating yet. We've got this. So let's go to a completely different Kubernetes cluster and use Velero to restore the backup from the storage location. After just a few seconds, our deployments, services, and persistent volumes are all running once again. We can refresh our blog, and everything is back to normal. Crisis averted. Let's put Velero on the board.

All right. Let's imagine we are six months into using Kubernetes in production. Everything is going great, and we just want to make sure things continue to stay great. To make that happen, we want to do some health checks, just to make sure someone didn't install something that would make our Kubernetes cluster nonconformant. What is nonconformant? Well, I'm glad you asked. You see, the upstream, open source version of Kubernetes is really the gold standard. That's what all applications and integrations are tested against. If you start manipulating the cluster to the point where it basically becomes a fork, then you've got a snowflake, and it's going to be impossible to determine whether new applications or integrations will actually work. Oh hey, there's Sonobuoy. Sonobuoy is a tool that does just that: it runs conformance tests against the cluster to make sure things are healthy, and you can rest easy knowing that it's not snowing in your data center. So we've got this for one cluster, but how do we do it for many clusters, or across any cloud? Enter Tanzu Mission Control. Here's the Tanzu Mission Control web interface. We can create Tanzu Kubernetes clusters directly on AWS and vSphere, but we can also attach any Kubernetes cluster running anywhere, not just for visibility, but also for control. I can also perform upgrades and scale clusters directly from here too. Tanzu Mission Control has a number of policy types available, including access, image, network, security, and quota, and even our own custom policies, all built on top of Open Policy Agent.
These policies can be applied to multiple clusters, and everything is centrally managed. By drilling into a single cluster, I can see its overall health, and I can take it a step further in the inspections tab. And you see those green checkboxes? That's a Sonobuoy conformance check made visible to you. We can even run CIS benchmarks, and we can set Velero backup schedules from the data protection tab too.

Whoa, that's a lot so far, but we did it. And to be fair, there's still more to tackle, but we can't cover it all in this short video. So here are the other open source projects that relate to Kubernetes and application development that VMware is actively contributing to, and you should go check them out. Well, that's all for today, folks. Find out more at tanzu.vmware.com slash open source.