Sometimes it can be tough to control cabin fever, especially if you live in a cabin, but today we're going to learn how to control cabin fever with Kubernetes. So let's go, chop, chop. Welcome to my little cabin in the mountains outside of Boulder, Colorado. Today we are here to talk about how to contain your cabin fever with Kubernetes. I often get the question, why are containers such a hot topic? It really ties back to digital transformation. As customers look to innovate in their business and create new business opportunities through software, containers are the most effective and efficient way to develop and deploy software today. Containers make this very easy because they're lightweight and portable, enabling you to create safe development environments for your application developers and scalable deployments for your production environments, both on-premises and in public clouds. In fact, NetApp has been in the container business for a long time. We released the very first plug-in for Docker back in 2015. We provided the first external storage provisioner for Kubernetes with Trident. We provide full commercial support for customers with support contracts. We're a major contributor to CSI, the Container Storage Interface, and we were the first to provide advanced data management features like cloning.

Let's start our deep dive into containers with Project Astra. Cheyenne, you're our specialist for all things Astra. Can you tell us a bit more about the project and what it's all about?

Sure. Thanks, Ingo. Why Project Astra? At NetApp, we know that Kubernetes is the platform on which enterprises will run next-generation workloads, and even some traditional ones. As customers wrote new apps or moved existing apps to Kubernetes, the biggest problem was the ability to consume high-performing persistent storage.
At NetApp, we solved that problem with our open-source Trident offering, which allows applications running on Kubernetes clusters to seamlessly access persistent storage from the NetApp storage portfolio, both on-premises and in the public clouds. Today, Trident is used by hundreds of customers backing thousands of production applications. With the success of Trident as the backdrop, when we talked to our customers, it became clear that we were going down the right path, but there was much more we could do. Primarily, our customers needed three things. First, advanced data management capabilities for their Kubernetes applications, for data protection, disaster recovery, and migration. Second, our customers created a lot of Kubernetes clusters, and they wanted a tool by which they could quickly and easily move their applications from one cluster to another. Third, they wanted NetApp to do all the heavy lifting of managing all of this for them. This led to the conception of Project Astra, with which we are addressing these problems. You now have advanced application-aware data management capabilities including snapshots, backup, restore, clone, audit, and more. You can seamlessly move your applications across Kubernetes clusters, no matter where they're running, and you now have access to a fully managed cloud service with consumption-based pay-as-you-go billing and no software to download, install, and manage. Astra is extremely easy to use and does not require advanced Kubernetes management skills. Simply register your Kubernetes clusters with Astra. Astra will automatically discover the apps running in your Kubernetes clusters. Once discovered, simply choose from a catalog of data management operations like snapshot, clone, backup, etc. that you want to apply to the app, and off you go. To get an even clearer idea of how Astra works, let's see a quick demo from NetApp's own Garrett Muller, head of engineering for Project Astra.

Hi, my name is Garrett Muller and this is Project Astra.
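The consumption path Cheyenne describes starts with an ordinary Kubernetes PersistentVolumeClaim against a Trident-backed storage class. A minimal sketch of such a claim, built as a plain Python dict; the storage class name `ontap-gold` is an assumption for illustration, since the real class names depend on how Trident's backends are configured:

```python
def make_pvc(name, storage_class, size_gi):
    """Build a PersistentVolumeClaim manifest as a plain dict.

    An application submits a claim like this; the storage class routes
    the request to the provisioner (here, a Trident-backed class)."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,  # "ontap-gold" is a hypothetical class name
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("mariadb-data", "ontap-gold", 20)
print(pvc["spec"]["resources"]["requests"]["storage"])  # 20Gi
```

Once the claim is bound, the pod mounts the resulting volume like any other; the application never needs to know which storage backend served it.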
This is the dashboard. You can see that I have a single Kubernetes cluster registered for this demo. I'll need two, so let's register another. I'm going to reuse my Google service account to discover all of the GKE clusters available to Astra. Let's add the one in Austin. Astra knows where the cluster is and what storage is available there. In this case, that's NetApp's native Cloud Volumes Service in Google Cloud. Let's make the standard service level the new default. As soon as the cluster is under management, Astra goes to work. It has already discovered an app, something called TPS Reports. Let's manage it. That tells Astra this is something you care about. Now that Astra is managing it, you can see everything that it knows, like the live state in Kubernetes, the protection status, the Kubernetes resources that make up the app, and the storage backing it. Let's start protecting this app by providing a protection schedule. You can see that the app is only partially protected because a backup hasn't actually been taken yet. Let's go take a look at this app. Hey, it's Initech company headquarters. Nice. Let's kick off an app backup. What does this do? Astra knows that this app is backed by MariaDB. It takes a consistent app snapshot including all of the volume and Kubernetes state. Then it pushes all of that to a bucket for safekeeping. That done, let's go back to our app. Uh-oh. HQ is on fire. At the same time, I went and deleted the running app from Kubernetes. You can see that Astra noticed that the app is no longer running, but that's okay. We have an app backup. No demo tricks here. This app is really dead. Let's get it back. We do that by cloning the application from that backup, but we're going to drop it into our branch office in Sydney, Australia, because our Austin office isn't in very good shape right now. Astra is intelligently reversing that backup process from earlier.
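The backup sequence Garrett describes, taking an app-consistent snapshot of volumes plus Kubernetes state and pushing it to a bucket, can be sketched in Python. Every function and name here is illustrative, not Astra's actual API; the clone-to-another-cluster step simply replays this list in reverse:

```python
def backup_app(app):
    """Return the ordered steps of a hypothetical app-consistent backup."""
    steps = []
    steps.append(f"quiesce {app['db']}")               # pause writes for a consistent point
    for vol in app["volumes"]:
        steps.append(f"snapshot volume {vol}")         # storage-level snapshots
    steps.append("capture Kubernetes resources")       # deployments, services, PVCs, etc.
    steps.append(f"upload artifacts to {app['bucket']}")  # push to a bucket for safekeeping
    steps.append(f"unquiesce {app['db']}")             # resume normal operation
    return steps

plan = backup_app({"db": "mariadb",
                   "volumes": ["data", "logs"],
                   "bucket": "bucket://tps-backups"})
for step in plan:
    print(step)
```

The key design point Garrett calls out is that both halves of the app travel together: restoring the volumes alone would not bring the app back without the Kubernetes resources that reference them.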
Not only is it restoring all of the data in those volumes, it's recreating all of the Kubernetes resources necessary for that application to run in the new cluster. Let's see how we did. Great. The app is back up and running with the state we captured in the backup. But wait, that's not quite right. We're down under now. That's better. Hope you enjoyed it.

You can see how excited we are about Project Astra. What do you think, Ingo?

I think it's amazing to see how simple data management for containerized applications has become with Astra. Thank you, Cheyenne.

Thank you, Ingo.

Now that you have your applications and persistent storage infrastructure deployed and in use, a key topic for day-two operations is optimization of both compute and storage. Here is our resident specialist for this topic. Welcome, Kevin.

Thank you, Ingo. It's great to be here. It's really exciting to be talking about Ocean and extending the Kubernetes story even further, to the compute layer, on top of what Astra is doing. When I talk about the layers of Ocean, this graphic shows what we're doing here. The top layer is all about right-sizing: the pods, the applications, and the workloads that are running on top of the Kubernetes cluster. How do we keep those running as efficiently as possible? The second layer is the Kubernetes scheduler, the container scheduler: how are we making sure the application scales and is right-sized as it needs more and less infrastructure? How is that being orchestrated? Then finally, the third layer, the layer on the bottom, is all about the infrastructure of the cloud, and how efficiently you are provisioning that infrastructure. Are you using the right pricing models? Are you getting the right type of compute? Ocean takes care of all three of these layers when you're deploying workloads to any type of Kubernetes or container solution.
So to move on, what we're going to do is break down how Ocean actually works within a container orchestrator. On the left, you're going to see that there is a control plane, and on the right, there's a data plane. Now, the control plane is kind of the brains of the operation. It's where all the magic happens. It's very small infrastructure-wise, maybe only about three nodes, but that's the decision maker of how things need to run. Now, on the right side, there's the data plane, and these are the worker nodes. The worker nodes can scale from zero to thousands of servers. This is where the major infrastructure cost of Kubernetes, or any type of container orchestrator, is going to be, and this is where Ocean does its magic. Ocean can communicate with GKE, AKS, or a custom Kubernetes cluster. It can communicate with any of these master servers and then intelligently scale the nodes and infrastructure that are needed based on the pods and tasks that need to be scheduled within that cluster. So how does Ocean know how to do this? Well, Ocean is very intelligent when it looks at the pods and workloads that need to be run, and then it finds infrastructure that matches the attributes of the application. So whether you need CPU or GPU, or you have different CPU-to-memory ratios for the pods and applications that need to be deployed, Ocean goes out and uses the entire inventory a cloud provider has to offer, and then brings the correct infrastructure, at the best pricing model and the best optimization for that application, to the cloud. So within the infrastructure that's running, Ocean is constantly looking at the pods, how they need to be added to the cluster, and then finding the right compute underneath to make sure that everything is aligned. And what's very important to realize is that this is for both scale-up and scale-down.
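The matching Kevin describes, finding infrastructure whose attributes fit a pod's requests, can be reduced to a toy example: pick the cheapest instance type that covers the pod's CPU and memory requirements. The instance names, sizes, and prices below are made up for illustration, not any cloud provider's actual inventory:

```python
# Hypothetical instance inventory; real clouds offer hundreds of types.
INSTANCES = [
    {"name": "small",  "cpu": 2,  "mem_gib": 4,  "price_hr": 0.05},
    {"name": "medium", "cpu": 4,  "mem_gib": 16, "price_hr": 0.12},
    {"name": "large",  "cpu": 16, "mem_gib": 64, "price_hr": 0.45},
]

def pick_instance(cpu_request, mem_request_gib):
    """Cheapest instance whose capacity covers the pod's requests, or None."""
    fitting = [i for i in INSTANCES
               if i["cpu"] >= cpu_request and i["mem_gib"] >= mem_request_gib]
    return min(fitting, key=lambda i: i["price_hr"]) if fitting else None

print(pick_instance(3, 8)["name"])  # medium: small lacks CPU, large costs more
```

A real autoscaler also weighs pricing models (spot versus on-demand) and packs multiple pods per node, but the core idea is this fit-then-minimize-cost selection.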
So everybody thinks about scale-up, and you always need to make sure you have the right number of CPUs and the right amount of memory, and every pod has a different requirement. But also on scale-down, when servers need to be removed from the cluster, you need to make sure that you're removing the right infrastructure so there are no unscheduled pods in the cluster. Ocean is always, always, always making sure that pods can be scheduled and that you're never running with an unschedulable workload. Furthermore, Ocean gives you advanced cost showback and visibility into everything that's going on in your cluster. So what we're looking at here is namespaces, pods, and containers; they're all detailed so you can do showback and chargeback within your organization, or just understand how an application is running. So now what I'm going to do is bring up a demo, and we're going to see how Ocean scales, the efficiency of the autoscaler, and how that works. In this demo, we're going to be sending a lot of traffic to a Kubernetes cluster, to an application running there. As we send more traffic, the horizontal pod autoscaler is going to request that more and more pods be scheduled within the environment. Initially, traffic is fairly low. So if we go to our Ocean cluster, we're going to see that about four pods are scheduled, and this takes about one node to serve the right amount of traffic. Now what we're going to do is greatly increase the amount of traffic that is going to this application. As we increase the traffic, and as the horizontal pod autoscaler needs more and more pods to serve this application, we're going to need more infrastructure to do that, more CPU and more memory. You can see here, we're going to 50, 75 pods. We'll eventually need 100 pods to serve this application.
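The pod-count side of this demo is standard Kubernetes behavior: the horizontal pod autoscaler scales replicas according to desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), the rule documented for the HPA. A quick sketch with illustrative numbers (the percentages are not taken from the demo):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Standard HPA rule: scale replicas by the ratio of observed to target metric."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Traffic spike: observed CPU per pod hits 200% against an 80% target.
print(desired_replicas(4, 200, 80))   # 10
# Traffic ripped away: 5% observed against the 80% target.
print(desired_replicas(10, 5, 80))    # 1
```

The HPA only decides how many pods should exist; it is then the node autoscaler's job, Ocean's in this demo, to provide enough underlying compute for those pods to actually schedule.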
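The scale-down safety Kevin emphasizes, removing a node only if none of its pods would end up unschedulable, can be sketched as a greedy first-fit check. The capacities and requests here are made up for illustration; a real autoscaler also considers memory, affinity rules, and disruption budgets:

```python
def can_drain(node_pods_cpu, other_nodes_free_cpu):
    """Greedy first-fit: True if every evicted pod fits on some other node."""
    free = sorted(other_nodes_free_cpu, reverse=True)   # spare CPU per remaining node
    for pod in sorted(node_pods_cpu, reverse=True):     # place biggest pods first
        for i, spare in enumerate(free):
            if spare >= pod:
                free[i] -= pod                          # reserve capacity for this pod
                break
        else:
            return False  # a pod would be left unschedulable; keep the node
    return True

print(can_drain([1.0, 0.5], [2.0, 1.0]))  # True: both pods fit elsewhere
print(can_drain([3.0], [2.0, 1.0]))       # False: the 3-CPU pod fits nowhere
```

Running this check before every node removal is what guarantees the "never running with an unschedulable workload" property described above.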
If we go back to Ocean, Ocean is communicating with the Kubernetes scheduler, understanding that these pods are very, very CPU intensive, and adding more and more nodes to the cluster to ensure that this application can serve 100% of its requests. Now, it's also making sure that these nodes are bin-packed, so they're as efficient as possible. They're also all coming out of the spot market in this case. Ocean takes all of this into account as it's scaling up the nodes. It's making all of these decisions for you. Now, as the application's reaching peak capacity, what we're going to do is pull these requests down. So we're going to go from about 100 pods needing to be used back down to maybe one pod. We're going to just rip away all the traffic going to this application. Again, Ocean understands what's going on, so it's going to intelligently scale back the nodes that it needs to at the right time, and eventually your cluster goes back to one node. So again, Ingo, thanks for having me today. It's a great pleasure to talk about Ocean, and it's very exciting to see all the new technologies that are coming out at Insight this week.

Thank you, Kevin. It was really fantastic to see what Ocean can do for our partners and our customers. I really like this concept of automatically right-sizing resources up and down to eliminate resource sprawl and over-provisioning. Thanks again, Kevin. Containers and hybrid multi-cloud create incredible opportunities for all of us from a technology perspective, a business perspective, but also from a career transformation perspective. However, some people have somewhat unrealistic expectations. To be clear, containers are just one aspect of your journey to a hybrid multi-cloud world using cloud-native technologies. To really make an impact on your business, you need to look beyond just spinning up containers and claiming persistent storage.
With NetApp, we are with you from evaluation and development through production and ongoing operations and optimization. We help you optimize both storage and compute, and we provide full visibility across on-premises and cloud. The data management expertise that you already have today is a strong foundation to get started in this space. Here are some great sessions to attend that cover the main topics of this mega session across Astra, Ocean, and Cloud Insights. You can get started with our cloud services today at cloud.netapp.com, and you can engage directly with our developers over at the pub at netapp.io. And who doesn't like hanging out at the pub once in a while? With this, thank you so much for attending our session today.