Greetings, everyone. Welcome. My name is Minakshi Kaushik. In this video, I will provide a brief overview of two new Intersight services: Intersight Kubernetes Service and Intersight Workload Optimizer. I will also demo how to use these two services to build a private Kubernetes cloud in a few clicks.

As you can see from the IDC InfoBrief, complex applications, half of which are microservices, are growing rapidly everywhere: at the edge, in the data center, and in the cloud. Customers are looking to quickly build infrastructure, gain complete application and infrastructure visibility, and optimize application performance. The two new services address these customer requirements.

Intersight Kubernetes Service allows customers to deploy and lifecycle-manage production-grade clusters across the globe, on-prem or in the public cloud. On-prem, customers have the option of choosing either VMware or, to avoid the hypervisor tax, Cisco's HyperFlex Application Platform. Customers can also deploy Kubernetes clusters on bare metal.

Let's take a look at how Intersight can orchestrate the complete infrastructure stack, starting from server firmware management, to the hyperconverged layer, and then to the Kubernetes layer. This is Cisco's Intersight UI. Under Profiles, you can orchestrate different layers of the infrastructure stack, such as server and fabric management, the hyperconverged layer, and the Kubernetes layer. Creating a Kubernetes cluster requires only a few simple steps. I have already created a Kubernetes cluster, so let's see how simple it is to walk through the different layers of the infrastructure stack. Here is my Kubernetes cluster and its dashboard, and I can look at the different nodes in my cluster. To drill down to the infrastructure on which these nodes run, I can successively click the hyperlinks. My infrastructure is running on VMware ESXi, and here are the details of that ESXi host. I can drill down further and look at the details of the host.
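As a side note, a cluster deployed this way is an ordinary Kubernetes cluster, so workloads are deployed onto it with standard manifests. Below is a minimal sketch of a Deployment with explicit resource requests and limits (all names and values here are hypothetical, not from the demo); per-container requests like these are the demand signal that a workload optimizer matches against infrastructure availability.

```yaml
# Hypothetical example: a minimal Deployment with explicit resource
# requests/limits. The requests describe per-container demand; the
# limits cap consumption. Names, image, and sizes are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: "250m"      # guaranteed share the scheduler accounts for
            memory: "128Mi"
          limits:
            cpu: "500m"      # hard ceiling for this container
            memory: "256Mi"
```

A manifest like this would be applied with `kubectl apply -f` using the cluster's kubeconfig.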
The host is running on this UCS server, which I can drill down into further to look at the details of the server. So I can manage each layer of the stack very easily.

Let us now look at Intersight Workload Optimizer. Intersight Workload Optimizer bridges the gap between applications and infrastructure. The applications, or workloads, can be any virtual machines or containers running on-prem or in the public cloud. Intersight Workload Optimizer provides two main functions. First, it provides complete visibility of an application and its underlying infrastructure stack. Second, by matching application resource demand with infrastructure availability, it provides continuous workload recommendations in three major areas: performance optimization, cost reduction, and policy compliance.

Let's look at Intersight Workload Optimizer in action. Intersight Workload Optimizer creates a graph of the relationships between workloads (containers and virtual machines) and their underlying infrastructure. In my case, I only have a Kubernetes cluster on-prem, but Intersight Workload Optimizer can also look at workloads running in the public cloud. Intersight Workload Optimizer also provides recommendations on the best possible cost for these workloads. For example, I am running a simple Bookinfo app and a fully loaded deployment on my Kubernetes cluster. For my fully loaded app, and for infrastructure VMs in a similar situation, Intersight Workload Optimizer's performance recommendation is to scale up. I'm running a three-node Kubernetes cluster. The placement action provides an efficiency recommendation by suggesting that I move my container workloads to even out the load. Similarly, the stop action provides an efficiency recommendation to stop applications with zero traffic.

This concludes the brief overview of Intersight Kubernetes Service and Intersight Workload Optimizer. Please attend our virtual booth and live sessions at KubeCon. We look forward to seeing you there. Thank you and have a nice day.