Are we good? Hi, good afternoon, everybody. So the lunchtime lull has worn off and everybody's attention has perked back up. OK. Again, out of personal curiosity, I asked this during our first session and I was very surprised by the answer: for how many people here is this your first OpenStack Summit? OK, I'm constantly surprised by the response. Well, welcome, and I hope you're having a great week. This is the second of four Cisco sessions we have in our room today. This one is on the Cisco Virtual Infrastructure Manager; we call it CVIM. Chandra Ganguly, one of our engineering directors, is going to start off, and I will let him introduce the rest of the speakers when he's all set. We will build in some time for Q&A at the end, about 10 minutes. Chandra?

Thank you, Gary. Good afternoon, everybody. My name is Chandra Ganguly. I'm the director of platform engineering for Cisco Virtual Infrastructure Manager. With me on stage today are Ajay Kallambur and Neelima. Ajay is the principal engineer for CVIM, and Neelima is one of the principal engineers for Cisco Container Platform. This is a multi-part talk, the first of two: following this one, there will be another talk about how we have taken CVIM, adopted Ironic, and are essentially running container workloads on it.

In terms of the agenda, we will first give you a high-level platform overview of what Cisco Virtual Infrastructure Manager is, and then talk about how we have evolved this platform across multiple dimensions: hardware, software, workloads, and networking. After that, we will talk about another Cisco platform called CCP, Cisco Container Platform. We'll give you an overview of that and show how CVIM uses CCP as an application, so that we can evolve this platform to host workloads running not only in VMs, but also in containers and on bare metal. We'll conclude with a demo and a summary of where we are today and where we are going in the future.

So at a high level, why did we even do CVIM, the Cisco Virtual Infrastructure Manager? When we started this journey about three years back, what we found was that customers were doing a lot of this cloud deployment on their own, DIY. While it was taking them anywhere from one to two weeks to get it going, the real problem was day-N operations, not just getting it going by hook or by crook. They were having a lot of problems when there was a security update, when a compute node or a controller went bad, or when they wanted to add more storage nodes; those problems were not thought through. The other option was to go through a system integrator, which basically takes best-of-breed components from multiple vendors and brings them together. That kind of works, but many of those engagements are done through service contracts. As a result, the system integrator does it once, but how do you replicate it across 20 or 30 sites? How do you scale it? In many cases, in the absence of automation, every site becomes a snowflake, with slightly different variations of what you have.
Those are the two problem statements on the basis of which CVIM was architected and envisioned: you want full, true automation, meaning every command or operation is a one-click, one-button, one-CLI approach with a REST API front end, which not only does the basic install of the cloud on day zero, but lifecycle-manages it across N releases, N software updates, N security updates, and so on. While we were doing that, the cloud initially started with very Cisco-centric hardware; as we go through this presentation, you will see how it has evolved to handle third-party computes or a complete third-party infrastructure, and how it has evolved on the switch, storage, and networking side as well. That, at a high level, was our ambition three years back when we started this.

One thing we also made sure of is that we do not deviate from the HC model. We have a very clear demarcation of where the CVIM cloud infrastructure work stops, and we expose all the OpenStack APIs consistently, so that if you are running on an OpenStack cloud somewhere else and you bring your workload onto CVIM, you shouldn't have to change anything as long as your automation is done using the OpenStack APIs. What we have seen over the last three years, as customers have brought in third-party VNFs (not just Cisco VNFs), is that our only questions to most of them have been: has this worked on OpenStack before? What scale parameters did you use? What scale did you run it at? Did you need any tweaking of OpenStack parameters? Because we know that many of these VNFs are not truly cloudified yet, and as long as we know about those needs, we can adapt the cloud to them through an optional feature, and we have seen it work every time.

So that's basically where we are at a high level. How did we achieve it? For one, we are in partnership with Red Hat for the OpenStack and kernel bits; we take the repos from Red Hat and do a monthly sync. Then we have our own Ansible- and Python-based automation with which we deploy this entire cloud. Ajay will go into slightly more detail on our architecture in the next few slides, and there you will see why this has worked elegantly for us. The key is that whatever is marked in green is the scope of CVIM. There is a REST API in front, through which you can manage multiple instances of the cloud from the same UI. You can also write your own tooling or client to drive those APIs and manage the cloud; it's up to you. Essentially, everything comes through the REST API.
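For a rough sense of what driving those APIs from your own client could look like, here is a minimal Python sketch. The management-node URL, port, resource path, and response fields are hypothetical placeholders, not the actual CVIM REST API; only the general pattern of calling the management endpoint and inspecting the result is the point.

```python
import requests

# Hypothetical endpoint and credentials; not the real CVIM API schema.
BASE_URL = "https://cvim-mgmt.example.com:8445/v1"
AUTH = ("admin", "example-password")

def list_nodes():
    """Query a (hypothetical) node inventory endpoint and print each node."""
    resp = requests.get(f"{BASE_URL}/nodes", auth=AUTH, timeout=30,
                        verify=False)  # lab setup with self-signed certs
    resp.raise_for_status()
    for node in resp.json().get("nodes", []):
        print(node.get("name"), node.get("role"), node.get("status"))

if __name__ == "__main__":
    list_nodes()
```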
With that, I'll hand the next few slides over to Ajay, who will talk about the architecture of this CVIM cloud, or infrastructure.

Thanks, Chandra. So we've seen a high-level picture of what Cisco VIM does: at a fundamental level, it deploys OpenStack and manages the lifecycle of the OpenStack cloud. But how do we deploy OpenStack? Cisco VIM has been around for about three years; the project started in about 2015, and at that point we had to make a critical choice about how to run the OpenStack control plane. We took a gamble, because three years back it was not common to run the OpenStack control plane in containers. We started at just about the same time as the upstream project called Kolla, and went down a similar path in a similar timeframe. So the entire OpenStack control plane is containerized in our case, and we have learned from three years of experience about the caveats of running OpenStack in containers. There are lots of things at play when OpenStack services run in containers that you won't see when they run on the host: SELinux, container permissions, exposure to host volumes, router namespaces, and network namespaces all behave very differently when a service runs in a container versus as a host process.

But what do containers give us? We can update the cloud seamlessly with minimal downtime. When there is no DB schema change (say you are on a particular OpenStack release like Queens and you take minor bug fixes, where there is normally no database schema change), we allow rollback. So if your update fails, there is an automatic rollback and all the services are restored to where they started from. That's useful to have. We don't support rollback for major version upgrades.

If you look at our design, it's very simple. Everything runs in containers, including the entire OpenStack control plane and also the Ceph services like Ceph Mon, Ceph Manager, and so on. We have a simple HA design where HAProxy runs active/active/active for all services, with the exception of Galera, which is better run active/backup. The bottom portion of this slide shows the various steps the installer goes through to stand up an OpenStack cloud. You start, and we'll look at this in more detail on the next slide, by bringing in all the packages you need; one thing this installer supports is a completely air-gapped install, which means technically you need no access to the internet to do the installation, so you can do it completely offline. The first step is input validation. Any installer needs input: you need to tell it which bare metal nodes are available, what the CIMC information for those nodes is, and so on. We validate that all your input is sane, and if it's not, we fail right away, so you don't learn much later in the install that something is wrong. Then we do a bare metal install; we do it for Cisco servers, UCS B-Series and C-Series, and for third-party servers; there are a few third-party platforms we support right now, and that list is growing. Then there is node-level setup: in addition to installing OpenStack, you need to make sure each node has the right host packages, the kernel is updated, and things like NTP are configured, so that common node-level setup is done. Then storage is set up; Chandra will talk a little more about this, but you can run storage co-hosted with the compute nodes and controllers, or you can run dedicated storage nodes, and we support all those configurations. Then the OpenStack service orchestration happens, which can also include an Ironic bare metal install; Abhishek will talk about that in the next presentation. Then there is an optional CCP install, so you can stand up a cloud that supports not just virtual machine workloads and bare metal workloads through Ironic, but also container workloads through Cisco Container Platform; Neelima will talk about that in a bit. And the final step is a complete self-test: we run throughput tests, VM launch tests, everything, at the end of the install to make sure everything works.
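In spirit, that flow is a fail-fast pipeline: validate everything first, then run each stage in order and stop at the first failure. A purely illustrative Python sketch of that shape follows; it is not CVIM's actual installer code, and the stage names and checks are placeholders.

```python
from typing import Callable, Dict, List, Tuple

def validate_input(cfg: Dict) -> None:
    # Placeholder checks; the real installer validates the full setup data
    # (node inventory, CIMC/BMC credentials, networks) before touching hardware.
    required = ("nodes", "networks", "cimc_credentials")
    missing = [key for key in required if key not in cfg]
    if missing:
        raise ValueError(f"invalid input, missing: {missing}")

def run_stages(cfg: Dict, stages: List[Tuple[str, Callable[[Dict], None]]]) -> None:
    for name, stage in stages:
        print(f"running stage: {name}")
        stage(cfg)  # any exception aborts the install at this stage

stages = [
    ("input-validation", validate_input),
    ("baremetal-install", lambda cfg: None),       # placeholders for the real work
    ("common-node-setup", lambda cfg: None),
    ("storage-setup", lambda cfg: None),
    ("openstack-orchestration", lambda cfg: None),
    ("self-test", lambda cfg: None),
]

run_stages({"nodes": [], "networks": [], "cimc_credentials": {}}, stages)
```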
Moving on, this is what I mentioned about the two modes of doing the install. The first mode is a connected install, which means you have access to the internet. In addition to the cloud nodes, we ask for an extra management node, which is where the installation actually runs from; in the future this will be HA. Or you can do a disconnected install, in which case you download all the artifacts onto a USB drive, plug it into the management node, and then begin the installation. Either way, the first step is that we stand up a Docker registry on the management node. Remember that all our services run as containers, so this Docker registry is where the controllers, the compute nodes, and all the other nodes pull their Docker containers from during the installation. This is how we achieve a completely air-gapped install, and you can operate in either connected or disconnected mode. Moving back, Chandra will now talk a little bit about the evolution of Cisco VIM across the different platforms.

Thank you, Ajay. Ajay has already alluded to how we have evolved this platform to handle software updates and upgrades, but that's not the only dimension in which it has evolved. Our customers have demanded and asked for evolution of the platform across footprints; we'll talk more about what that means in the next slide. On hardware support, we started with Cisco UCS C-Series and B-Series, we moved on to HP third-party compute, and now we are going toward pure NFVI as software, adding support for Quanta early next year as third-party cloud infrastructure. We have also evolved our networking from OVS to VPP, plus SR-IOV, and we include both ACI and VTS as SDN controllers; the ML2 plugins are included as part of the offering too. In terms of storage, we again started with Ceph. Ceph itself has evolved from a single backend to multi-backend, because we have customers who need high IOPS: we can have SSD-based Ceph and hard-drive-based Ceph in the same cloud, so when you host a VM you can attach it to the right backend. We now also support NetApp, SolidFire, and SwiftStack; all of that is part of the offering. You have to decide on day zero which way you want to go, and based on your requirements, we go that way.
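To make the multi-backend storage point concrete: in OpenStack, different Ceph backends such as an SSD pool and an HDD pool are typically surfaced as distinct Cinder volume types, and you pick one when creating a volume for a VM. Here is a small openstacksdk sketch under that assumption; the cloud name, volume-type names, and server name are placeholders.

```python
import openstack

# Connect using an entry from clouds.yaml; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# "ceph-ssd" and "ceph-hdd" are hypothetical Cinder volume types an operator
# might define for the SSD-backed and HDD-backed Ceph pools.
fast_vol = conn.create_volume(size=20, volume_type="ceph-ssd",
                              name="db-data", wait=True)
bulk_vol = conn.create_volume(size=200, volume_type="ceph-hdd",
                              name="archive", wait=True)

server = conn.get_server("my-vm")      # an existing VM; placeholder name
conn.attach_volume(server, fast_vol)   # this VM now uses the high-IOPS backend
```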
The second part of today's talk will focus on how we have evolved the workload support from VMs to containers and then also to bare metal; you'll see that shortly. In terms of the hardware footprint, we started with a full-on pod, where you have dedicated controller, compute, and Ceph nodes. A lot of our customers are service providers, and they came back and said: dedicated Ceph nodes are fine, but my workload doesn't need that much Ceph storage, so I'm burning a lot of real hardware relative to what I'm actually using it for. So then came the evolution to what are called hyperconverged nodes, where some nodes act as both storage and compute. But that wasn't good enough for many of our customers, because a lot of them want to push this cloud to the edge. Today, as you know, edge ultimately means real compute at the far end, and we are still evolving toward that point, so what we did in between is called a micropod: you take three servers and make them act as control, compute, and storage nodes, and then as your compute capacity needs grow, you just add more compute nodes. Obviously, in that solution the storage is limited to the first three nodes.

Not only did we evolve the footprint, we've also evolved the hardware that we support. Very early on we started with all Cisco UCS C-Series or B-Series. Then third-party VNFs started showing up, which was not a big deal from our point of view as long as they adhered to the OpenStack APIs. But then there were customers who had other third-party computes and wanted to reuse them for the cloud, so we said: the control plane stays Cisco, but the data plane, the computes, can be third party, and that's something we supported. Very soon we will support a full third-party infrastructure on which the NFVI will run. Throughout all this evolution the NICs have also evolved, from just the Cisco VIC to Intel NICs.

One of the core parts of this talk is the evolution of the workloads. Again, we started with VMs, which is standard. Today Neelima is going to talk about how we have taken this platform and added container support on top of it, using CCP as an application. Abhishek will subsequently talk about how we are also supporting Ironic, on which you can run containers as well. One use case I'd like to talk about before I hand it over to Neelima, and which is very relevant to this region, is Deutsche Telekom. Deutsche Telekom has taken the Cisco VIM infrastructure platform and deployed it in production, from order to deployment in less than three months, as part of what they call their edge program. Don't confuse it with edge technology; this is Deutsche Telekom's edge program. Here they've taken a full-on cloud, put it almost at the boundary of the premises of their zone of control, and they're running voice traffic on it with very low latency. So it's very relevant to this region, and we are continuing to work with them on expanding these clouds from four sites to N; they have more requirements coming through, and we are working with them on the evolution of the cloud and the platform. With that, I will hand the baton over to Neelima. She will talk about Cisco Container Platform.

Thanks, Chandra. Good afternoon, everyone. So what is Cisco Container Platform? It is a turnkey solution for deploying production-grade container clusters. We deploy 100% upstream native Kubernetes. The Kubernetes ecosystem itself is very large, and there are lots of options for anything you want to do. Cisco Container Platform provides a curated full stack with everything from logging and monitoring to a built-in Docker registry, and everything is tested and supported by Cisco.
We have also optimized the platform for hybrid cloud applications. We have a partnership with Google around GKE, and we have an architecture where you can run applications spread across GKE and your on-prem cluster. Last week we announced our latest offering of CCP, which allows you to manage your EKS clusters from your on-prem CCP dashboard; you can deploy local clusters and clusters on EKS, then spread your applications across them and access resources across the two. CCP provides integrated networking and storage options. It runs today on HyperFlex and vSphere environments and on AWS, and today we are going to show a demo of it running on CVIM. For all of these environments we have multiple options for CNI plugins as well as storage provisioners. We provide a flexible deployment model: we support VM deployments across multiple infrastructures, and we are also going to support bare metal very soon; Abhishek will be talking about that in the next talk.

So what features does Cisco Container Platform provide? It starts with being able to deploy multiple clusters on any infrastructure you choose. We also do the complete lifecycle management of these clusters: everything from giving you the OS image to provision your virtual machines or bare metal nodes, to upgrading the OS, applying any required patches, installing and upgrading Kubernetes, installing and upgrading the add-on services on top of Kubernetes, as well as doing any kind of repair operations. So if one of the physical hosts goes down, we automatically detect that and start up a new node in a different location. The complete lifecycle is managed by CCP. We also integrate with Active Directory and AWS IAM, so if you have an application running on-prem as well as in AWS, both can use the same IAM. We are looking at integrating with Keystone for the CVIM integration; that's a future roadmap item, not yet completed. So now we're going to look at the integration of CCP with CVIM. Ajay will first give us an overview, and then we'll go into a deep dive of the architecture.

Thanks, Neelima. So remember that we talked about a single cloud which can support both virtual machine workloads and container workloads. How do you go about implementing such a cloud? Assume that you have an OpenStack cloud up, which is Cisco VIM. CCP, as Neelima explained, is basically Kubernetes orchestration as a service. The CCP control plane, which consists of a bunch of virtual machines, is installed in a common OpenStack tenant, and the CCP API is exposed through an OpenStack floating IP. At this point, you can invoke that CCP API to create one to N Kubernetes clusters in OpenStack; you can create as many as you want. Right now we support creating them on virtual machines; in the future, we will support installing these on Ironic bare metal nodes. As you can see here, when you install Kubernetes in OpenStack, we integrate with the OpenStack cloud provider: we use Cinder for persistent storage; authentication is done through Keystone v3 (this is supported from Queens on, and there is no support for Keystone v2, so we're doing Keystone v3); and exposing your Kubernetes services is currently done through Neutron LBaaS, though we will be exploring the Octavia ingress controller in the future.
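For reference, Keystone v3 authentication of the kind that integration relies on looks roughly like this with the keystoneauth1 library; the endpoint, domain names, and credentials below are placeholders.

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Placeholder endpoint and credentials for a Keystone v3 (Queens-era) cloud.
auth = v3.Password(
    auth_url="https://openstack.example.com:5000/v3",
    username="ccp-user",
    password="example-password",
    project_name="ccp",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

# A scoped token and project ID of the sort the Kubernetes OpenStack
# cloud provider uses when it talks to Cinder and Neutron on your behalf.
print(sess.get_token())
print(sess.get_project_id())
```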
So with this integration done, you now have a single cloud which can support both virtual machine workloads and container workloads. Neelima will now talk a little more about how this is done and also show the demo of how the whole thing plays together.

Thank you, Ajay. So, how many of you here are familiar with the Kubernetes architecture? Okay, let's have a brief overview of it so that we level-set. At a high level, these are the primary components of Kubernetes: you have one or more master nodes, and you have worker nodes on which you run your workloads. The three primary components of any Kubernetes master are the Kubernetes API server, the controller manager, and the scheduler. When you create a resource in Kubernetes, whether you do it from the CLI using kubectl, from the Kubernetes dashboard, or through the Kubernetes API, you're talking to the Kubernetes API server. So you can say: create a ReplicaSet with three pods. The API server takes that request and stores it as a key-value pair in the etcd data store. Once it is stored, it is up to the controller manager to invoke the appropriate controller for the object you specified. For a ReplicaSet, there is a ReplicaSet controller, which comes along and says: okay, this object has been created through the API server, but does it exist yet? Are the three pods running? No, so let me go ahead and create those pods. So the management of the object itself is done by the controller running in the controller manager. And once these objects are created, they need to actually be scheduled somewhere, and the Kubernetes scheduler takes on the task of scheduling them onto one of the worker nodes or the master nodes, depending on the requirements you've specified.

All of these components in Kubernetes are extensible; everything can be added onto. For example, if you have a new type that you want to define and Kubernetes does not know about that type, you can define it as a custom resource definition. Once you define a custom resource, Kubernetes can start consuming objects of that type: you can go ahead and say kubectl create with that type, and it stores the object in etcd. But what do you do with it? You can also create a custom controller, where you say: when I see an object of this new type, or even of an existing type, I want to apply this new behavior. This pattern of defining a custom resource and a custom controller together is called the operator model within Kubernetes, and we're going to show a demo of CCP running as an operator.

This is an example of how you would define a custom resource. You can say: this is the API version for our custom resource, named OpenstackCluster. Once you define it and give it to Kubernetes, you can see it listed at the bottom as a new custom resource that has been defined. Then you take that custom resource definition and instantiate it: you create a new object where you say, create me a new cluster with three masters and two workers, plus a bunch of other information such as what network to use and where you want to place it. All of that is provided in your specification for this resource. So with that combination of the custom resource and the controller, you're able to actually create a cluster.
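As a rough sketch of what that looks like programmatically, here is the same idea with the Kubernetes Python client. The API group, version, kind, and spec fields are illustrative placeholders, not CCP's actual CRD schema.

```python
from kubernetes import client, config

config.load_kube_config()          # kubeconfig pointing at the control-plane cluster
api = client.CustomObjectsApi()

# Hypothetical custom resource describing the desired tenant cluster.
cluster = {
    "apiVersion": "ccp.example.com/v1alpha1",
    "kind": "OpenstackCluster",
    "metadata": {"name": "demo-cluster"},
    "spec": {"masters": 3, "workers": 2, "network": "demo-tenant-net"},
}

api.create_namespaced_custom_object(
    group="ccp.example.com", version="v1alpha1",
    namespace="default", plural="openstackclusters", body=cluster)

# Roughly equivalent to `kubectl describe openstackcluster demo-cluster`:
obj = api.get_namespaced_custom_object(
    group="ccp.example.com", version="v1alpha1",
    namespace="default", plural="openstackclusters", name="demo-cluster")
print(obj.get("status"))
```

The controller watching this type is what turns the desired spec into actual OpenStack VMs, which is the operator pattern described above.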
And that's exactly what CCP does. CCP is bootstrapped using a single-node Kubernetes cluster, because CCP itself runs as a Kubernetes application. It then creates a three-node control plane onto which it installs itself again; for high availability it is a multi-node cluster, and it runs as a set of operators within Kubernetes. The first one is the cluster controller, which is in charge of creating one or more tenant clusters. Obviously, if you're building a containers-as-a-service platform, you're usually interested in creating more than one. So you create these tenant clusters and then you say: I want N nodes in this cluster, and 10 in another cluster. There's a node group controller which understands these node groups, instantiates them appropriately, and manages them as well. There's also an add-on controller which handles the services we provide, whether it's logging or monitoring; all of those services are managed by the add-on controller.

Next we're going to look at a demo. This is the scenario we're going to demonstrate today: a workload running in a CCP tenant cluster. First, we'll show the control plane running as an OpenStack application within CVIM. Then we'll create a demo cluster using CCP and launch an application in that Kubernetes cluster. The application is a WordPress application, but it's going to consume a MySQL service that runs on a VM. We're going to show how LBaaS and Cinder volumes are provisioned, both from Kubernetes and for the VMs. This is the topology we're going to demonstrate: the control plane runs as a five-VM cluster with a dedicated OpenStack tenant network, the tenant cluster is also a five-VM tenant Kubernetes cluster, and there is a separate MySQL instance attached to the same network as the demo tenant cluster.

So with that, we'll go to the demo. First, we log into CVIM as the CCP user, who has access to the CCP control plane. We can see the five VMs running there, attached to the network we talked about; this is a dedicated OpenStack network. Then we take the custom resource we talked about before, with three masters and two workers, and we do a kubectl apply, so we're basically creating that cluster using CCP. We can then start seeing this object of type OpenstackCluster appearing within Kubernetes, and we can monitor it just as you would monitor any other Kubernetes object. You can do a kubectl describe openstackcluster with the name of the object you created and see the status: it shows you the status of each of the VMs that has been created, the spec with which you created it, and any other resources it may have created and associated with that cluster. In addition to looking at it from Kubernetes, we're also going to take a look at what it looks like from OpenStack. Within OpenStack you can see the VMs get created; that's one VM right now, and we'll slowly see more and more VMs come up within that tenant. There are also a bunch of Neutron resources that CCP creates to support this Kubernetes cluster. We create an internal network, unless you provide your own network.
We create a router to attach this network to the external network, and a load balancer is created for each cluster. This load balancer provides access to the Kubernetes API server across the multiple masters, because we created a three-master cluster. So now we can see CCP creating this cluster, and while that's happening we'll go ahead and start up our MySQL VM, on which we'll run the MySQL server. This is a plain VM; there's no Kubernetes involved here, it just has some attached storage. But we're going to put it on the same network as our tenant cluster, so that the Kubernetes services running on the tenant cluster can access this MySQL server directly.

Once the MySQL instance is ready, we go back and see that our demo tenant cluster has been created, and we can check the status of the VMs. We can see that MySQL is still coming up, and it takes a little time for the Kubernetes cluster to start spinning up. We can see that two masters are ready, and in a minute all three masters come up, at which point we are ready to start deploying workloads on this cluster. We can log into this cluster, get the kubeconfig for it, and start deploying the applications there. We can inspect the resources that have been created: the VMs we have, and the volumes required to manage persistence for the Kubernetes cluster itself. The etcd data is stored on a Cinder volume, and the MySQL data is stored on a Cinder volume as well. And as I said, we create that load balancer which front-ends all your Kubernetes API server requests. We can log into MySQL and check that it's up and running; it does not know about WordPress, because we haven't deployed that application yet. We can log into the Kubernetes cluster and check that there are no resources, no persistent volumes, and no deployments yet, but there is a StorageClass that CCP deploys by default; this is one of the add-ons we have, backed by Cinder.

We create a Secret to tell our application how to access the MySQL service. And we refer to the MySQL server using a Kubernetes Service, even though it's actually running in a VM, because we want to keep things generic and Kubernetes-native. So we specify it as a Kubernetes Service with an endpoint whose IP address corresponds to your MySQL VM's IP address. At this point, the application can seamlessly refer to the MySQL server. So let's go ahead and deploy the application; it's created. Let's deploy the MySQL Service as well, so that WordPress can start accessing MySQL. Both of them are created, but not yet ready. So how do we access this application? The way to access it is through the load balancer it has requested. We can see that an IP has been generated; let's see if it's ready. No, it's still starting up. While it's starting, let's look at the resources that got provisioned through Kubernetes: there's a dynamic volume being created using Cinder, and there's a load balancer created by the OpenStack cloud provider within Kubernetes. We can see that the WordPress application is able to access MySQL: we look at the databases and see that a new database has been created. And within a few seconds the application is ready as well, so we can check that its status is ready and then go back and see if we can now access the application.
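For reference, the Service-without-a-selector pattern used above to front the MySQL VM looks roughly like this with the Kubernetes Python client: a Service with no selector, a manually managed Endpoints object pointing at the VM's IP, and a Secret for the credentials. The namespace, IP address, and password are placeholders.

```python
import base64
from kubernetes import client, config

config.load_kube_config()            # kubeconfig of the tenant cluster
v1 = client.CoreV1Api()

MYSQL_VM_IP = "10.10.1.25"           # placeholder: the MySQL VM's tenant-network IP

# Service with no selector, so Kubernetes does not try to match any pods to it;
# in-cluster clients simply resolve "mysql" and connect on port 3306.
v1.create_namespaced_service("default", {
    "apiVersion": "v1", "kind": "Service",
    "metadata": {"name": "mysql"},
    "spec": {"ports": [{"port": 3306}]},
})

# Manually managed Endpoints object that points the Service at the VM.
v1.create_namespaced_endpoints("default", {
    "apiVersion": "v1", "kind": "Endpoints",
    "metadata": {"name": "mysql"},
    "subsets": [{"addresses": [{"ip": MYSQL_VM_IP}], "ports": [{"port": 3306}]}],
})

# Secret the WordPress deployment can reference for the database password.
v1.create_namespaced_secret("default", {
    "apiVersion": "v1", "kind": "Secret",
    "metadata": {"name": "mysql-pass"},
    "data": {"password": base64.b64encode(b"example-password").decode()},
})
```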
So once we are able to hit the application, we have connected an application running on a Kubernetes cluster to a MySQL server running on a VM. With that, I'll hand it over to Chandra to summarize.

Thank you, Neelima, and thanks to both Ajay and Neelima. In summary, the CVIM platform has continued to evolve to address customer needs and market trends. Of late, we are seeing a lot of interest in the field, with requirements coming in across multiple dimensions of evolution: footprint, hardware, networking, storage, and workloads. With that, I'm opening it up for questions. Any questions or comments you might have? Thank you.

Before we go into Q&A, and we have about three or four minutes left for that, I just wanted to let you know two things. One, we have demos of the CVIM product at our booth in the marketplace, if you want to join us there. Secondly, if you missed anything, and I saw a lot of people taking pictures, videos of all of the presentations go up on the OpenStack Foundation YouTube channel, usually within 24 hours, if not this evening. So if there's anything you missed, feel free to go to the videos on the OpenStack Foundation YouTube channel. With that, I've got a mic here and we've got two or three mics in the aisles, so we can take a couple of questions.

I have a small question about the architecture of the different models you have. You have slides with four models; I don't see ACI there. Is there any reason for that?

So ACI is part of CVIM. If we look at the networking, we already include ACI and VTS as SDN controllers; it's just a matter of how much content I can squeeze onto one slide. Yeah, so the top-of-rack can be standalone without ACI, like an N9K, or it can be third party, but we also support the full ACI fabric. In fact, most of my customers, typically with 50 to 100 nodes, are all running with ACI. And ACI works in two ways: you can do ships-in-the-night, or you can have the Neutron plugin program ACI, so it can be dynamic.

Okay, one more. As far as I understood, you can install CCP on vSphere, right? Yes. But there's no support for NSX right now? No, there is no explicit support for NSX right now. And do you plan to do this, or is it a no-go because you have your own solution? We don't have it in the near-term plan. Again, based on customer requirements, if the requirement comes and we have to do NSX, we'll do it. One of the things our philosophy has been is: is there a true requirement, versus these being science projects? And we prioritize it against all the other requirements.

Okay, that looks like it. Chandra, Neelima, Ajay, thank you again.