All right. Good morning, everyone. My name is Hari Kanan. I work for VMware, and I'm joined by my colleagues Giri and Mayan, who do the actual work, so they'll get into more of the details. I'll set the stage for what we're trying to do here, and we'll go from there.

The purpose of this session is to discuss how we adapted OpenStack to provide an enterprise-grade Kubernetes distribution on the vSphere SDDC stack. That's the primary goal: using OpenStack to run Kubernetes on vSphere and NSX. We will extend this to support other clouds such as AWS and Azure in the future, but the primary purpose of this exercise, and of this talk, is to describe how we adapted OpenStack for the vSphere NSX stack. The standard disclaimer applies: as VMware, there are things we're not allowed to discuss in terms of futures and products.

Let's get the basics out of the way. Most of you are familiar with Kubernetes, so I don't think we need to explain it; this is just to level-set the conversation. You have a set of master nodes that run the core Kubernetes services: the controller manager, the API server, etcd, and the scheduler. We spin up the master nodes as VMs running on a hypervisor. Then you have the worker nodes, which run the kubelets and the proxies; those are what you scale in and out depending on the size of the cluster. Most of you here are familiar with this, but this is the deployment profile we're looking at in the end state.

As a product manager, whenever we build a product we typically start by thinking about the personas involved. Mileage varies from company to company; there is no one standard set of personas, but we identify capabilities and roles based on the personas we talk to. As you're probably aware, the primary audience for what we have historically built and sold is the IT admin community. The cloud administrator role is evolving, but fundamentally this person is responsible for determining which cloud or environment workloads run in; most developers typically don't care whether it's running on AWS. The product we're building is in the enterprise context: we're not primarily targeting startups that run all their operations on AWS, Azure, or Google, but rather the data centers of large enterprises, the banks, the financial institutions, and so on. Those are the customers we go after, and there, there is a defined role called the cloud administrator, someone who decides "I'm going to run in this data center versus that one, or combine it with a public cloud," and who then creates projects for different lines of business or initiatives, project A, project B, and so on. That's one role, and it sits on the provider side. On the consumer side, the consumption side, you have two more roles.
Many organizations combine those roles, but some keep them separate: the DevOps admin and the developer. The developer, by and large, doesn't care where his apps run; for dev/test we've seen various environments used, but he's interested in consuming the software, not in the underlying infrastructure layer.

Then there are two consumption models, and again it's a maturity curve. Many customers start with elemental container provisioning, mostly on a Mac or running a simple Docker Compose, and as they get comfortable and start building real-life applications they move into more complex stacks, Docker Swarm, and now Kubernetes, which has become the de facto container orchestration platform. That's what motivated us to start building what we'll be talking about in the rest of the presentation. Many start on the left side: "I want to build and deploy my own Kubernetes cluster and then try my application." Then there's a more mature model: "I don't care too much about how Kubernetes itself is instantiated. Just give me an endpoint; I'll use kubectl and the other tools I like and start building my applications." Those are the two consumption models we've seen. Going back to who our primary customer base is, the IT department in a shared environment serving multiple lines of business and projects, the initial product focuses on the right side, and we'll address the needs of the left side as well.

When we started this project in the fall of last year, the natural question that comes to everybody's mind is: what's the big deal? People know there are five different ways to stand up Kubernetes; what's so difficult, and why is this still a problem in 2017? It turns out there are real challenges, and here are the specific ones we identified. The first is what I just said: there are numerous ways to do it, and while they work, and by and large many companies and teams have adopted them, the customers we typically go after, the enterprises, are looking for a more prescriptive, enterprise-grade Kubernetes capability, and many of them are looking for enterprise-grade Kubernetes specifically on the vSphere NSX stack, what VMware calls the SDDC, the software-defined data center. There is no single way to do that, no authoritative way. It's an interesting problem: five different ways is a good problem to have, and yet there's no prescriptive, authoritative way to do this.
So that was issue number one, and we said, okay, we'll come in and define a more standardized way to deploy Kubernetes. Then there were a few other interesting challenges. In a KVM environment you can set up GlusterFS or Ceph or another shared storage, but on a VMware stack those capabilities are not as mature or well defined, which meant we had to find a way to support persistent volumes. That was another challenge we had to overcome.

Then there were networking issues. There are multiple open source container networking technologies, Flannel, Calico, and so on, but nobody had provided an easy way for all of this to be stitched together, deployed for a customer, and supported. Within our own company, NSX, our flagship SDN solution, had started building a container networking capability, and we wanted to make sure we could integrate that as well. And while some of the open source projects, as well as NSX, had started working on pod-level micro-segmentation, another key requirement enterprise customers were asking for, we had to stitch together multiple layers: both VM-level micro-segmentation and pod-level micro-segmentation. That becomes an interesting problem, and we wanted a single stack that provides both capabilities for our customers.

Load balancing was a similar story. If I want to stand up a Kubernetes service, I have to figure out a way to set up a load balancer. There are NGINX- and HAProxy-based load balancers, but at the same time NSX has a load balancer that customers are already using elsewhere, and we wanted a prescriptive way to stand that up in the same Kubernetes context.

And there was no defined way to do access control. In Kubernetes 1.6 there have been a lot of improvements with respect to RBAC, but when we started, sometime in the fall of last year, there was no authoritative way to do RBAC in Kubernetes. There were some homegrown solutions, but integration with AD or LDAP, a key requirement for any corporate environment, was not well defined, and neither was the overall notion of authentication and authorization. Someone had to stitch these things together. So these were the specific challenges: multi-tenancy, LBaaS, micro-segmentation, container networking, all of it was turning out to be a challenge, especially in the context of running on vSphere and NSX, and hence we started doing this.

Soon we realized that stitching all these individual components together, how do I spin up a VM, how do I spin up a network, how do I spin up a set of storage devices, has a name: this is nothing but an IaaS capability. Kubernetes has plugins for various IaaS providers, and we realized OpenStack was the right platform for this. The very first version of the OpenStack integration we did to support Kubernetes on the VMware stack used the full-fledged OpenStack. That was very useful, and some of our OpenStack customers loved it, but the feedback we received was: "OpenStack is great if I need to run parallel stacks, both VMs and Kubernetes; that works fine. But if I'm just doing a separate project to run Kubernetes,
I don't want the overhead of OpenStack." That's what set us on a slightly different path early this year: we started figuring out how to build a thin IaaS based on OpenStack, a very stripped-down version that takes only the minimum components, and still be able to deploy an enterprise-grade Kubernetes distribution with it. That's where we'll spend the rest of the presentation today: explaining how we stripped OpenStack down to its bare minimum and can still stand up a Kubernetes deployment on the VMware stack.

The last thing I want to say is that while this is how we started, to provide an OpenStack-based Kubernetes you can also use a full-fledged OpenStack, if you're an existing OpenStack customer or your use cases require spinning up both VMs and containers; we support both models. In the future we also plan to support other cloud providers such as AWS, Google, and so on. The product is architected so that the infrastructure layer and the Kubernetes control plane lifecycle management are cleanly abstracted, so we can keep adding cloud providers as we proceed. With that, I'd like to hand over to one of our key developers on this project, and he can explain in a little more detail how we took the full OpenStack, stripped it down to its minimum, and provide a sort of appliance model for standing up a Kubernetes distribution. Giri?

Thanks. My name is Giri. I work with Mayan and Hari in the cloud management business unit at VMware. As Hari mentioned, one of the things we realized while building the Kubernetes product was the need for an infrastructure-as-a-service layer that offers multi-tenancy, brings together compute, networking, and storage, and exposes an interface that is well adopted within the community, one that works with Kubernetes both through the Kubernetes cloud provider interface and as a reliable way to deploy the Kubernetes nodes themselves.

So that's the first part: the infrastructure provider deploys the Kubernetes nodes, the master nodes and the worker nodes, provides high availability, and provides a reliable way to upgrade Kubernetes as well. As part of day-two operations we also want to provide auto-scaling and the ability to resize clusters, and the infrastructure provider handles that too. The second part is that Kubernetes has well-defined interfaces for cloud providers that supply services such as load balancers, persistent volumes, and authentication and authorization, and we wanted to support those through the same infrastructure provider. Here's an example: a service is being deployed, and one of the requirements of deploying it is an external load balancer; we leverage the infrastructure provider to create that as part of the service deployment. The other important aspect is that when you deploy a pod that needs a persistent volume, we need the ability to provision that volume from a VMware backend through the cloud provider, and the volume has to be able to move around. A rough sketch of how the volume side looks in Kubernetes terms follows below; the load balancer side shows up later in the demo.
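To make the persistent volume interface concrete, here is a minimal sketch, assuming the stock upstream Cinder provisioner (`kubernetes.io/cinder`) that ships with Kubernetes' OpenStack cloud provider; the object names are illustrative, and this is not necessarily the exact plumbing the product uses:

```yaml
# A StorageClass backed by Cinder: claims against it are provisioned
# as Cinder volumes through the thin IaaS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard
provisioner: kubernetes.io/cinder
---
# A claim a pod can reference; the bound volume is attached to whichever
# node VM the pod lands on, and re-attached if the pod is rescheduled.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: cinder-standard
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# A pod mounting the claim.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```

The point is that the claim, not the pod, owns the volume's lifecycle, which is what lets the volume outlive any particular scheduling decision.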
That is, when a pod moves from one virtual machine to another, whenever Kubernetes reschedules the pod, we want the volume to move along with it.

Now let's double-click on the specific infrastructure providers. As Hari mentioned, we want to support multiple providers. The number-one priority is the VMware SDDC stack, we obviously support OpenStack, and we also want to move toward Azure and AWS in the future. In the SDDC provider, what we support for networking is, first, the legacy vSphere Distributed Switch, which offers very rich L2 and L3 capabilities. We also support NSX vSphere (NSX-V), which we leverage for L2, L3, load balancing as a service, and security groups, that is, micro-segmentation, for all the control plane nodes. And you might have heard about NSX-T, a brand-new, or rather enhanced, platform from the VMware networking and security business unit. It offers multi-hypervisor support, you can run both VMware ESX and KVM, it's very rich as a container networking solution, and it offers micro-segmentation at both the virtual machine level and the pod level. So we're giving you multiple choices here. One of the key takeaways is that if you're using this Kubernetes distribution, you can go with legacy networking using the distributed switch, with a full-fledged NSX vSphere, or, if you're looking for advanced container-level networking and configuration, with NSX-T.

Another aspect of the VMware SDDC stack is that it provides a very nice abstraction over storage backends in the form of VMFS, and we support NFS, iSCSI, vVols, and vSAN, all of these types, as part of the stack. The other capability in this distribution is multi-vCenter support. Say you want high availability, or you want to scale out, add more compute, and expand the Kubernetes cluster nodes: we offer multi-vCenter support so you can scale out depending on your demand.

Now, to provide this infrastructure as a service: vCenter and NSX by themselves are not multi-tenant capable, and they don't come with a default IaaS. So what we wanted to do was leverage OpenStack for that. VMware already ships VMware Integrated OpenStack, which has proven very simple to deploy and has a very nice upgrade story, and our thought was: why not leverage OpenStack for this, because it already stitches together the entire VMware stack? It takes vCenter, NSX, and vSAN, stitches them together, and provides nice multi-tenant IaaS capabilities. We wanted to take that, and the first thing we did was containerize the bare-minimum services we need for Kubernetes. We picked Keystone, Nova, Cinder, Glance, and Neutron, packaged these core services as Docker containers, and put them all in an OVA, because deploying an OVA is a very common workflow for any VMware admin. So they're packaged as containers, they run inside a single virtual machine, and that's how the thin IaaS is offered; a purely illustrative sketch of that packaging follows below. The workflow is something like: you create a provider, in this case a VMware SDDC provider, and once that's deployed you can deploy Kubernetes clusters with however many master nodes and worker nodes you want. We also work with an existing IDM.
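As a rough illustration of that packaging, and only an illustration, the product ships these services inside an OVA appliance, not as a compose file, and the image names here are hypothetical, the containerized control plane amounts to something like this:

```yaml
# Hypothetical compose-style sketch of the thin IaaS appliance: the five
# core OpenStack services, each running as a container inside one VM.
# The ports are the standard OpenStack API endpoints.
version: "2"
services:
  keystone:     # identity: projects, users, tokens
    image: thin-iaas/keystone
    ports: ["5000:5000", "35357:35357"]
  glance:       # images: the templates Kubernetes node VMs boot from
    image: thin-iaas/glance
    ports: ["9292:9292"]
  nova:         # compute: drives vCenter to create master/worker VMs
    image: thin-iaas/nova
    ports: ["8774:8774"]
  cinder:       # block storage: backs Kubernetes persistent volumes
    image: thin-iaas/cinder
    ports: ["8776:8776"]
  neutron:      # networking: drives NSX for L2/L3, LBaaS, security groups
    image: thin-iaas/neutron
    ports: ["9696:9696"]
```

Running only these five services in one VM keeps the footprint down to a single appliance while preserving the standard OpenStack APIs that the Kubernetes cloud provider already knows how to talk to.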
One of the things we leverage today is that Keystone offers multiple domains: you can have an LDAP backend or a SQL backend. We leverage that to provide the IDM capability, and we also hook it into the namespace concept in Kubernetes. That's one of the important pieces of plumbing we've done. We use Keystone for authentication and authorization, and we plumb the Kubernetes namespace concept into Keystone: a namespace is mapped to a Keystone project, so that users from one tenant cannot access pods and services in other namespaces.

This is how the overall network topology looks. We provide clear network isolation for the management plane, the control plane, and finally for the applications as well; they get their own load balancers. We use Neutron LBaaS v2 for the load balancers on both sides: on the control plane, when we deploy multiple master nodes for high availability, we put an LBaaS v2 load balancer in front of them, and on the cloud provider side we use the same thing for all the services deployed on Kubernetes. And beyond deploying Kubernetes, upgrading matters: the most important aspect of running Kubernetes in production is day-two operations. So we offer the ability to resize the cluster, add key pairs, and so on; all of these capabilities are built into the product. As I mentioned, we use Keystone for IDM, and we also plan to integrate with VMware's native IDM, VIDM. With that, we'll switch to the demo, and Mayan will take over. Thank you. Any questions before we jump to the demo? Should we take them now? Let's jump to the demo and take questions from there.

Okay, we've talked through a lot of slides, so let's see it working. As Giri said, deployment is pretty simple. It's an OVA file, a standard deployment for anybody used to working with vCenter. The steps are straightforward: select the cluster or hosts you want to work with, select the datastore, give it some static IPs and IP ranges to use, and that's pretty much all that's required. You can see here that we create the VM, and once we power it on we get the IP to log in to the solution. Once we log in, we see the two concepts we talked about: providers and clusters. Let's start by deploying a VMware SDDC provider. We offer the option to load from file, obviously, if you saved a previous deployment, or if the first attempt failed and you want to do a new one. Give it a name and a type; things are pretty straightforward on the deployment side. Select the vCenter you want to deploy on and use as a provider, select the NSX Manager you want to use to provide your networking capabilities, and that's pretty much it here as well. In this demo we're showing NSX-V. On the backend we're doing a lot of configuration, basically spinning up the thin IaaS, the thin OpenStack we've talked about, configuring it, and making sure it knows how to work with the underlying infrastructure. Once this is done, our provider is ready for deploying Kubernetes clusters on top of it.
Deploying the clusters is seamless in the sense that it doesn't really matter which provider you have underneath, whether it's an OpenStack one or an SDDC one, it's the same. You just select the provider, give the cluster a name, say how many master nodes and how many worker nodes you want, and point at the repository to use for configuration. Again, quite straightforward, quite easy to achieve. While we do this, on the backend, you can see on the left side, different things are happening: we create the right networks, we create the load balancers, we create the right VMs, everything that was asked for as part of the cluster creation, plus all the stitching that needs to happen, so that if you need a persistent volume later, it's already part of this. We're fast-forwarding here, obviously, but in a minute you can see some of the VMs being created, and at the bottom you can see all the networks and stitching being put in place. Once this is ready, give it another second, we get the Kubernetes endpoint, which, as an administrator, I pass on to my developers or my DevOps manager, depending on the organization. For the developer, this is a regular Kubernetes deployment: you can use it and manage it with the tools you already use.

Before we can do that, we need to create some users and permissions. In this demo we've chosen the SQL backend, so we need to actually create the users; we create two, Joe and Tom. If you have LDAP integration, you obviously don't need to do any of this here; LDAP takes care of all the users, and all you need is the next step: for a specific Kubernetes cluster, create namespaces and assign the users who will use them. So we create two namespaces: a Palo Alto one, with Joe assigned to it, and a Beijing one, with Tom assigned to it.

Now let's jump to the developer. This is Joe's laptop; you can see this is Joe's context, and he's using the Palo Alto namespace on the cluster we created. There are no pods and no services here yet. We'll use the guestbook YAML file, the pretty standard Kubernetes example deployment, and you can see at the bottom that one of its requirements is a load balancer for the application. So let's deploy it. As you can see, this is kubectl, the regular way to use Kubernetes. As we deploy, the backend creates the pods on the worker VMs of that cluster. We can see the pods running and the services being built, the networking coming up; you can see the frontend is still pending its external IP. What happens in the backend is that we go to NSX and ask it to create a load balancer, and once that's there, you can see the service has the load balancer as the entry point for the frontend. The fragment of the guestbook manifest that asks for it is shown below.
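For reference, here is roughly the piece of the stock upstream guestbook example that triggers all of this; it's standard Kubernetes, and the NSX wiring demonstrated above happens behind the single `type: LoadBalancer` field:

```yaml
# Frontend service from the standard Kubernetes guestbook example.
# type: LoadBalancer asks the cloud provider (here, the thin IaaS,
# which drives the NSX load balancer through Neutron LBaaS) to
# provision an external load balancer and publish its address as the
# service's external IP.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```

The external IP that eventually replaces "pending" in the demo is the virtual IP handed back by that load balancer.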
And again, this is all done automatically behind the scenes, all the configuration. You can see the 192.168.112... address there; that's the external IP for this specific environment. One more thing I want to show you here: if we jump to the other developer's context, Tom, who, if you remember, belongs to the Beijing namespace, and he tries to use the Palo Alto namespace, tries to look into the deployments and play with them, you can see that he's denied. The way we do it is by aligning namespaces with the Keystone concept of projects; once he's denied on the project, he's denied on that namespace specifically.

One more thing we'll show here is scaling up. You can see there's one master and one worker node. I ask to scale this cluster up to three worker nodes, and again, you can see on the backend, in vSphere, that the new nodes have been created. Scaling up is just as simple as that; all the networking, everything that needs to happen on the backend, happens behind the scenes. That's pretty much it. If you have questions, please use the mics here, because this is being recorded. We'll be happy to take them.

I had a question, to make sure I understand the controller, the management interface you're building. Is it based on an existing plugin, is it custom development you created, or is it open source? The provider screen, where you say add cluster, add provider? That part is going to be part of the solution we're building. It will probably change, obviously; this is the very first preliminary stage, but it will be part of the solution. It's being built as an independent service. We haven't decided whether it will ship primarily as an independent solution, but for sure it will be bundled with both our OpenStack distribution and the vRealize solution; whether it will be a separate package or not, we haven't defined yet. As for the open source part, some of the components we've already built, like the UI layer, will integrate with something called Admiral, which is our open source container management interface; that's already an open source project, and a lot of this will continue to be open sourced. Again, I want to emphasize that what we demonstrated here is primarily to support the VMware stack, which has some additional, or different, challenges than running Kubernetes on a KVM stack. That's the primary use case we're addressing, and we're using OpenStack to address those challenges. As for the plugins we use: the Cinder plugin for persistent storage, Keystone for authentication, and the NSX load balancer through Neutron. That's how we integrated OpenStack seamlessly into this project. Any other questions?

Is the appliance available, or is it still in development? It is. We actually went beta as of yesterday, so you should be able to try it out in beta sometime this week.

Can you install this on an existing VIO installation? Yes, absolutely. If you go back to the provider model... do you have the slides? How do I go back?
Yeah, so if you go back to the slide, you can see that out of the four models there are two I'd like to call your attention to. First, if you don't have a full-fledged OpenStack, including VIO, we install the thin IaaS as a single VM, with every OpenStack service we need running as a container; it just plugs in like an appliance, so you don't need to install the full-blown OpenStack. But if you have an existing OpenStack deployment: right now we have certified, or rather tested, only against our version of OpenStack, which is VIO, but technically there is really no limitation, because at the end of the day our OpenStack is as open as, or rather exposes the same interfaces as, any other OpenStack, and we don't use any exotic projects. We use only the core projects: Keystone, Nova, Cinder, Glance. So we should be able to plug into an existing OpenStack environment as well. Current testing has been done only with ours, but we don't see any reason why it should not work with any other OpenStack. Any other questions?

Is Kubernetes running on bare metal instances or on VMs? The answer is no, not on bare metal; it runs only on VMs. The way it works is that we spin up a set of VMs for the master nodes on vSphere and NSX, and, if we'd had the time we would have gone into a bit more detail on this, we use a Terraform template that talks to OpenStack to spin up the control plane. Both the control plane and the worker nodes are all spun up as OpenStack VMs; they're OpenStack payloads, and we use Terraform templates to create them.

Can you use that OpenStack just like a normal user? Do you have any stitching for an application that's running? As you can see in the demo, when you deploy, you don't really have the notion of "I'm deploying an OpenStack here." We just spin up a provider; it's all encapsulated in the scripts and everything we do behind the scenes, to reduce the amount of management required. In the thin IaaS model, there's one admin account that spins up the control planes; if you're a regular user, you don't have access to those VMs, and you can't go through Horizon or Keystone unless you're that user.

I was referring to a hybrid application, not those VMs, but other VMs that you want. Yeah, it depends; basically we're using Neutron networks, so any network you can use there, we should be able to use; that's how the product works. If you have an overlay, it's all limited to that particular tenant; you don't have access outside of it. If the question is more whether you can use the thin IaaS to deploy regular VMs, applications that mix containers and VMs, can we also deploy regular VMs the OpenStack way? Technically that's doable too, yeah, but at this point the focus is on deploying the Kubernetes control plane.

Which version of Kubernetes do you install? Right now, for the beta, we started a little before 1.6 came out, so we're using 1.5, but when we release in the next few months we'll actually switch to 1.6. Beyond that, those are things we haven't defined.
I would go back to the same model we have for OpenStack. For OpenStack, we currently support customers even on the Kilo version. The most recent version we support is... we're currently on Mitaka; we skip every other release. But we see customers running our environment who are still on Kilo and aren't rushing to upgrade. So, as Hari said, we haven't defined exactly how we're going to support different versions, but we do anticipate a similar requirement from customers: for new customers, definitely the latest and greatest; existing customers are usually already deployed, things are working, and they're not rushing to upgrade that fast. So again, it depends on customer demand, and we'll support what's needed. Any other questions? No? Out of time? All right. Thank you all for coming, and feel free to grab us offline if you have any other questions. Thank you.