I'm going to introduce, from Google, Aparna Sinha, who is the project lead for Kubernetes. We're really lucky to have her here today. She's going to give a version of what's going to be the keynote at KubeCon, a short synopsis of that for you, and a couple of demos. So we're going to go right into it. Thank you, and take it away. Thank you, Diane. I feel very fortunate. I've got slides, I think. There we go. All right. Good morning, everyone. I'm Aparna Sinha. I'm a product manager at Google for the Kubernetes project. And today, I want to, first of all, welcome all of you, and I am really delighted to talk here a little bit about Google's and Red Hat's contributions, as well as hopefully demo two of the new features that are in this release. I was hoping to do a live demo, but we will probably have to look at a recorded version. So I think with this audience, I don't really have to introduce Kubernetes. I think you all know that this is a project that started from the Borg heritage of Google. But really, even though it is based on years of experience at Google running applications and distributed systems, the goal of this project is not about Google. The goal of this project is to create a platform for the rest of the world to be able to run applications on distributed systems at high efficiency and high utilization. And this platform goal is not something that Google can achieve by itself. And that's one of the primary reasons why, from the start, Kubernetes was started as an open source project on GitHub and then donated to the Cloud Native Computing Foundation. There is a community that we have worked hard to develop and foster around this project that shares this goal with us, which is to create this common platform. And this chart shows the evolution of that community. 
So you can see, whereas in the beginning Google and Red Hat were, in fact, quite prominent in terms of the contributions and the commits that were made to the project, over time the number of independent contributors that aren't necessarily associated with any company has grown, as has the number of companies that are part of the community. And this diversity of companies and individuals is extremely important to that platform goal. If you're trying to have a project that is the platform for the rest of the world, then you need the diversity of the different environments that other users work in. And throughout all of this, Google and Red Hat have both learned from the community but also continue to play a formative role in the Kubernetes project. A few statistics point towards, or indicate, that we are actually achieving some level of success towards that platform goal. First of all, you can see there's a lot of user and customer interest in Kubernetes. And also, there's a lot of interest from contributors. You can see here on Google Trends, over the last two years, a huge increase in interest in Kubernetes, but also 30-plus commercial distributions of Kubernetes to date. These are essentially companies that have taken Kubernetes and adapted it to different environments. And that ranges everywhere from bare metal to VMware. I think we have Wendy from VMware here in the audience. We have Quinton from Huawei. So it ranges everywhere from bare metal to VMware to, of course, public and private clouds, and ranges from China with Huawei to IBM. With this variety, I think Kubernetes has a shot at achieving that goal. I'd just like to give you a flavor of the contributions and the level of depth that Red Hat and OpenShift have had in shaping the Kubernetes project. So this slide shows a current snapshot of all the special interest groups in the project. This is always changing. We are adding and collapsing groups. 
But the next slide actually shows how many of these special interest groups are led by an engineer from Red Hat. Next slide, please. Each group has multiple leaders, but as you can see, a lot of them, I think about 40%, the ones in red, have a Red Hat engineer leading that group. Leading the group means that you are helping to shape that specific area. So you're helping to shape storage and scheduling and networking and how those things work in Kubernetes. Hopefully, to those of you in the audience that are OpenShift users, this gives you a lot of comfort. Your needs and your requirements have a tremendous hand in shaping the direction of the Kubernetes project. And really, this is no different in the 1.6 release, which is launching this week. Next slide, please. So the Kubernetes 1.6 release: the major theme of this release is multi-workload, multi-team, large clusters. There are many features in this release. Large clusters is of course one of the features. But I want to emphasize two particular standout features, and those are role-based access control and storage classes, also known as dynamic storage provisioning. These two features are important because they add critical functionality that I think changes the game of what you can do with containers in production. They're also important because Google and Red Hat have had a large hand in driving these two features to stability. So RBAC is all about fine-grained access control, and it is moving to beta in this release. It moves us fundamentally from a single-user to a multi-user cluster. It's a huge step forward for the product. Storage classes and the defaults that are associated with them support stateful containerized applications. And as you know, that is also a huge step forward because it expands the market for what containers can do. For many months previously, stateful applications were not considered appropriate to put into containers. 
And what dynamic storage provisioning does in terms of automating storage is really fundamental for stateful support. So those are the two that I'm going to hopefully demo today. And of course, cool new features are nice, but I think many of you here are users of OpenShift, and many, hopefully, enterprise users as well. How many of you care about stability? Yeah, that's everybody. So 1.6 was a release where the entire community banded together and emphasized finishing features. Not launching new features, but finishing them, moving them from alpha to beta to stable. And over 20 features were graduated to either beta or stable. Next slide. Now I'm going to go into the features. So the first one is role-based access control, which is really: now that you have a large cluster or multiple large clusters, how do you schedule multiple teams into those clusters such that they don't interfere with each other, and they have the right set of permissions? Next slide. Yes. One of our founders and lead TLs, Tim Hockin, characterizes the introduction of RBAC beta this way: it's like we went from DOS, which was single user, where everyone can see everything, to UNIX, where you see only your things and there's the principle of least privilege. So it's that type of big change for us. Next slide. This is what we looked like before fine-grained RBAC. You have here a three-node or 5,000-node cluster. You have multiple pods, multiple workloads, in fact, that belong to different teams. But there isn't a good way, through the Kubernetes API, to set up authorization. And so authorization is, by default, at the cluster level. And all pods have the same authorization. It's kind of vanilla. It looks the same. We did have a mechanism called ABAC, but that is based on a static local file, whereas RBAC is truly dynamic and is through the Kubernetes API. So, next slide. With role-based access control, the picture looks something like this. You can isolate into namespaces. 
Here we are showing the workloads of the blue team in a blue namespace and the workloads of the green team in a green namespace. And what's more important is that on a per-namespace, per-resource basis, you can set which roles have what actions over what resources in what namespaces. This is actually very powerful. There are many, many use cases for this. Here's just a couple of examples. We have Alice; she is a user. Her role is user, and she can list (which is like view permissions) services; services are a type of resource, and eng and HR are namespaces here. So she can view services in the eng namespace but not in the HR namespace. You see the level of granularity. It's at the per-resource, per-namespace level for each role and each user. That's nice. And of course, with this level of granularity, there's a huge world of permutations that are enabled. So we see some other examples. Bob has more admin-type rights, so he's not just viewing, he's creating. He can create pods in one namespace, not the other. The scheduler, the third example, is actually a system role. It's not a person, it's a system role, and this role can read pods but not another resource, which is secrets. So now, let's get into the demo. Hopefully, you can play the video. I'm not sure if it's gonna be as large as I would have liked, but hopefully you can see something. So in order to do this demo, I have created a three-node cluster in Google Container Engine in Google Cloud. And yes, can you see the screen? Okay, good. It may be hard for people in the back, but okay, so now this is proceeding. In order to show this demo, you actually need three users, or you need more than one user, so I'm gonna pretend here to be multiple users. In this first tab, I am kind of the super cluster admin, and in the other two tabs, I'm going to be a green team and a blue team. 
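To make the Alice example concrete, a permission like hers can be expressed as a Role plus a RoleBinding in the beta RBAC API from this release. This is only a sketch; the namespace and object names here are illustrative, not taken from the slide:

```yaml
# Hypothetical Role: read-only access to Services in the "eng"
# namespace only. Names are illustrative.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1   # RBAC is beta in 1.6
metadata:
  namespace: eng
  name: service-viewer
rules:
- apiGroups: [""]                # "" means the core API group
  resources: ["services"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to the user alice. She gets these rights in "eng"
# and nothing anywhere else.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: eng
  name: alice-views-services
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: service-viewer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role and RoleBinding are namespaced objects, the "per resource, per namespace" granularity falls out naturally: granting Alice the same view in HR would require a second binding in that namespace.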
The first thing that I'm doing here, as the super cluster admin, is to create a service account for the blue team and fetch those credentials into a local file, and the same thing for the green team. Now I'm going to the blue team tab, and I'm going to configure kubectl to use the credentials that I just created for the blue team. This is all set up, so basically, I'm setting up the blue team in the blue tab, and then I'm gonna do the same thing for the green team. So the green team is also going to go ahead and get credentials. And actually, could you pause the demo for a second? I think we have moved ahead. So, yes, here. Let me create a namespace for the blue team. I've gone ahead and created, as a cluster admin, a namespace for the blue team. But if I go to the blue team at this point, I haven't given them access, which is why you saw the error where the blue team wasn't actually able to access the namespace. So, now I'm gonna give the blue team access to the namespace. This is actually showing you RBAC. So, one of the defining things in RBAC is this concept of cluster roles, and these are some of the default user roles: admin, edit, view. There are also system roles, which I'm not showing; I've hidden the system roles. Let's look at what the cluster role admin can do, and there are many things here. This is just looking at a subset of them, but the admin cluster role has granular permissions over many resources and sub-resources, and these verbs, create, delete, list, watch, are some of the things that the admin can do. So, now that we know what this role is, what we want to do is to create a role binding. We want to create a role binding to the blue namespace for the blue developer, the blue service account. So, hopefully this will move forward. Yes, this is what a role binding looks like. 
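The binding created in the demo has roughly this shape. This is a sketch, assuming a service account named blue-team-developer and a namespace named blue; the exact names on screen may differ:

```yaml
# Grant the blue team's service account the built-in "admin"
# ClusterRole, scoped to the "blue" namespace by using a RoleBinding.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: blue-team-admin
  namespace: blue        # the binding lives in, and applies to, this namespace
subjects:
- kind: ServiceAccount
  name: blue-team-developer
  namespace: blue
roleRef:
  kind: ClusterRole      # referencing a ClusterRole from a RoleBinding
  name: admin            # limits its permissions to this one namespace
  apiGroup: rbac.authorization.k8s.io
```

The key design point is that a RoleBinding can reference a cluster-wide role definition while still confining it to a single namespace, so the same default roles (admin, edit, view) can be reused per team.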
So, this is a role binding, and what this role binding says is: for the blue namespace, I would like the user blue team developer, which is a service account for this example, to have the admin role for that namespace. So, that means everything you saw above, the blue developer can do: create, delete, watch, et cetera, for the resources in the blue namespace only. Now, we're going to create this role binding object. It's been created. Let's go to the blue service account, the blue team, and see: previously he wasn't able to access the namespace, now he is. kubectl get pods for the blue namespace, and we don't get the error. Of course, there are no resources yet, so let's go ahead and create some resources in the blue namespace. We're going to create an nginx deployment. Nginx deployment, now let's see if that's been created, get pods, yes, it's running. So, the blue developer has access and has execution permissions in this namespace, okay? There are no services, but this is all working as intended. Let's now do the same thing for the green user. We're going to create a green namespace, and we are going to create a role binding for the green user, exactly as we did. Now, this is for the green namespace; the green team developer is going to have admin permissions, just like the blue one did, except only in the green namespace. Okay, so we'll go ahead and create the green binding. And let's see, I think, let's go and see if the green user, yes, and the green user is able to get pods. There are no pods, so let's create an nginx deployment here. Actually, I think we're going to check first that he doesn't have access to the blue namespace, that's right. So, you see that the green user has access to the green namespace, but not to the blue namespace. This is what we want, right? So, this is great. Let's run this forward and, of course, create an nginx deployment, see services, there are no services, so this is working as intended. 
The last thing that I want to show you is cross-namespace permissions. So now, let's say the green user wants to be able to monitor the blue namespace, all the resources in the blue namespace, but not to change them. Actually, going back to the blue namespace, I'm just showing that the blue user does not have any permissions in the green namespace, so he cannot get pods, cannot get services. The blue user should not; we did not set that up, right? This is working. But we want to give the green user read access to the blue namespace, read not write. And so, I'm going to show how to do that. Of course the blue user cannot do that, and the green user cannot do that; only the admin, the super user here, can do that, because that person can see all the namespaces. So you can see, as the admin, hopefully you can see this, the admin can see the blue nginx and the green nginx deployments, as well as a bunch of system deployments that are on. Now, I'm going to create a role binding for the green user to have view permissions in the blue namespace. So here you see this role binding: namespace blue, I want view permissions for the green team developer in the blue namespace. And of course, there are many permutations you can do; this is just what we want to demo here. Let's create this binding, and we'll go back to the green namespace and we will show, let's see what we can do, and yes. So now the green user can get deployments in the blue namespace, so she or he can see those. Let's try and delete this, let's try and delete something in the blue namespace. Okay, there we go, namespace blue, delete deployments, and as expected, we don't have write permissions, cannot delete anything. So this is great, this is exactly what we wanted to show, and that concludes the RBAC demo. If we can switch back to my slides, please. Thank you. I'm not sure if I'm running over time, please let me know, Diane. 
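The cross-namespace binding in this last step can be sketched like so, again with illustrative names: the green team's service account gets the built-in read-only view ClusterRole, but the binding lives in the blue namespace, so the rights apply only there:

```yaml
# Give the green team read-only visibility into the blue namespace.
# The subject is from the "green" namespace; the binding (and hence
# the granted permissions) is scoped to the "blue" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: green-views-blue
  namespace: blue
subjects:
- kind: ServiceAccount
  name: green-team-developer
  namespace: green
roleRef:
  kind: ClusterRole
  name: view             # built-in role: get/list/watch, no writes
  apiGroup: rbac.authorization.k8s.io
```

This is why the green user's get deployments succeeds in blue while delete deployments is denied: the view role contains no create, update, or delete verbs.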
All right, great. So that is role-based access control. I think for enterprise deployments, where you want to have multiple teams and multiple workloads, this should prove very valuable. It is not yet on by default; it is available, though, as beta, and in future releases it will become the default. Next slide. So that was the demo, next slide. Okay, the other feature I said I would talk about is dynamic storage provisioning, which enables and is the backing for stateful workloads. Let's see, I think I will present a couple of slides and then move to the demo for this as well. Just quickly, I want to explain what is happening with dynamic storage provisioning. So in dynamic storage provisioning, and actually even in non-dynamic, static storage provisioning, the idea is that there is a cluster admin who creates Kubernetes' view of storage. Kubernetes' view of storage is the persistent volume object, which says: okay, I have storage of X type in Y cloud, it's so many gigs, and this is what Kubernetes should do with it after my claim to this storage is gone, either recycle it or keep it around. And then for the user, we want to isolate the pod from the actual details of the storage so that the pod is not specific and is actually portable across deployments. And so we have created this concept of a persistent volume claim. The claim is a request; it's a request for resources that says I want X amount of storage, of a particular type in case storage class types are defined. And so the PVC, the persistent volume claim, when it gets created, binds to any available persistent volume that meets its request. It's a claim out there saying I need five gigs. If there's any volume out there that has five gigs available, I want to bind to that. And once that binding takes place, it's consistent; it stays there. A pod can associate with the claim, but the pod is ephemeral. It goes away or it moves between nodes. 
The persistent volume claim stays bound to the volume and keeps the data for that pod in that volume, so that when the pod comes back, it again associates with the claim and it has access to the same volume. This is extremely important for stateful applications. So that's the main mechanism. You can go through the next couple of slides. I just show: here's a pod. The pod is associated with the claim, and you can delete the pod and you can bring the pod back, and everything is there as before. What dynamic storage provisioning changes is this: in the previous slide, what we were looking at is that the storage already exists. It's out there. The claim comes along and it binds to whatever's available. This is wasteful, because someone has to provision that storage in advance and the storage has to sit there. That's not what we want if we want efficiency. Dynamic storage enables the concept of abstract storage classes, so the cluster admin can still say, yes, I have a storage class that is an SSD or a standard disk or whatever, but it doesn't actually need to be provisioned until the pod and the pod's claim, the persistent volume claim, is created. So that's the essential gist of dynamic storage provisioning, and if we have time we can do the demo. Okay, so this is the demo of dynamic storage provisioning. Again, this is the three-node cluster in Google Cloud, so you are seeing the local disk that is attached to each of the nodes. We haven't created any additional disks. First I'm going to show you the manual method, so I'm gonna create a disk here. I'm asking Google Cloud to create a disk of size 10 gigabytes; it's a standard disk, and I'm gonna call it manual disk one. Okay, now I see that this manual disk one has been created in us-central1-a with 10 gigabytes as requested. The old, old way, which is a bad practice, is to inline this storage in the pod manifest. So hopefully you can see the screen. Here's the pod manifest. 
I've said that the disk that I wanna attach and mount here is the GCE persistent disk. Its name is manual disk one, and its file system is ext4. This is very, very specific, right? This pod manifest cannot go anywhere, and it can only use that disk. So this is bad. What we want the pod manifest to look like is actually very independent from the details of the storage. So here it's gonna reference a persistent volume claim. It just has the name of the claim. It doesn't even say what the claim is. Doesn't say how much. Doesn't say what type of disk, nothing. And let's look at the claim. So this is now a very portable pod manifest. It can come and go from cloud to cloud. It can come and go from time to time. Hopefully I will show you the manifest for the PVC. Yes, so here's the PVC manifest, and this manifest is also fairly generic. It just says that I want five gigs of storage, and it can declare a storage class. I'm gonna come back to explain storage classes; it's a concept in dynamic storage provisioning. But here we've given the empty string, which means that I don't wanna use a storage class. I don't want any storage class. Just give me any five gigs that are available. That's what the claim is saying. Okay, I think we're gonna create this claim. Sorry, I should probably have recorded it faster. Oh, okay, yes. And I wanna show you the persistent volume itself. So I have to create the claim, right? But now I also need to create the volume, because this is manual provisioning. And the persistent volume manifest is where all of the details are. There I say that it's a five-gig storage. It is actually a GCE persistent disk. Here's the name and the reclaim policy. So this is Kubernetes' view of that 10-gig disk. I'm saying Kubernetes can use five gigs of that 10-gig disk, and please delete it after you're done. You could also set the reclaim policy to recycle or retain the disk. But I'm gonna delete it for easy cleanup. So this is manual provisioning. 
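The two manual-provisioning objects described here might look roughly like this. A sketch with illustrative names; the disk name and sizes follow the demo, and the empty storage class string is what keeps the claim from triggering dynamic provisioning:

```yaml
# Admin side: describe the pre-created GCE disk as a PersistentVolume,
# Kubernetes' view of the storage, including the reclaim policy.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete   # could also be Recycle or Retain
  gcePersistentDisk:
    pdName: manual-disk-1                 # the disk created earlier in the cloud
    fsType: ext4
---
# User side: a portable claim. It names no disk and no cloud; it just
# asks for five gigs, and binds to any available PV that satisfies it.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  storageClassName: ""    # empty string: do not use any storage class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```

A pod then mounts the claim by name only (a `persistentVolumeClaim: { claimName: my-claim }` volume), which is what makes the pod manifest portable across environments.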
I have previously provisioned the disk. Then I've told Kubernetes about the persistent volume. Then I've created the claim. It's a portable pod. That's nice, but it's still very manual. Okay, now I've gone ahead and created the persistent volume. So the volume is created. You can see that it's created and it's available. Status is available. Now I'm gonna create the PVC, the claim, and I'm gonna bind it, or Kubernetes is automatically going to bind it, right? So the PVC came along and it said I want five gigs. Oh, happens to be a volume already. Five gig volume, let me bind. So now the status of the persistent volume has changed from available to bound. And that completes the manual storage provisioning. What I want to show you is how easy it is to do dynamic storage provisioning. So I think the next step is I'm gonna clean this up. Yep, delete the manual PVC and that should delete everything. And I'm gonna show you that it's deleted. Yep, it should delete the PVC and the PV and also delete the disk. So now there is no disk. With dynamic storage provisioning, like I said, I don't have to pre-provision the storage. All I need to do is create the PVC, the claim. But let me first tell you the concept of storage classes. So the storage admin can still come in and declare that there are multiple types of storage available without provisioning them. Here, the admin has created a fast storage class, which is an SSD. And when we do get storage classes, we see that the fast class is available, as well as the default storage class in 1.6 for Google Cloud is a standard disk. So those two storage classes are available, but no storage has been created. Admin has said what type of storage is available, but nothing has been created. Now the persistent volume claim comes along and it says, yeah, I wanna use that fast class, whatever it is that my admin said. I want 10 gigs of it. And when I create this claim, you will see that everything happens automatically. 
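The two objects in the dynamic flow, the admin's storage class and the user's claim, can be sketched like this. Names are illustrative; the provisioner and type values are the standard GCE ones for this release:

```yaml
# Admin side: declare that a "fast" class of storage exists, backed by
# SSD. Nothing is provisioned yet; this is only a template.
kind: StorageClass
apiVersion: storage.k8s.io/v1      # StorageClass is stable in 1.6
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# User side: a claim that names the class. Creating it causes a 10 GB
# SSD disk and its PersistentVolume to be provisioned automatically.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fast-claim
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

Nothing exists until the claim is created, which is exactly the "no wastage" property: storage is provisioned on demand, sized to the request, and cleaned up according to the class's reclaim policy.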
So I don't need to create a persistent volume. I don't need to provision the storage myself. The PV has been created. You see this PV with this long numerical name, it has the right delete policy, and it's been bound to the PVC. And if we look at the disks, we get the disks, and you'll see that the storage has also been provisioned. So this is automation. Nice. No wastage. And then I think I go on and show you the default storage class and how defaults work, but for the sake of time, we can skip that. So this is showing dynamic storage provisioning and the automation enabled there. Again, there's a separation of roles, and if the admin wants, they can set policies around the storage classes, but there's no waste associated with it. We can go back to the slides, please. So in 1.6, in this release that's coming out this week, dynamic storage provisioning has moved to stable. So it is fully ready for consumption in enterprises. Sorry, previous slide. Yes, and there are a number of sensible defaults that have been set for the different cloud providers. These are some of them. So we saw in Google Cloud, it's a GCE PD. In Amazon, it's an EBS volume, and so forth for OpenStack and others. There are also a number of other storage features. This has been a big release for moving storage forward. So there's support for user-written and user-run dynamic PV provisioners, which is very nice, as well as a number of third-party plugins that have made it into the release. That's it for storage. I just have one more slide on the future of the project and where we are going. So I think, again, we want to be the platform. We're trying to build a platform for the rest of the world to run distributed system applications. And that requires multi-workload, multi-team, efficient scheduling in large clusters or multiple clusters. Some of the roadmap here is around security: we're gonna make RBAC the default. 
There's also network policy, which allows pods to say: okay, I have access to this network or this part of the network, but I am not available to take requests from any other part of the network. So that's network policy. We will continue adding more features to stateful application support; upgrading stateful applications without downtime is on the roadmap. Also, GPU support is extremely important for those of you running machine learning, and there are quite a few running machine learning, including TensorFlow and other types of frameworks. So that's coming soon. In fact, there is an alpha implementation of multiple-GPU support in the 1.6 release. And then I mentioned multi-workload scheduling. There are several features in this release for custom scheduling and advanced scheduling, but we will continue to move forward on that work to make it efficient to schedule multiple different types of workloads in a cluster. In terms of extensibility, there's alpha work in this release on separating out the different cloud providers and making each of those more powerful. And also, the container runtime interface, the CRI, for Docker is beta in this release, and going forward we will be adding support for many other runtimes. That just provides flexibility to our users. And then lastly, service catalog: SIG Service Catalog and the work that they're doing there enables Kubernetes to consume services outside of Kubernetes through a service catalog. It uses the Open Service Broker API, which has a heritage in the Cloud Foundry Foundation. That was actually made available last release, I think, and there's a longer roadmap there. So with that, next slide. I just wanted to thank everybody and welcome you to Berlin and encourage you to try out 1.6, which should be coming out later today or Tuesday, Pacific time. Thank you. It's incredibly smooth. Thank you.