Hello, everyone. Thanks for coming today. My name is Bich Le. I'm a co-founder and chief architect at Platform9. Today, I'd love to tell you about Platform9's SaaS-managed OpenStack and Kubernetes. This is a remote-controlled cloud management product. From yesterday's keynote, you may have heard this emerging term, remote-controlled OpenStack, and you may have wondered what it really means. We're going to dive deep into that.

Before we begin, I'd like to give a quick intro to the company. We were founded in 2013. Most of us came from a virtualization background, so we have experience with virtualization and cloud management software. We're best known for our managed OpenStack product, and more recently we've added support for Kubernetes as well. We're also the creators of Fission. You may have heard of Fission. It's an open source project that brings AWS Lambda-like functions as a service to Kubernetes, so it allows you to write functions as a service in a very portable way.

Today, I want to talk about the value proposition of SaaS-managed, remote-controlled OpenStack and Kubernetes: what is it, and why is it beneficial for customers? Then we'll dive into a demo, and I hope to spend most of the time showing you the features interactively.

OK, so to summarize briefly, what Platform9 allows you to do is deploy OpenStack or Kubernetes on the hardware of your choice. This can be your private machines behind your firewall, or it can be your cloud accounts such as AWS and Google, or both, and you can manage everything in a seamless way. We offer this as a service with a strong SLA, and what that means is we run, operate, maintain, and upgrade the cloud for you. So that's what Platform9 is all about. Now, let's talk a little bit about the delivery model. We do this in a remote-controlled, SaaS-managed way.
What this means is, if you think of OpenStack for example, we've taken all the control plane elements, such as the Nova API, the scheduler, and all the controllers, and we're hosting them in our public cloud. That makes it easier for us to operate and maintain the control plane, keep it healthy, upgrade it, and hide the complexity of those details from you. The data plane elements stay in your data center or in your cloud, so your data and your workloads stay with you. That is why we call this a remote-controlled management model.

To give you an illustration of how it works, when you deploy Platform9 on, for example, your bare-metal machines, you install a Platform9 host agent, shown here at the bottom, on every machine. It's completely zero-configuration, so it's just like a package: you install it, it runs, there's nothing to edit. It will connect automatically to the per-customer deployment unit that runs in the cloud. This is the control plane I was talking about, the part that we manage. Then, using our portal, you can authorize these nodes and specify what role each node will play. For example, you can say this node is going to be a hypervisor for compute, or that node is going to be a Cinder volume node. Depending on your choices, we will then deploy these yellow boxes. At the bottom, the yellow boxes are the data plane elements I was talking about, such as nova-compute for Nova. And in the cloud controller, we will deploy the server-side components, such as the Nova controller, the API, and so forth. Once this is all done, we manage and monitor all of that using our Platform9 Ops platform, which is our automation and our support team.

This is what the picture looks like for OpenStack. For Kubernetes, it looks very similar. We deploy master nodes and worker nodes.
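Before moving on, here is a rough sketch of the register-then-authorize flow I just described. To be clear, this is purely illustrative: none of these class or method names come from the actual Platform9 agent, and the real agent talks to the control plane over the network rather than in-process. It just shows the shape of the flow, where a zero-configuration agent phones home and later receives its role.

```python
import uuid

class ControlPlane:
    """Stand-in for the per-customer deployment unit hosted in the cloud (hypothetical)."""
    def __init__(self):
        self.nodes = {}  # host_id -> node record with an assigned role (None until authorized)

    def register(self, host_id, hostname):
        # A new agent phones home; the node appears in the portal as pending.
        self.nodes.setdefault(host_id, {"hostname": hostname, "role": None})

    def authorize(self, host_id, role):
        # The operator authorizes the node in the portal and picks its role,
        # e.g. "hypervisor" (nova-compute) or "block-storage" (cinder-volume).
        self.nodes[host_id]["role"] = role

class HostAgent:
    """Stand-in for the zero-configuration host agent installed on each machine."""
    def __init__(self, hostname, control_plane):
        self.host_id = str(uuid.uuid4())
        self.control_plane = control_plane
        control_plane.register(self.host_id, hostname)  # connects automatically, nothing to edit

    def poll_role(self):
        # In reality the agent would poll for its role and then install the
        # matching data plane components (the "yellow boxes" on the diagram).
        return self.control_plane.nodes[self.host_id]["role"]

cp = ControlPlane()
agent = HostAgent("rack1-node7", cp)
print(agent.poll_role())                  # None: registered but not yet authorized
cp.authorize(agent.host_id, "hypervisor")
print(agent.poll_role())                  # hypervisor
```

Again, treat this only as a mental model of the agent lifecycle, not as the product's API.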
We also put a cluster manager in the control plane, so that you can create, delete, and scale clusters at will.

OK, so what are the benefits of this SaaS-managed, remote-controlled model of delivery and management? First of all, since it's a SaaS model, it's very easy for a customer to just sign up for an account, log in, and start being productive and creating clouds very quickly. This is the promise of SaaS. More importantly, since everything (the control plane, Keystone, all the metadata) is in the cloud, it's very easy for us to give you single sign-on with a single account and let you manage multiple regions across the globe. For example, you can have an East Coast OpenStack region and a West Coast region, log in just once, and manage everything from a single UI.

We also support multiple stacks. As I said earlier, out of the box you get access to both OpenStack and Kubernetes; we don't separate the two. This really helps organizations that have virtualized workloads and also want to get started with containers. As I mentioned earlier, you get to choose the infrastructure, whether it's your bare-metal machines or your cloud accounts, such as AWS and Google. This is also how we enable hybrid cloud scenarios, where you can combine capacity that is on-prem with capacity in the cloud, and I hope to give you a demo of that in a few moments.

We do seamless upgrades. And since this is a service, we provide an SLA. We monitor your cloud; when things go wrong, our automation system detects it, and whenever we can, we will automatically fix it. If it's a problem with your hardware, we will send you a notification through email and work with you to resolve the issue.

Let me go to the demo. This is an example of our OpenStack and Kubernetes dashboard.
One of the first things you can do, and I mentioned this earlier, is switch regions: since I'm already logged in, I can very easily switch to a different region. Here we have multiple regions, some running KVM and others running vSphere. We also have an AWS region. The reason we have an AWS region is that Platform9 also allows you to use the OpenStack API to manage resources on AWS. This is something we call Omni, an open source project that we started and are contributing to. Omni was announced at the Barcelona summit last year.

I also mentioned that you get both Kubernetes and OpenStack. In this example, you can use the stack switcher to switch between the Kubernetes and OpenStack views. And when you look at the dashboard, you see a combined view of your capacity as it's being used by both Kubernetes and OpenStack. We aggregate all the information about how much compute, storage, network, and memory you use. This shows you more details about the Kubernetes view. We show you clusters running on different stacks. Right now, I have a cluster here running on bare-metal machines, and I also have one being provisioned on AWS. Later on, I'm going to show you how it looks on Google Cloud.

OK, so let me move into a demo of something I referred to earlier, which is hybrid cloud scenarios. Since we have this global control plane that can manage clusters and capacity across different regions, cloud providers, and cloud stacks, it enables very interesting scenarios. Many of you may have heard of the term cloud bursting. This is something we sometimes like to joke about, because we've heard about it for many years, but I haven't actually seen a compelling demo of how you can do it. So today, I'd like to show you what cloud bursting might look like when using Platform9. The demo I'm going to show is a CI/CD type of workload.
Imagine you have many, many developers pushing commits to a GitHub repo. We have set up a webhook in GitHub so that whenever there's a commit, it fires off a message, and that message goes into a queue, your queue of jobs. On the other side, you have worker nodes, or worker pods, which pull messages off the queue and launch a build-and-test job.

The graph here is showing the depth of the queue. What you want is for the depth of the queue to be either zero or stable. You don't want it to grow quickly, because if it does, that means you don't have enough capacity to satisfy the incoming requests.

These worker pods are currently running on a local Kubernetes cluster in our data center. The gauge at the bottom is showing that, on average, we're achieving about 10 builds per minute right now. Let me increase the commit rate to something like 18. This view shows the CPU usage on our local nodes; the local cluster is using the bottom two nodes, and you can see that their CPU is completely pegged at 100%. Since I've increased the commit rate, the queue-depth graph is slowly rising. This is bad news: we're not keeping up with the load that is coming in.

Now, the local cluster belongs to something called a federation. Kubernetes has a feature called federation, which allows you to take multiple clusters across regions and combine them to work together. Right now, the local cluster belongs to a federation called Fed 3, and it's the only cluster in that federation, and it's running out of capacity. So I'm going to take advantage of another Kubernetes cluster that I have running on Google Cloud. To add capacity, the first thing I do is add a cloud provider, which describes my Google account. Let me refresh the clusters list.
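While that refreshes, the webhook-to-queue-to-worker loop I described a moment ago can be sketched in a few lines. This is only illustrative: the names (`run_build`, `jobs`) and the use of Python's in-process queue and threads are my own stand-ins, since the real demo uses a message queue fed by a GitHub webhook and Kubernetes pods as workers. But it shows the essential shape: workers pull commits off a shared queue, and a healthy queue drains back to zero.

```python
import queue
import threading

def run_build(commit_sha):
    # Placeholder for the real build-and-test job (clone, build, run tests).
    return f"built {commit_sha}"

def worker(jobs, results):
    # Each worker pod runs a loop like this: pull a commit off the shared
    # queue, run the build, repeat until the queue is drained.
    while True:
        try:
            commit_sha = jobs.get_nowait()
        except queue.Empty:
            return
        results.append(run_build(commit_sha))
        jobs.task_done()

# The GitHub webhook would enqueue one message per commit; here we fake a burst.
jobs = queue.Queue()
for i in range(18):
    jobs.put(f"commit-{i}")

results = []
# Two "pods" sharing the queue, like the two pegged local nodes in the demo.
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))   # 18: every queued job was processed
print(jobs.qsize())   # 0: the queue drained back to zero
```

The point of the demo is what happens when the arrival rate exceeds what the workers can drain: the queue depth climbs, which is the signal to add capacity.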
Now you can see that we've instantly discovered new Google clusters running on GKE, Google Container Engine. The next thing I'm going to do: I know that cluster 1 from Google has a lot of spare capacity, so I'm going to add it to my federation. I pick Fed 3 and attach cluster 1. You now see that our federation has two clusters.

This command line interface shows us the pods running on the local cluster. We started with six builder pods, but since the federation now has two clusters, Kubernetes is going to move half of those workloads to the new cluster, so it's terminating three pods. And if you look at the bottom gauge, you'll notice that our builds per minute just shot way up. Did you notice that? Kubernetes is now distributing the load across the two clusters by moving half of the pods to the Google Container Engine cluster. With this, we've added capacity to our cloud, and hopefully the queue-depth graph will converge toward zero. Sometimes it takes a little longer. OK, so this is good: the graph has stabilized, and as expected, it will slowly decline until it reaches zero.

So in summary, SaaS-managed, remote-controlled Platform9 cloud management allows you to very easily manage multiple clouds, clusters, regions, and even different cloud stacks across many geographical regions, and to do it in a very seamless way. It enables very interesting hybrid cloud scenarios. What I've demonstrated is dynamic capacity expansion, where you can augment capacity that is on-prem with capacity that runs on Google or AWS. I hope this gave you a good idea of the capabilities of the Platform9 product. I will be available after the talk if you have any questions. Thank you very much for your attention.