Check, check. Can everyone hear me? Okay, great. My name is Madhura Maskasky, and I'm one of the co-founders and VP of Product at Platform9 Systems. For those of you who are not familiar with Platform9, we're an enterprise tech startup, founded in 2013, and we've pioneered a pretty unique model around the deployment of OpenStack and Kubernetes, where we deploy either OpenStack or Kubernetes, or both, as a SaaS service. Some quick stats on Platform9: since being founded in 2013, we've had global traction with customers, not just in the United States, where we're located, but across the globe in Europe and parts of Asia. And we've been recognized with a number of awards for identifying a unique model around SaaS-managed infrastructure, including, most recently, the Gartner Cool Vendor award for 2016. This slide is a quick pictorial of what Platform9 does: our expertise is in taking leading open source frameworks such as OpenStack and Kubernetes and transforming them into a software-as-a-service model. And what does that mean? It means a private or hybrid cloud deployment for your infrastructure. If you're a Platform9 customer, you give us just a bunch of Linux servers; they could be coming from your private data center or a public cloud. You start by deploying a lightweight agent or a virtual appliance on that private infrastructure, and in a matter of minutes you transform what you have into a fully managed OpenStack or Kubernetes cloud environment. Once that happens, we provide a fully managed service on top of that. It means we monitor and manage your infrastructure 24/7, we provide you with alerts, and we create tickets on your behalf when we identify problems or issues with your infrastructure or with the software stack that's deployed, such as OpenStack or Kubernetes.
And if it's a software issue, we will heal it or fix it automatically behind the scenes, many times before or without your ops teams or internal teams even recognizing that there's been a problem. So it's a 100% SaaS-managed solution. We perform completely zero-touch updates and upgrades to, again, both OpenStack and Kubernetes, so that your teams don't have to invest the OpEx cycles in doing that. The topic of conversation for us today is using OpenStack to control AWS, or building a unique kind of hybrid model using OpenStack. To set that up, I have a couple of interesting data points for you. This first one was one of the questions asked as part of the most recent OpenStack user survey. The survey ran about a month ago; the OpenStack Foundation runs a pretty popular user survey about twice a year, and this is one of the questions that gets asked every time. The question goes: what are your top business drivers for OpenStack? Or, why do organizations choose OpenStack? About 92% of respondents in the most recent survey said that this is one of the top reasons why they choose OpenStack, and I'm going to read it out loud for you: "Standardize on the same open platform and APIs that power a global network of private and public clouds." The other data point I have is from the RightScale State of the Cloud report. RightScale, again, runs a pretty popular survey about once a year or so to get the pulse of enterprise cloud infrastructure consumption. The most recent survey had this interesting stat in it: about 82% of enterprises have a multi-cloud strategy of some form that they're implementing today. It also says that hybrid cloud usage has increased to 71% this cycle, compared to the previous cycle where the usage was somewhere around 58%. What all this data leads to is something pretty obvious.
It's that multi-cloud has become a necessary evil today. And I call it evil because it does introduce a certain non-trivial amount of complexity as part of your cloud infrastructure management strategy. If not done right, it can also result in vendor lock-in, which means that you're restricted to working with a particular vendor, say Amazon or Azure, and you're restricted to the pricing model or the data gravity that they provide. So the proposal that we've had, or the experiment that we've been doing, and we made announcements around this earlier today as part of the OpenStack Summit keynote, is this: what if we could use OpenStack to manage not just your private cloud infrastructure or endpoints, but also the most popular public clouds of the world? In other words, transforming the OpenStack API layer into that single, unifying interface to manage not just private but also public infrastructure. And the benefits of this model are multiple. For one, it takes what is today an open standard, which is OpenStack, and transforms it into an API layer that you can now use for your entire cloud management. It lets you reuse the tooling that has been built for OpenStack today, such as the OpenStack CLIs or the whole set of open-source libraries used in conjunction with OpenStack, to manage all of your cloud infrastructure. And you don't need to create and manage independent silos where OpenStack is on one end and your favorite public cloud is on the other end, et cetera. For end users and administrators, it gives them one standardized, unified platform, with the additional benefit of unified multi-tenancy, because they can now use the OpenStack Keystone layer to manage multi-tenancy not just for their private cloud deployments but also for Amazon AWS or Azure or GCE, et cetera.
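The "single unifying API" idea can be made concrete with a small sketch. This is purely illustrative (the backend names and translator functions below are invented, not Omni's actual code): the caller issues the same OpenStack-shaped request no matter which cloud sits behind it, and a per-backend translator handles the provider-specific details.

```python
# Illustrative sketch only: one OpenStack-style request, many backends.
# The translators below are invented for this example.

def to_private_request(req):
    # A private OpenStack endpoint receives the request natively.
    return {"endpoint": "nova", "action": req["action"]}

def to_aws_request(req):
    # A public cloud backend translates the verb into its own API call.
    return {"endpoint": "ec2", "action": {"boot": "RunInstances"}[req["action"]]}

BACKENDS = {
    "private-openstack": to_private_request,
    "aws": to_aws_request,
}

def dispatch(cloud, request):
    """Route one OpenStack-style request to the right backend translator."""
    return BACKENDS[cloud](request)

req = {"action": "boot"}
print(dispatch("private-openstack", req))
print(dispatch("aws", req))
```

The point is that the caller's side of the interface never changes; only the translation layer does.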
So you can integrate your single sign-on or other authentication and authorization policies as part of that process, and they now extend seamlessly across private and public. And here's a typical big pain point from the administrators' perspective when they're helping their end users consume public cloud: quota management. Public cloud is all about pay-as-you-go and self-service, and before you know it, your public cloud budgets are going through the roof, and IT only finds out towards the end, or many times they don't find out about it at all, because these get expensed through employees' expense reports. If you can extend OpenStack's Keystone-based quota management across not just private but also public cloud endpoints, it instantly gives you the benefit of unified quota and policy across all your cloud endpoints. So, lots of great benefits. But the most important benefit from our perspective, really, is that it gives DevOps folks and developers one single API to rule them all: one API to standardize on, one API to abstract away all the underlying integration details of the various different cloud providers. It gives you the choice of consuming the right cloud platform that fits your business needs and your strategy, without having to worry about the excessive tooling that you're going to have to build around it. So with that, and we made the announcements around this earlier today: introducing OpenStack Omni. Omni stands for everywhere, and our hope is that this will make OpenStack literally go everywhere there is cloud infrastructure. It will let the OpenStack community standardize on OpenStack as that single cloud management API layer. This project is fully open sourced; it's available at github.com/platform9/openstack-omni. And work is underway to convert this into blueprints for OpenStack.
Our goal is to make this into a core set of drivers that OpenStack provides or starts shipping with. What's been built behind the scenes: we started by building a set of key drivers to make the integration between OpenStack and AWS possible. We've built drivers for Nova to control EC2, Neutron for VPC and networking management, and Cinder drivers for Elastic Block Store. And then, finally, what we got as part of this is Keystone for authentication, RBAC, and policy and quota management. The beauty of this model, really, is that when you start by building the core drivers for the key OpenStack services or components, the peripheral services just start working out of the box. So the next thing we did after having these drivers was to say, let's try OpenStack Heat, or let's try OpenStack Murano, and see if it just works. And it did, because Heat relies on all the core OpenStack services to provide the resources that Heat consumes. What that gives you is that you can now use Heat as the orchestration layer across your different cloud endpoints. As for next steps: we've started by building the drivers just for Amazon AWS. What we would like the community, including ourselves, to do is extend that work and provide drivers for the other most popular public clouds, such as Azure and GCE, et cetera. And then we would also like to continue doing some additional interesting work to make OpenStack that single hybrid cloud management platform, like extending Keystone so that you can do cloud bursting. You could specify policies that say: consume the private cloud endpoint until it reaches, say, about 80% capacity, at which point, if you're still getting new workloads, start bursting to a public cloud, and this is my public cloud of choice; but then scale back when your workloads start scaling back as well. That really starts making OpenStack powerful.
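The bursting policy just described can be sketched in a few lines. This is a hypothetical illustration of the placement logic, not code from Omni; the threshold, endpoint names, and function are all invented for the example:

```python
# Illustrative sketch of the cloud-bursting policy: place workloads on the
# private endpoint until it reaches a capacity threshold (80% here), then
# burst new workloads to the chosen public cloud. When private utilization
# drops back below the threshold, placement naturally returns to private.

BURST_THRESHOLD = 0.80

def place_workload(private_used, private_capacity, public_cloud="aws"):
    """Pick an endpoint for the next workload based on private utilization."""
    utilization = private_used / private_capacity
    if utilization < BURST_THRESHOLD:
        return "private"
    return public_cloud

print(place_workload(60, 100))   # private cloud still has headroom
print(place_workload(85, 100))   # over threshold, burst to public
```

A real policy engine would also weigh per-tenant quotas and workload affinity, but the decision point is the same.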
At that point, you do not need to invest in a separate proprietary stack for your hybrid cloud management. So with that, I want to switch gears and give you a live demo of OpenStack Omni. What we have here is an OpenStack deployment. It's a completely vanilla, empty OpenStack environment, and it's configured with the set of drivers that we just spoke about: it has the Nova, Glance, Cinder and Neutron drivers, so that it can manage and control this AWS environment. As you can see, the AWS environment is completely empty as well. We have the EC2 dashboard and the VPC console, and they're both showing that there are really no instances and no data created so far. So we're going to start by creating some network objects as our first order of business. Let's switch to the network topology view. I pre-created an external network to start with, and the way this maps to Amazon AWS, as part of the Neutron driver, is that it really maps to a block of Elastic IP addresses on the AWS side. So every time you bridge an interface to the external network and a VM tries to acquire a floating IP, behind the scenes the request will be routed to Amazon and an Elastic IP will be fetched and provided for that request. Okay, so let's start by creating a tenant network. Let's call it tenant net one. Let's also create a subnet, tenant subnet. I'm going to give it an internal block of IP addresses, an internal CIDR. Then I need to specify an allocation pool, so let's do that. And now the private network has been created. This network is specific to this particular tenant. Now that that's created, I'm going to create a router; let's call it router one. I'm going to create it on that external network that was pre-created. Once we have this router in place, we can bridge the private network to the external network, so that the VMs on the private network can start acquiring external IP addresses.
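Under the hood, the driver's job for each of these steps is a translation: take the Neutron-side object and emit the equivalent AWS request. A hypothetical sketch of that mapping for the subnet case (no real API call is made; the AWS-side field names mirror boto3-style CreateSubnet parameters, and the subnet values are invented for the example):

```python
# Illustrative sketch: translate a Neutron subnet definition into the
# parameters an AWS CreateSubnet call would take. A real driver would pass
# this dict to the EC2/VPC API; here we just build and inspect it.

def neutron_subnet_to_aws(subnet, vpc_id):
    """Map a Neutron subnet onto AWS CreateSubnet-style parameters."""
    return {
        "VpcId": vpc_id,                 # the VPC backing this tenant network
        "CidrBlock": subnet["cidr"],     # the internal CIDR given in Neutron
        "TagSpecifications": [{
            "ResourceType": "subnet",
            "Tags": [{"Key": "Name", "Value": subnet["name"]}],
        }],
    }

tenant_subnet = {"name": "tenant-subnet", "cidr": "10.0.1.0/24"}
print(neutron_subnet_to_aws(tenant_subnet, "vpc-12345"))
```

The router and floating-IP operations follow the same pattern, mapping onto internet gateways and Elastic IP allocations respectively.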
So the router is created. Now let's go ahead and add a new interface for this router. I'm going to select the subnet of the network we just created, and add this interface. This gives me the basic networking building blocks I need before I can start deploying my virtual machines. Before I create a VM, though, let's switch to the Amazon VPC management dashboard. We started with nothing here; let's refresh that. There. As you can see, one VPC got deployed behind the scenes, a subnet got deployed, and a bunch of other things were created: a network ACL and an internet gateway corresponding to the router that we just created. And as we zoom into this subnet, this is that private block of IP addresses, the private CIDR that I had specified; this is the tenant subnet that I just created. So this was the Neutron driver in action, relaying the Neutron commands, transforming them into the corresponding AWS requests, and creating the VPC and subnet, et cetera, behind the scenes. So far so good. Now let's switch to compute and start by deploying a virtual machine. I'm going to launch a new instance; let's call it webserver, without a space in between. I'm going to give it a t2.small flavor size. Now, keep in mind that the list of flavor sizes is something we had to completely hard-code, because AWS doesn't provide any APIs for syncing their flavor sizes, and I think that's true of every public cloud. I'm going to select boot from image, and select a demo AMI. Now, something to note about the AMI: what we've also built, and this is part of the open source work that we've made available, is what we call a discovery unit. What that does is discover any AMI images that that particular AWS endpoint might have, and it registers them into the OpenStack Glance catalog.
And this is so that it makes it that much easier for you to get started with this integration: you don't need to worry about how to seed your catalog with Amazon AMI images, and you don't need to separately upload them; it just works out of the box. Okay, so I'm going to select the tenant network for this virtual machine, and go ahead and launch it. So the VM is getting deployed; the request got successfully accepted and now the VM is being built. While that's happening, let's go ahead and do a couple of other things. The first thing I'm going to do is allocate a floating IP address for this VM, because I want to make sure that I can access it from outside. I'm going to select that external network that we had in place, and go ahead and allocate an IP. Okay, the IP got allocated successfully, and you can see it starts with 52; that's the Amazon Elastic IP address that got allocated. The next thing I'm going to do is deploy a volume, because I want to make sure my virtual machine has sufficient storage space. Let's call it volume one, and make it 10 gigabytes. Behind the scenes, the Cinder drivers are kicking in, relaying the request back to Amazon AWS. The volume just got created and it's available. Now let's go ahead and attach this volume to the instance that we just created. I'm going to select this web server, so the volume should get mounted on that web server VM. Excellent, the volume got attached to the VM. Now the only thing left to do is to have the instance acquire the floating IP address, so let's go ahead and do that now. This is the floating IP we just allocated, and we're associating it with this virtual machine. So what we just did is this: we deployed a virtual machine, it got deployed on a private network, we mapped it to a public network, we gave it a floating IP address, and we attached an independent volume to the VM.
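Two details from the demo, the hard-coded flavor table and the AMI discovery, can be sketched together. This is an illustrative approximation, not the actual Omni code; the instance types use real EC2 specs, but the image fields and function names are invented for the example:

```python
# Illustrative sketch only. (1) AWS has no API for listing "flavors", so a
# driver ships a hard-coded table mapping EC2 instance types onto
# OpenStack-style flavor specs. (2) A discovery step turns AMI metadata
# into Glance-style image records so the catalog is seeded automatically.

# (1) Hard-coded flavor table (vCPUs and RAM match published EC2 specs).
EC2_FLAVORS = {
    "t2.small":  {"vcpus": 1, "ram_mb": 2048},
    "t2.medium": {"vcpus": 2, "ram_mb": 4096},
    "m4.large":  {"vcpus": 2, "ram_mb": 8192},
}

# (2) Map discovered AMI metadata into a Glance-style image record.
def ami_to_glance_image(ami):
    """Build a Glance-catalog entry from AMI metadata."""
    return {
        "name": ami.get("Name", ami["ImageId"]),
        "id": ami["ImageId"],
        "disk_format": "ami",
        "container_format": "ami",
        "visibility": "private",
    }

demo_ami = {"ImageId": "ami-0abc1234", "Name": "demo-webserver"}
print(EC2_FLAVORS["t2.small"])
print(ami_to_glance_image(demo_ami))
```

With the table and the discovery pass in place, "launch a t2.small from a demo AMI" looks to the user like any other Nova boot.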
Let's switch to the Amazon EC2 side and see what happened behind the scenes. We started with nothing; I'm going to refresh the screen. As you can see, one instance got deployed, two volumes got created behind the scenes, and an Elastic IP got allocated to the virtual machine. As I zoom into the instance, that's my web server right there: it got the t2.small flavor size, it got the public IP address, and it got that additional volume mounted on it. So everything worked as expected. Now, one piece that is still work in progress is the mapping of availability zones and regions from AWS to OpenStack; there are a bunch of nitty-gritty details behind that which still need to be worked on. But essentially, what this gives you is the ability to control AWS as your public cloud endpoint using OpenStack. So with that, that's the end of the presentation. If you have any questions, feel free to ask me here, or drop by our booth, booth number A6. And also, feel free to try out the code. This is all open source work, and we're hoping that the community picks it up and starts contributing more drivers, extensions, et cetera. We would love to hear your feedback. Thank you.