Hello, good morning, everybody. My name's Adam Gandelman. I work on the Ubuntu Server team at Canonical, and I'm here to talk about some of the work we've done over the last couple of cycles around automating, and making easy, the deployment of OpenStack with VMware on Ubuntu.

I'm a developer on the Ubuntu Server team. I joined the team during the OpenStack Diablo cycle, which for us was roughly the Ubuntu Oneiric Ocelot release. At the time, we were transitioning our entire preferred cloud infrastructure from Eucalyptus to OpenStack, so we've been around since the beginning of OpenStack and Ubuntu. On the team, I help ensure OpenStack remains a first-class citizen in Ubuntu. That means working on the packages we ship every cycle, making sure the latest and greatest OpenStack ships with the latest and greatest Ubuntu, and making sure that work gets backported to the Ubuntu Cloud Archive for our LTS users. I also work upstream on the OpenStack stable maintenance team, keeping an eye on the stable branches for the stable releases of OpenStack and helping get those point releases out. Most recently, I've been working at Canonical on the OpenStack Interoperability Lab, aka OIL, and that's where we've been doing a lot of the work around VMware integration with OpenStack and Ubuntu. I also help the teams at Canonical make sure that OpenStack and Ubuntu deploy easily for customers: we have many engineers who go out to customer sites where people need OpenStack and want Ubuntu as the host operating system, and they need a way to deploy that easily. Fortunately, we have tools to do that, and I'll talk about those in a minute.

I think at this point in the week we all kind of agree on OpenStack. We like the value it proposes, and if we haven't deployed it already, most of us are probably considering how to deploy it in the future. Unfortunately, not everyone is starting fresh. I've talked to lots of people who are in the process of ordering lots of new hardware to roll out brand new clouds in brand new data centers, but many people have existing infrastructure they need to incorporate into their new clouds. This presents a challenge, especially for people coming to OpenStack from other places — places that are more vCenter- and VMware-centric. It can be difficult to understand how vCenter and vSphere integrate with OpenStack and how they fit into your architecture. Fortunately, there's been quite a bit of work done in OpenStack over the last two cycles to make it easy for OpenStack to integrate with existing ESX and vSphere estates. Most notably, in Nova during the Grizzly time frame there was the addition of the vCenter and ESX drivers, and more recently in Havana there's been quite a bit of work around networking and storage to make sure those resources are compatible with existing vCenter appliances.

But for people coming from that world of virtual appliance installation, CD-ROM-based installation, and the more prepackaged cloud world, this kind of infrastructure looks very intimidating, and it's hard to figure out where your existing infrastructure fits into a graph like this. Luckily, at Canonical, within Ubuntu, we've been developing tools to turn this whiteboard architecture into a reality for users, using a tool called Juju and another tool called MAAS, which I'll tell you about.
If anyone's been to the Ubuntu booth or saw Mark Shuttleworth's keynote on Tuesday, you might have seen something like this. This is what we call the Juju GUI, and it's a tool we use to take that whiteboard sketch and make it a reality. Developers, operators, and DevOps folks usually sit at a whiteboard and spec out what their infrastructure looks like; we're making it easy to do the same within a GUI, but also to translate that to physical or virtual hardware and deploy services just as you would mock them up on a whiteboard.

So what is Juju? Juju is an orchestration framework that lets you deploy, integrate, and scale services easily and almost instantly. It's machine agnostic: Juju has a concept of machine providers, and it abstracts away all of the details about where you're getting the hardware — or the machines, virtual or otherwise — for your workloads. It's pluggable, and so far we have support for a number of public clouds: Rackspace, HP Public Cloud, Amazon, Azure. We also support deploying workloads with Juju to internal, on-premise private clouds via the OpenStack API, as well as to actual bare-metal servers and to local LXC containers and virtual machines.

Juju sits in the middle of the stack of tools that let us deploy OpenStack. At the bottom we have what's called MAAS, Metal as a Service. This is a machine provisioning system similar to tools like Cobbler, for anyone who has experience with that. It lets us take racks of unprovisioned servers, dedicate one server to provisioning by installing a MAAS server on it, and then just boot up the rest of the servers in the racks set to netboot. They automatically enlist into MAAS, get commissioned, and get put into the MAAS inventory. Once they're there, those servers are at the disposal of anyone who wants to use them via an API. MAAS abstracts away hardware provisioning, puts an API in front of it, and makes it look rather like a cloud endpoint. From there, we can use Juju and its pluggable machine provider framework to pull machines from MAAS and deploy workloads onto them.

Workloads get deployed in Juju using what we call charms. A reasonable analogy is that Juju charms are similar to Puppet modules or Chef cookbooks: they're the pieces that plug into Juju and describe how to deploy a specific workload. They aren't necessarily OpenStack specific — we have charms to deploy all kinds of workloads, everything from MongoDB to Minecraft. But in terms of OpenStack, we have charms for deploying OpenStack itself, the core services, as well as charms to deploy specific things that plug into the back ends of those OpenStack pieces. For instance, we have charms to deploy specific network plug-ins for Neutron. We also have charms for extending OpenStack to interface with external resources, things that aren't part of core OpenStack — things like Ceph storage clusters or, in this case, VMware clusters.

We have charms that support all of the core OpenStack components, so we make it very easy to deploy all of these. But we also support deploying each one of those in different configurations, to turn every cloud you deploy into its own little snowflake if you want it to be. We of course support the standard drivers and the defaults in all of them, but over the last six or eight months we've been expanding the supported options to cover other things. For instance, if you're not really interested in full-blown KVM virtualization, we make it very easy to deploy Nova in a configuration that uses LXC on the back end for lightweight containerization. The same is true of storage and networking, and with the work we're doing in the Interop Lab at Canonical, we're constantly extending this to support more options.
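In charm terms, those alternate configurations are usually just exposed charm config options. As a minimal, hedged sketch — the virt-type option name and its values are recalled from the nova-compute charm of that era rather than taken from the talk, so verify them with juju get against your version:

    # deploy the compute charm with its defaults (KVM)
    juju deploy nova-compute
    # inspect the options the charm actually exposes
    juju get nova-compute
    # switch the hypervisor it configures, e.g. to LXC for lightweight containers
    juju set nova-compute virt-type=lxc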
So what does a charm actually do? What's the point of a charm, and how does it differ from configuration management and more traditional service orchestration? One purpose of a charm is simply to condense everything you would get out of a document or a how-to into something that's easily reusable and easy to distribute. Here's a snippet of the official OpenStack Havana documentation for configuring and deploying the Keystone identity service — I think it's about five or six pages. If you've ever configured Keystone manually, you probably know it's an awkward procedure: there are lots of URLs to manage, UUIDs, credential sets, and lots of docs to read. With Juju, we try to take these long deployment manuals and condense them into simple commands. Everything you would read in the first five or six pages of the Havana Keystone documentation can be summed up in this: juju deploy keystone.

One analogy people like to make is apt-get. You no longer go to a website, download source, run configure and make, and build binaries from source to get a web browser or a word processor; you do apt-get install chromium-browser or apt-get install libreoffice. The same is true of Juju — we're trying to take that analogy and apply it to services running outside of your system.

But installing and configuring the initial service is only part of the process. We also need to make sure the services we deploy integrate well with the other services in our environment, and Juju has a really powerful concept for that known as relations. Each charm is responsible for configuring and managing the service it owns, but charms can also define and export interfaces that describe how the service interacts with other services in the environment. You can create relations between these services and kick off a process where one service knows how to initiate a relationship with the other, in a back-and-forth, ping-pong style conversation.

In the case of Keystone and Cinder, to properly configure Cinder against your Keystone identity service, you need an API endpoint configured in the Keystone catalog, and you need Keystone to generate service credentials for Cinder. If you're doing this manually or with scripts, it's clunky and error-prone. With Juju, we open an interface between the two services where these requests can flow back and forth, and the code and logic to do that is encapsulated within the charms. So Cinder might tell Keystone about its address and the endpoint for the Cinder volume service; Keystone, on the other end, stuffs that into its database, generates service credentials, and passes those back over the pipe to Cinder, which plugs them into its configuration files and restarts services accordingly. In the end, when the relations settle, you have a Keystone catalog with an up-to-date Cinder API endpoint, and a Cinder service that's authenticated to use it.
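On the command line, that whole exchange is just a couple of deploys and a relation. A minimal sketch, assuming the standard charm names from the charm store of that era:

    juju deploy keystone
    juju deploy cinder
    # open the identity interface between them; the charms then exchange
    # endpoints and credentials on their own
    juju add-relation keystone cinder
    # watch the relation settle
    juju status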
Once we've defined that interface and it's working well, we can apply the same interface to other services in the cloud. Keystone is a good example because it takes relations from many services in the cloud. Once we have the identity-service interface defined for Cinder, we can easily add a relation to Glance, and the back-and-forth is almost identical; then we can do it for Nova as well. At that point we have a cloud deployed with Keystone, Cinder, and Glance, and a Keystone catalog that's completely populated with all of the endpoints and credentials. Reading the docs to get that done takes I don't know how many pages, and scripts like DevStack do it in I don't know how many lines of code — but with a simple collection of commands run on the command line, you have all of this happening in real time.

So that's a high-level overview of Juju. I don't know who's used Juju in the past or who might have visited the booth and gotten the rundown on how it works, but if there are questions about how it works and how it differs from tools you might have used, this would be a good time. [Audience question.] So we try to automate that. We've designed the interface between Keystone and the other services so that it's automated: every machine running a service that needs to authenticate with Keystone in some way gets a unique password and gets added to a project and tenant on the Keystone side. The project and tenant are configurable, so if you wanted to change where those service accounts get created in the Keystone authentication database, you'd configure that on the Keystone side; it would then regenerate the credentials and send them out through the interfaces associated with each service, and whatever's on the other end of those knows how to handle it. Does that answer your question? Anything else?

OK, so that's the high-level overview of how charms work, the basics. Everything I've described about Keystone and Glance and Cinder — these are what we call principal services, the first-class services in Juju. When you deploy these — juju deploy cinder, or juju deploy mysql — Juju is going to go out and fetch a machine of some kind, whether it's an LXC container, a virtual machine, or a bare-metal server, and place that service on it. The charm, and Juju itself, assume the service is the only thing living on that system, so you shouldn't run into a case where two services are running within the same system namespace, because there's potential for collisions on ports, packages, config files, and so on.

To make it possible to co-locate things alongside principal services and enrich them, we have a concept known as subordinate charms. Subordinate charms get deployed alongside principal charms and operate within the same system namespace, and their point is to enrich or supplement the principal charm in some way. This is how we're tackling the VMware piece of Nova in our OpenStack deployments. We don't want to pollute — that's the wrong word — we don't want to bloat the main principal Nova charms with information that not everyone needs, and we don't want to explode their configuration files with all of the VMware-specific bits. So we encapsulate all of the VMware logic in a subordinate charm that gets attached to the principal.
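As a sketch of how that attachment looks on the command line — note that the subordinate charm name used here, nova-compute-vmware, is an illustrative placeholder, since the talk doesn't name the actual charm:

    # the principal compute service
    juju deploy nova-compute
    # a hypothetical VMware subordinate carrying the vCenter-specific logic
    juju deploy nova-compute-vmware
    # relating them places the subordinate on the same machine as each
    # nova-compute unit, where it manages the VMware bits of the config
    juju add-relation nova-compute nova-compute-vmware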
If you're not interested in using VMware, you simply don't deploy that subordinate. It operates in the same namespace as the principal service, working on the same service and the same configuration files, but it's more specific in its use case. Subordinate charms encourage the same principles as the primary services — encapsulation, reuse of interfaces — but the difference is that they deploy to the same machine unit as the principal.

We use this in the OpenStack case for the networking bits as well. Basically, anything that requires some external appliance — whether it's a vCenter cluster or, in this case, a Neutron service that needs information about external NVP or NSX appliances — we encapsulate into a subordinate charm that we attach to the principal. There are other use cases for subordinates, outside of OpenStack or within it. A good one is monitoring and logging: it's easy to develop generic subordinate charms that can take interfaces to any number of principal services and do common things like set up remote logging or monitoring — configuring any number of principal services, whether it's MySQL, RabbitMQ, or Nova, to be monitored by some external Nagios process.

So, all that said, here's a diagram of what OpenStack looks like deployed by Juju, and I'll switch over to a live environment running this. Apologies for not having any pretty icons — what I'm pointing at right now is layered behind a few VPNs and seems to have some problems fetching all the icons. But here we have an entire OpenStack cloud running. If you look, you'll see the various services you'd expect: a Glance service, a RabbitMQ service, a MySQL service, volume, et cetera. The lines between them are all of those relations I was talking about. If I focus in on Keystone, you'll see it has a bunch of edges to the various other OpenStack services that require endpoints in Keystone. On the MySQL side, it's tied to many services that require some kind of persistent database connection. And over on the Nova side, here's that subordinate I was talking about.

Each charm has a number of configuration options exposed to users. We try not to make those options a gigantic list, but only expose the collections of config options that are really necessary to get big chunks of functionality configured in an opinionated way. For instance, Nova has some generic configuration options you would set — the database name it uses, the database user it requests, various networking configuration — and that's generic for any OpenStack Nova service you would deploy. On the VMware side, we have the configuration options that describe how we connect an existing Nova service to an external VMware service. Separating them like this keeps the principal service clean and less bloated, with fewer things to test, and segments things out a bit.
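A hedged sketch of that separation from the command line — the subordinate's name and its vCenter option names below are illustrative, not the real charm's, since the talk doesn't spell them out:

    # the principal only carries generic Nova options
    juju get nova-compute
    # the hypothetical VMware subordinate carries only the vCenter connection details
    juju set nova-compute-vmware \
        vcenter-host=vcenter.example.com \
        vcenter-username=administrator \
        vcenter-password=secret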
Any questions so far? OK. So now we've got an OpenStack cloud deployed using Juju and MAAS. What can we do with it that we weren't able to do before, when we were just running a vCenter cloud on its own? Fortunately, with Juju, where it goes to get the hardware to put services on is abstracted away, so it's very easy to take the same Juju tools you used to deploy your OpenStack cloud and point them at the cloud itself — or, perhaps, give users of the cloud access to use Juju against your cloud and start deploying workloads on top of it.

All of the charms and all of the services we were able to deploy live in what's called the Juju charm store. You can think of it as the Ubuntu Software Center of services, or the app store for services. Users can browse all of the charms that have been developed — some of them are rated based on quality, number of downloads, and how well they're maintained — pick and choose what they want, and see how different services interact with one another. So if you're deploying WordPress, you can go find your WordPress charm, deploy that, find out it needs a database, deploy that, and add them together.

I'll show you how this works in the context of the cloud we've deployed. Here's the Juju environment that's running all of our infrastructure — this is the OpenStack cloud we've deployed. Now, if I take a step back and act as a user of this cloud, I can switch to another environment, a blank Juju environment running only one service, which is the GUI I'm looking at. I can go into the charm store here and find different services I'd like to deploy. Say I need a MongoDB database for my application: it's as easy as dragging it in, setting whatever config I need to specialize, and hitting deploy. What's going on now in the back end is that Juju is going to the OpenStack cloud running underneath and requesting that the service be spun up. In the OpenStack cloud we're running, we should see two instances, one of them spawning — that's the MongoDB service we've just spun up. And if we look a little further into the cloud we have deployed, we'll see a couple of compute nodes: one of them the VMware vCenter node, the other a generic libvirt/KVM Nova compute service. In this case, the MongoDB service we've deployed is spinning up on the VMware cluster. And that one's running now. Given a couple more minutes, this would finish spinning up, turn green, and we'd have a live MongoDB service running.

We can then go through the charm store interface and pick other things that might interface with the MongoDB charm in some way, and those interfaces are described right here in the charm store — it tells you all of the different services that would be able to consume that service. Browsing the OpenStack charms is very similar: we can see that the Keystone identity service can take relations to various other OpenStack services, and that's how we piece together large deployments out of charms and services. So we're able to deploy OpenStack relatively easily, and we're able to connect it to an existing vCenter estate relatively easily.
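What the GUI did there is equivalent to a couple of commands against a second Juju environment whose machine provider is the OpenStack cloud itself. A minimal sketch — the environment name here is made up:

    # 'tenant-cloud' is a hypothetical Juju environment backed by the
    # OpenStack cloud we just deployed (type: openstack in environments.yaml)
    juju -e tenant-cloud deploy mongodb
    juju -e tenant-cloud status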
We still have some work ahead of us in terms of the deployment story. There's quite a bit of work going on upstream in OpenStack itself to improve support for VMware, but a little further downstream in Ubuntu we have some things to work on ourselves. We'd like to be able to deploy the OpenStack cloud I showed you, but also divvy up which hypervisors are able to support which kinds of workloads, based on host aggregates and cells and that kind of thing. Over the next six months we're going to be adding support for that to the Juju charms and to the provisioning story, so that when you deploy KVM alongside VMware, everything is properly tagged where it needs to be within Nova, and instance types are set up accordingly so that Juju can address them. Then, when you say deploy MongoDB or deploy MySQL, you can specify that you want that workload to live on a VMware cluster; and when you deploy WordPress or something in your app tier, you're OK with putting that on a cheaper virtualization layer that gives you more room to scale out. We're also going to be working on better volume support.

[Audience question.] Yeah, that's a good question. Juju has the concept of constraints. When you deploy a workload or a service, you can specify constraints — in some cases you might say, I'm deploying a database and I need 64 gigabytes of memory. Juju will go to whatever machine provider it's using and figure out, based on what that provider exposes — which might be instance types, that kind of thing — what type of machine best suits those needs. In the case of my Juju environment, when I bootstrapped it I set an environment-wide constraint pointing it at an instance type configured in the cloud, which has aggregates that tell the scheduler to put it on a vCenter host. One of the points on the roadmap is to make that better: to make it configurable ahead of time how things get divided up, where services go, and how aggregates get exposed to users — to make it richer for users of Juju and make it easier for Juju to target workloads at specific hypervisors.
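Constraints on the command line look roughly like this. A hedged sketch — the mem syntax and the set-constraints command match the Juju of that era as I recall it, but the exact constraint vocabulary varies by version and provider:

    # per-service constraint at deploy time
    juju deploy mysql --constraints "mem=64G"
    # or an environment-wide default, the way the demo environment was bootstrapped
    juju set-constraints mem=8G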
[Audience question.] Sorry? The question was, what's the operating system used when I deployed MongoDB into the cloud — we saw MongoDB get deployed, but where does it actually go? It's all Ubuntu; if I flip over here, I'll show you. Juju goes to its machine provider again to find out how to get a machine for MongoDB. In the case of OpenStack, it queries Glance to find out what images are available and picks one to deploy MongoDB onto. In this case it went to a Precise VM — where is it? Right here, a Precise VMDK. So it's all Ubuntu. Juju relies on features within Ubuntu, things like cloud-init and some other pieces we've developed, to make sure that when the cloud image comes up, Juju can reliably bootstrap it into the Juju environment, get its agent set up, and control it. Everything underneath is Ubuntu: the hosts at the infrastructure level, and the guests. [Audience question.] Not right now — we haven't done any work to support that.

[Audience question.] Yeah, that's a really good question. There's the traditional way of doing that: when you bring up your Mongo app on an Ubuntu system, there are various ways of making sure packages are kept up to date with security updates, for example hooking it up to an external service like Landscape. We've also been doing a lot of work around the images we publish for cloud users, so that if you're deploying an application, you can be sure you're getting an officially supported Canonical cloud image of Ubuntu — we publish new cloud images every time there are relevant security updates. And if you're using Juju and you're integrated into that stream of images, you can simply request a new image and be sure you have the most up-to-date base operating system, without dealing with package updates and that kind of thing. If you're truly moving toward a cloudy way of doing things, you can imagine deploying six nodes of yesterday's image, and then, when it's time to update, deploying six more of the newer image and migrating your workload over — or cutting half of them over, deploying three of the newer, and working out your upgrades like that.

[Audience question.] No, we start by grabbing just a plain base Ubuntu image — a base cloud image that we publish that's just the base operating system — which gets pulled from whatever machine provider you're using and deployed to its hardware in whatever way that provider does it. In the case of OpenStack, that's Glance and Nova; EC2 is a little different. The base image gets booted with some metadata injected into it that cloud-init picks up — cloud-init is an early-boot utility we use to pass information into new cloud instances. Within that metadata is the information the instance needs to check in with Juju. Once it's bootstrapped and checked into Juju, Juju sends over the charm I was talking about, and that describes how to install MongoDB and so on. So the process is: you deploy MongoDB, Juju gets an instance from its provider — it could be a physical machine or a virtual machine — puts the charm down, and the charm immediately runs its install hook, which in the case of Mongo might be apt-get install mongodb-server, and it starts from there. So it's not pre-canned.

In terms of co-location, we have different ways of handling that. If you're interested in co-locating Horizon with the Nova API or something like that, we've just landed support for co-locating services on the same machine via LXC containers. You put each of those services in an LXC container on the same machine; that way you're sure there are no conflicts between the two, and you can easily tear the services down while keeping the machine, and its state, in place.

[Audience question.] Any language you want, actually. Juju doesn't put any constraints on the language or the DSL used within the charm. If you're comfortable writing bash, you can write your charm in bash. If you're comfortable using something like Chef or Ansible, you can do that as well. And you can have a charm written in Python talking to a charm written in Puppet — as long as the interface I described is adhered to, there's no real reason why that wouldn't be compatible. We actually spent quite a bit of time over the last six months rewriting a lot of our charms on a standard Python framework, and while we were doing that we were allowing users to upgrade during the rewrites. As long as we adhered to the interfaces, there were no real issues, even with different charms in different languages all over the place.

[Audience question.] Yeah. So bundles come from some work I did a while ago when I first started working on OpenStack. Let me see — sorry to bring up the terminal, but it's better if I show you this way. A typical Juju deployment looks something like this: juju deploy mysql, juju deploy rabbitmq-server, juju deploy nova-compute, then add the relations between them, et cetera, et cetera. And you have to do this for every service in the cloud.
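Written out, that sequence is just repeated deploys and relations — for example, using the standard charm names:

    juju deploy mysql
    juju deploy rabbitmq-server
    juju deploy nova-compute
    juju add-relation nova-compute mysql
    juju add-relation nova-compute rabbitmq-server
    # ...and so on, for every service and relation in the topology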
So if your topology is only a few services, it's not that big a deal. But for OpenStack — that diagram I showed you earlier — this turns into a gigantic list. In the early days of Juju and OpenStack, I was looking for a way to declare all of this ahead of time in some YAML or JSON syntax, and that's what gave birth to Juju bundles. We're able to declare the whole deployment ahead of time in a YAML syntax and pass it to either the Juju GUI or some other utilities we have to actually make it a reality. For anyone who saw the demo in — sorry — Mark's keynote, bear with me a second, I'd love to show you. It's a very easy way to declare deployments ahead of time, and we have different ways of generating deployment configs for different variations of topology. Recently, in the last couple of weeks, I've been trying to figure out a good way to add placement policy on top of that: when you're deploying to maybe six nodes, make use of those six nodes as best you can; when you're going to a hundred nodes, place services according to your resources.
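As a rough, hedged illustration of what such a bundle looks like — the file layout and the juju-deployer invocation below are reconstructed from memory of the tooling of that era, not taken from the talk, so treat the exact keys and flags as assumptions:

    # openstack-bundle.yaml (sketch of a minimal deployment declaration):
    #
    #   openstack:
    #     series: precise
    #     services:
    #       mysql:
    #         charm: cs:precise/mysql
    #       rabbitmq-server:
    #         charm: cs:precise/rabbitmq-server
    #       keystone:
    #         charm: cs:precise/keystone
    #     relations:
    #       - [ keystone, mysql ]
    #
    # feed it to juju-deployer, or import it through the Juju GUI
    juju-deployer -c openstack-bundle.yaml openstack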
[Audience question.] Yeah, let me see if I can get to one of the instances, if I'm networked in. That goes back to the cloud-init stuff I was talking about. Cloud-init provides a way to authenticate when a fresh image comes up: a user gets created with sudo access and any number of specified SSH keys, so when your instance comes up for the first time, you have an easy way in for that first login. Since I bootstrapped all of this in an external environment, it's relying on SSH keys that aren't on my laptop, so I can't log in and show you. But there are good stories for how you pass all of that authentication information in before boot — that would be configured in MAAS. And as long as you can get in that first time, you might have a charm that hooks your Mongo node up to a Puppet infrastructure that manages users, or a subordinate charm that hooks it up to an LDAP structure, something like that, to handle all of that for you. Or you can just attach a subordinate charm to MongoDB that has a hard-coded list of SSH keys to add to the authorized keys.

[Audience question.] I'm sorry, I'm having trouble hearing you. The question was about config options. Currently we've been trying to keep the number of config options we expose via the charm somewhat limited, so that a user perusing the config isn't faced with something like the Nova config example, where there are 800 possible variations. There are several ways of handling that. If it's something generally useful to many people, one way is to just add it to the charm. Another is to work with subordinate charms and have that configuration applied through them. Or you can hook what you've deployed into something that manages that configuration externally.

[Audience question.] Yeah, there's potential for it to be overridden. It's not like Puppet or Chef, where something checks in with a master every ten minutes and overwrites the config — Juju is very much event-driven. If something changes elsewhere in the service graph, it might trigger a relation event, via any number of relations, that causes the config to be regenerated in some way. So it's very much assumed that the config files on disk are managed by Juju, and if you go and put something in there by hand, it may not be there when you go back.

[Audience question.] Sorry? Private networking, without any connection to the internet? Yeah. If you're using MAAS to satisfy your hardware resources with your own physical hardware, there are ways of making sure all the resources Juju needs are located within your network. Since you're bringing up an image and installing packages with apt, it's assumed you have a local apt repository somewhere on your network, and the images need to be supplied to MAAS in some way — on the MAAS server you might poke a hole in the firewall, or put the images in there some other way. There are ways of working around it. If you're using a private cloud, there are ways to make sure everything Juju needs, outside of the apt repository, is satisfied in Glance, for instance. So it's doable, for sure.

I think we're at time. Thanks for coming — we're at the booth for the rest of the day. Thank you.