Brad Marshall is presenting on Juju developments at Canonical.

Hi everybody. What I'm going to be talking about this morning is how we do deployments of applications onto OpenStack at Canonical. There's a fair bit of material, so I'll probably go a bit fast through some of it; if there are questions, just let me know.

One thing we've done at Canonical is make OpenStack the primary deployment platform for our applications, and we're moving all of our applications across to OpenStack as much as we can. We've been using OpenStack in production for, I think, nearly three years, so that's a reasonable amount of experience, and we've had a lot of growing pains along the way. We've also gone through multiple iterations of the way we do deployments, which has meant improvements over time both in the software and in the deployment methodologies.

The stack we use is OpenStack on Ubuntu. We use MAAS, Metal as a Service, which is essentially the equivalent of an installer if you're used to the old bare-metal install model. Juju is our actual deployment technology; it's a service orchestration tool. Landscape does the monitoring, and Mojo is our new piece, which we use for continuous integration of services, so we test all of our services before any deployment happens.

We use MAAS and Juju to deploy OpenStack onto the bare metal, and once that's done we use Juju again to deploy on top of OpenStack, so it's a bit of an inception sort of thing. Then we use Landscape to manage the VMs inside that, and, as I said, Mojo to validate it.

Has anyone here heard of Juju before? You don't count.

So, as I said, Juju is service orchestration for the cloud. It can deploy to OpenStack, MAAS, AWS, HP Cloud, Azure, Joyent and DigitalOcean; I think there's a new one, an ARM provider, Online Labs, that we can do as well; plus Vagrant, local containers and manual provisioning. For some reason we can't do Google Cloud just yet, but new providers come out all the time.

The way Juju works is that it has what's called a charm, which is roughly the equivalent of a playbook in Ansible or your Puppet manifests. A charm defines how a service is created and how it interacts with other services, and it expresses the relationships between services, which is the real value you get out of this. It can also deploy into containers, which we've found quite useful, so you're not using a whole machine for everything.

That slide is a quick example of how you do a deployment; it's quite simple. You tell it to bootstrap the whole environment, then deploy however many units you want with whatever constraints you want. There's also a GUI you can use to drag and drop. This is just a really basic example of Squid in front of WordPress with Memcached and MySQL; you can see the relationships defined where all the green dots are. You don't actually have to tell MySQL to create an account for WordPress to use: just by defining the relationship, that gets created for you, and Juju handles it. WordPress is a bit of a simple example, but it gives you the idea. This view shows it from the other side, so you can see what the machines are doing as well as where the services are.
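As a rough command-line sketch of that same example, trimmed to the WordPress, MySQL and Memcached part (the charm names match the charm store of the era; the constraints and container placement are just illustrative):

    juju bootstrap                            # stand up the environment first
    juju deploy --constraints "mem=2G" wordpress
    juju deploy mysql
    juju deploy --to lxc:1 memcached          # run memcached in a container rather than a whole machine
    juju add-relation wordpress mysql         # the WordPress database account gets created for you
    juju add-relation wordpress memcached
    juju expose wordpress                     # open it up to the outside world

The add-relation lines are the point the talk is making: neither side is configured by hand, because the charms negotiate credentials and endpoints over the relation.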
Now, back down to Metal as a Service: this is how we actually deploy to the physical hosts. It basically lets you treat bare metal as a cloud. And as well as bare metal, it will also do KVM, so you can use virtual machines in there too, which gives you much more granular control. This is an example of the web UI: you can see how many nodes are in the service, and inside there you can see how much RAM each one has; that's just an example node. From there you can release a node, delete it, or grab it for your own use, and you can allocate nodes out to different users if you want.

This is possibly one of the more interesting bits of what we've been doing: continuous integration. Our developers build a service and then write a test suite for it, so they can say: for this test we want to deploy this particular revision and then upgrade to trunk, or we want to upgrade, remove something, roll back, whatever, just so you can validate that what you expect to happen with the application actually happens. As well as upgrades, that covers scaling it out, making it bigger, making it smaller, to make sure everything behaves as you expect (there's a rough sketch of that kind of test flow a bit later on). We're starting to use it in deployments, for staging to start with, and then we'll move on to production, so that when something rolls out you've tested it and you know what's going to happen. And it's recently been open sourced; that's the URL up there.

We've got multiple different places where we use OpenStack. There's a development and testing stack called Canonistack that everyone inside the company uses to get a feel for how things go. ProdStack is where we actually deploy things to. StagingStack is actually just ProdStack with a different tenant, so there's no difference between staging and production; it's exactly the same thing, and you know that if it works in staging it'll work in production. ScalingStack is where the Launchpad PPA builders, where you're allowed to build your own packages, recently moved, and they got a huge increase in speed there; that's been working quite well. There are a couple of other OpenStack integration labs where we test different combinations of OpenStack components together, and BootStack is managed clouds for clients.

Canonistack, as I said, has a couple of regions, on Grizzly and Havana. We're going to work on a third region, by Paul up there, to do Trusty and Juno, actually using VMware for the storage and compute, just to try it out. It's been quite valuable for the developers to get a grip on how you deploy things into OpenStack and how you use it. It's been quite popular, perhaps too popular, so it's been quite overloaded, but we've just gradually increased it.

ProdStack 4 is production; I'll just quickly go through this one. It's for internally developed services in general, so we've got a lot of the Ubuntu SSO stuff on there and a bunch of other internal services, and we're using Icehouse. I'll skip over ProdStack 3, because that's the old version we're just migrating away from: it was Folsom, with a very early Ceph, Argonaut I think, which had a lot of bugs in it, so we've been moving off it as quickly as we can. That's an example of what's running: Ubuntu SSO is probably the highest-volume service that runs through it, and when the Ubuntu Phone comes out, the click package service for it will run on this, along with a whole bunch of others.
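Coming back to the Mojo-style testing mentioned earlier, here's a minimal hand-rolled sketch of that kind of flow using plain Juju commands; a real Mojo spec drives a sequence like this from a manifest, and the charm name and revision here are hypothetical:

    juju deploy cs:trusty/myapp-42 myapp      # deploy a pinned, known-good charm revision
    juju upgrade-charm myapp                  # upgrade test: roll forward to the latest revision
    juju add-unit -n 2 myapp                  # scale-out test
    juju remove-unit myapp/2                  # scale-back test

In practice each step would be followed by a verification phase that checks the service still behaves as expected.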
This is where I'm trying to get into a bit more detail. We're using Icehouse on Trusty for ProdStack, and it's fully HA, using Juju and LXC containers to run the OpenStack services. A base deployment of that takes about 45 minutes start to end with a single command. The only things we don't have in containers are nova-compute, Swift and Ceph, which, as the other guys were saying, are not very well suited to being containerized.

Essentially we've separated out the stateless services. We use Corosync and Pacemaker for HA on the IP addresses there, but we also treat each one as just a normal web service and put HAProxy in front of it, so you can actually scale out using that. So while we've got HA across three nodes, it will actually use all of them to scale out. For MySQL we use Percona for active-active clustering, and both RabbitMQ and MongoDB use their own native clustering, which works quite well across the hosts. I'm kind of rushing this, but it's fairly straightforward. There's obviously no HA for nova-compute, and Swift and Ceph just use their normal replication, but all of those run natively on the bare metal, not in containers.

This slide tries to give you a bit of an idea of the actual architecture: the three nodes in the middle, with the other things actually running on the nodes; it's a bit hard to see. And this is how it looks for the storage and compute side, where everything runs compute and we split Swift and Ceph out across that. I'd just like to show a bit of the complexity of the relationships between all the different pieces. This is all managed by Juju for us, so it handles the relationships, and if something goes down, it's aware of it.

That's the deployment history, just for reference. You can see over time we've gone from Folsom to Grizzly to Havana and kept up with everything. The first deployments were manual, just logging in and configuring the nodes by hand; then we moved to Puppet, and to Juju and MAAS after that, which has really given us a lot of advantages that I'll talk about soon.

I think I've mostly covered what I've got here. As I said, developers give us a service that they want to run, they work back and forth with operations to get it into a usable state, and from there you can deploy it, make sure it's all happy, and then roll it into production. We've actually started looking at getting the developers to run staging services themselves, on exactly the same OpenStack setup, so they can get a much better feel for how it works and what needs to happen. That's been showing some good results, and it's created a lot more interaction between ops and dev as well, which has helped; the developers understand a lot more about what you need to do to run on OpenStack. There has been a learning curve: we've had a lot of back and forth trying to get them to understand what this Swift thing is, how you use it, how you put storage into it and how you use things properly underneath, but it's been improving over time, and as we've dealt with software bugs, things have improved as well.

It's definitely brought a lot of advantages, too. We've been able to scale up for load really easily; it's a trivial add-unit. When we do a release, we're able to scale up the web servers and they'll just handle the load.
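As a sketch of that scale-out pattern, reusing the WordPress example from earlier (in ProdStack it's the stateless OpenStack API services sitting behind HAProxy rather than a blog, but the mechanics are the same):

    juju deploy haproxy
    juju add-relation haproxy wordpress       # haproxy now load-balances across the wordpress units
    juju expose haproxy
    juju add-unit -n 2 wordpress              # new units are picked up by haproxy automatically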
The other real win is the reproducibility of deployments. We know that what we're deploying is the same every time, and the devs know that what they've given us is what will get deployed out to the service. It's also been nice to be able to help shape a bit of the upstream, in terms of what happens in Ubuntu, as well. Sorry, I've rushed it a bit, but I didn't have much time. Any questions?

Audience: I'm just surprised by how old the releases Canonistack is running are; what motivates not upgrading? Your slide said Canonistack was running, I think, Grizzly and Havana, and neither of those is supported by upstream anymore. They just seem old to an upstream guy.

Brad: Yes. Basically it's been a matter of time and being able to get a new region deployed; that's why we're getting the new region deployed. The intention was always to keep two different versions, close to the latest and the one before, and then just keep hopping them, but it hasn't really panned out that way, purely for time reasons. Hopefully once we can get Juno and Kilo, we'll probably go for the one before and then try to get the other ones brought up to speed.

No one else? I'll be around, so if anyone wants to ask questions or anything. Maybe no one. I don't want to keep you from lunch.