Thumbs up from the man. Oh, good. Good afternoon, welcome. My name's Mark Baker. We're going to be spending the next 40 minutes talking about production-proven tools to deploy and manage OpenStack lifecycle operations. Just trips right off the tongue, doesn't it? So for those of you who don't know, this is a Canonical session, and hence we'll be talking predominantly about tools that surround Ubuntu OpenStack and Ubuntu. So it's good to see everyone. Well, so much orange in the audience. It's always good to see. So we want to open this by talking about how software is changing, right? Modern software has different characteristics about it. Over the last few years, we've been going through a phase change, right? A phase change where typical software installed inside enterprises used to be on big boxes: monolithic applications, very vertically integrated, lots of HA technologies to ensure that these things stay up. If we wanted more performance, we would add more CPUs, add more memory, scale up in that environment. And of course, more recently we've moved to different styles of applications, right? Applications which are actually composed of many components. So big applications, not necessarily big on a single box, but big across an architecture, right? And there are lots of good examples of the scale-out applications that you'll see, whether it's Hadoop, or NoSQL stores, or web farms. Those types of applications have changed, right? They're very different to the applications that we used to manage years ago. And these environments get split across, you know, not one or two machines. Back in the day when I started my career at Oracle many years ago, to have an Oracle Parallel Server, as it was called then, split across more than two machines was very unusual, right, or three machines. That was considered big. But modern software applications now can span very many, right?
And those components live all across those tens or even hundreds of machines, right? Is this familiar? This is something that we refer to at Canonical as being big software. So we see these as big software problems. How do we address the needs of these big software applications that span multiple machines, tens or hundreds of machines? OpenStack is big software. It's comprised of, how many, anyone know? If I had a prize, I'd give it away, but I don't. So you just get the satisfaction of knowing. Anyone know how many projects there are in OpenStack today? What was that? 97, that's not quite that many. Actually, according to the project navigator, there are 54 projects in OpenStack today, right? So that's a lot of different components. If you were to install all of them, right, one, that would probably take you quite a long time. But two, that's going to be a complex beast to manage, right? There are a lot of moving parts that you'll have to manage. I don't recommend installing all 54, by the way. If you want a much better, tightly-defined subset of that, it's enough to stand up your cloud, and we can help you with that with an OpenStack distribution. So OpenStack is definitely big software. Other big software applications that many of you are probably running, anyone running any of these? Most of these software applications are comprised of many components, varying between half a dozen and, how many is Cloud Foundry now? It's about 20 or so, 21. A lot of different components. So there are a lot of moving pieces, and this requires a different approach, right? Managing traditional software, with fewer pieces, you can get by with traditional config management tools. But we think you need to take a slightly different approach. Deploying this big software, the first problem we hit is provisioning hardware. And let me ask a question. How do you provision hardware? Anybody? Anyone use Foreman? Good?
PXE booting and a bunch of scripts, right? A lot more people. USB sticks or CDs, DVDs, right? So how you provision hardware is a problem. It's a problem even if you just have a dozen servers, but if you've got hundreds, then you do not want to be walking around data centers with USB sticks, obviously. And if you're managing that with your own bunch of scripts, like so many people are, the management of those scripts can be troublesome. So we recognized this problem, and we actually developed a piece of software to solve it, internally to begin with, something called MAAS, which stands for Metal as a Service. The M, followed by 'as a Service', wasn't taken, so we grabbed it with Metal as a Service. And MAAS is a system that automates bare metal physical server provisioning. It was actually originally based on Cobbler, if people know Cobbler, and then we decided that there was a better way to do things, so we rewrote it to use its own stack. But it's all based upon pure open standards, open protocols, right? So MAAS is built on a lot of the things that you're probably managing with scripts today: TFTP, PXE booting, IPMI, all of those good things, right? It allows us to essentially maintain an inventory of hardware as a readily available pool, and then act as an endpoint that we can deploy workloads to, right? So I can say very simply, go and interrogate the pool. What do you have available? I would like a server that has, I don't know, 512 gigabytes of RAM. I want it to have at least four CPUs. I need 12 cores, whatever it is. Give me a machine that looks like that. MAAS will look through its inventory: here's one that's available. Boot it up, lay down the operating system of your choice. So this is not Ubuntu-specific. You can lay down all sorts of different OSes. Of course Ubuntu, but others too.
So Windows, SUSE, RHEL, the enterprise Linux of your choice. And then, as I say, it acts as an endpoint that something else can go to and say, OK, now, Mr. Server that has Linux running on it, I want to deploy Nova. I want to deploy Horizon. I want to deploy Hadoop, or Storm, or Spark, or something else. And that's the exact process I have just talked through. So I'll skip through. Its endpoint is there specifically for that; on its own, it's just providing this pool of hardware. But the interesting piece is when you start interacting with that API. You can, of course, go to the UI, provision a server, SSH into it. But it really becomes useful when you start having it act as an endpoint. Now, you can drive this with Chef, if you like. Chef users? Good. Good shout out for Chef. So you can drive it with Chef if you like. You can, of course, drive it with Juju, which is the piece I'll be talking about in a minute, or indeed manually. The way that it's configured is that we'll have a region controller, something that acts per region. Typically, you'll have one or two region controllers in your data center for redundancy, and then cluster controllers; we typically see customers deploying a cluster controller per rack. The cluster controller is what intercepts that PXE boot, provides the IP address management, allows you to boot that machine and provide the images to it. The types of capabilities, you can read through; I'm not going to read them all out. But it's more than just bare metal provisioning, in the sense that it does give us VLAN tagging, it gives us some IP address management, it gives us a few more capabilities beyond just 'boot a server given an image'. It also gives us some interesting storage pieces, which, if we have time, we will talk about. The new release, actually just out in the last couple of weeks, adds high availability, since this needs to be an HA environment, of course. Extended DHCP and DNS config, right?
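That interrogate-the-pool conversation maps onto the MAAS REST API: a POST to the machines endpoint with op=allocate and a set of constraints. The sketch below only builds such a request rather than sending it; the endpoint shape follows the MAAS 2.0 API, but treat the constraint names as illustrative rather than authoritative, and the server URL is made up.

```python
# Sketch of the MAAS "give me a machine that looks like that" flow.
# The /machines/ op=allocate endpoint exists in the MAAS 2.0 API, but
# check your MAAS documentation for the exact constraint parameters.

def build_allocate_request(api_root, mem_gib, min_cores):
    """Return (url, form_params) for a POST asking MAAS to allocate
    a machine from the pool that matches the given constraints."""
    params = {
        "op": "allocate",
        "mem": str(mem_gib * 1024),   # MAAS takes RAM in MiB
        "cpu_count": str(min_cores),  # minimum number of cores
    }
    return api_root.rstrip("/") + "/machines/", params

url, params = build_allocate_request(
    "http://maas.example.com/MAAS/api/2.0", mem_gib=512, min_cores=12)
print(url)
print(params["mem"])
```

A follow-up POST with op=deploy (giving a distro_series, Ubuntu or one of the other supported OSes) is what actually lays the operating system down on the allocated machine.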
Who has problems with IP address management in their data centers? Yeah? Just one. Wow. And there's container-based deployment and ESXi provisioning, right? So again, this is not an Ubuntu-specific tool. It supports other environments beyond that. Once we've stood our hardware up, how do we deploy software, right? OpenStack is too big, too complex to apt-get install or rpm -i, right? You can do components of it like that, but that's not going to lay it down everywhere. So you need to take a slightly different approach. And it's not just about whether we can share config for how you install this via different packages. We're an open source community that's really built around collaboration, and we really need to have reusable open source operations, right? Operations is what makes OpenStack successful. To be able to reuse the knowledge, right? If you've deployed OpenStack successfully, how can we reuse what you've learned and done with somebody else that's doing it? We need to encapsulate that in some way. I know people do try and share public Chef scripts, et cetera. But there are so many different versions of those. There's not a pool, if you like, one source. And I know there are projects that are trying to address that. But reuse requires encapsulation. Encapsulation, we believe, requires a model, right? So big software, essentially, is a group of services, a number of different services that are connected to other services, right? So Nova needs to be connected to Keystone and a bunch of storage things. Keystone's connected to most things, of course. But our different OpenStack units are connected to other ones. In our world, we refer to those as relations, right? These are related. So who's heard of Juju? Good. Good, good, good. So Juju is an application modeling tool. And back in, Dustin will correct me, about 2010 was when we started to develop Juju.
Because as we were working with a lot of large-scale applications on public cloud, the question was how do we connect all of these pieces and manage them in the same way? How do we operate these in the same way? And that was when Juju was born, or at least conceived. So this is a GUI view of Juju that you're looking at. It displays a model; I don't know how well you can see it on the screen, but it's displaying a model of a relatively simple OpenStack implementation. I don't know whether you can make it out from the icons, but we have our Rabbit, we have a MySQL in there. There'll be a number of different Ceph services that are providing block and object storage. We've got the Nova, Horizon, Keystone pieces. And so this is a logical representation of a model. Juju also has an API and a command line client, so it's not restricted to this GUI environment. But we can draw up this environment using something that we call charms. And let me skip through, because I'm conscious of the time ticking away in front of me. Each of these services is defined within a charm. And a charm essentially declares an interface. I am a service. I'm, in this case, a MySQL service. And I have an interface. Over on the other side, I've got a Rabbit service. And these interfaces essentially are just hooks that one service can use to talk to another. We have, for example, interfaces like syslog, or db-slave: if you wanted to configure MySQL in a master-slave environment, it needs to be able to interface with other MySQL instances. Here, MySQL exposes an interface called mysql: I am a database. On the other side, Rabbit has a bunch of other interfaces, and it has one saying mysql: if I want to talk to a MySQL, what do I have to do? And that interface means that one provides MySQL, the other wants to consume it.
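To make the provides/consumes idea concrete, here is roughly what those declarations look like in a charm's metadata.yaml. This is an illustrative sketch of the mechanism being described, not copied from the real mysql and rabbitmq-server charms, whose endpoint names differ:

```yaml
# Sketch of a provider: "I am a database, I expose the mysql interface."
name: mysql
provides:
  db:
    interface: mysql
---
# Sketch of a consumer: "if I want to talk to a MySQL, what do I do?"
name: rabbitmq-server
requires:
  db:
    interface: mysql
```

Juju will only relate two endpoints whose interface names match, which is what makes "MySQL needs to be related to Rabbit" a checkable statement rather than a convention.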
And so when we have these things as charms and we deploy them, or we're modeling them in Juju, we can say MySQL needs to be related to Rabbit. And that definition, written in the charm, means that they know what to do. Here's a MySQL database; I know how to go and interface with that. And that's defined within the charm. Event handling, too. What happens when certain things occur? We want to add more of a particular unit. I want to add more MySQL to scale my environment, or more units to provide some HA and redundancy. We define that in the charm. So as you can work out by now, charms can be pretty big, complex things. Which is why sharing them, reusing them, crowdsourcing, if you like, the ideas between them is an important thing to do. This model allows us to describe complex technology. We can connect bundles of charms or services together; in fact, we call that a bundle, if I can use that name. So all of the OpenStack services that we saw previously, we define as a bundle: a number of different services, defined in charms, with the relations set up, that allows us to deploy and manage our environment. So again, this is what an OpenStack environment looks like. Hopefully, you can see the lines a little clearer there. Those interfaces between the different services are very clearly defined. So you don't need to worry about setting up your Puppet or your Chef script or whatever; all of that is designed and encapsulated in the charms. A lot of that is done by what we call the Charmers team, a dedicated team within Canonical that writes those specifically for OpenStack. But the interesting piece is the input they get from customers. We have some very demanding customers, some big telcos, some media companies, that expect certain behaviors: how they operate things, how those event handlers work. What do I do when I want to back it up? And so they will say, we need this.
And in the ideal world, they'll provide us with an updated charm that says how to do that thing. That charm goes through a series of checks and vetting, but assuming it's good, the update then becomes available to everybody. So everybody gets the goodness from that particular telco, in that case. It also allows us to model scale. When we deploy an application as a service, or when we model it using Juju, we can model it as a single unit, deploying a single instance of MySQL inside a VM or a container. But then we can add very many units. And because the relation with Rabbit is defined, as we add more units, we don't need to do anything. MySQL and Rabbit both understand: OK, here's another one, and it has the exact same behavior as the previous one. How I relate to that, how I interact with that, is already defined. How am I doing on time? Badly on time. So let's talk about operations. Operations: all of the things that we have to do managing our OpenStack environment, right? The hard things: upgrades, backups, those things. A lot of the raw materials that people use to manage their OpenStack environments are lots of scripting: Puppet, Chef, Bash, whatever, different zip files, different types of things, right? This is what people got very excited about Docker for. Is that going to be an easier way for me to build and manage that? And the kinds of operations you need to be able to run, I've talked through a lot of them, actually: backup, benchmark, integrate, apply firewall rules, upgrade, et cetera. There are many different ways that you can do that. But again, if anyone saw the latest OpenStack user survey, Juju has risen up that list and become more popular; I think it's now overtaken Chef. Because a lot of the goodness of how we perform these things is encapsulated in those charms. So how are we doing this? Actions are encoded in hooks. An action can be backup, right? In the charm, we have something called an action.
That action is backup. It defines what I do when I receive the signal: back me up, right? If I'm a database, I have to do whatever it is, a dump or backup using some particular environment. Likewise for Rabbit, right? And there are a number of actions that we can provide for a lot of the different services. So install, of course, configure, add connections, and upgrade is one of the key ones, right? Who's upgraded an OpenStack cloud? Successfully? I was watching how many hands went down. So upgrades and updates are still a big challenge. We were actually going to do a session specifically on that, but we've done that in the last two OpenStack summits, hence we took a slightly different approach here. Upgrades and updates are again handled by the charm. I could have chosen any of those OpenStack services, but this example here is specifically the dashboard, mainly because it was the simplest one, right? So Horizon is the OpenStack dashboard. If you go on jujucharms.com and search there for Horizon (in fact, it's actually called openstack-dashboard), you'll find details of that charm. You can go and drill through, and you'll see here definitions of the actions that that charm supports. This one supports openstack-upgrade, right? So to upgrade an OpenStack environment, we simply send a signal to that charm saying, run the upgrade. It will go and do what is defined within the charm, which essentially says, OK, if there's a new version in this repo, let's go and get it. Caution is required, though. A stateless service should run through pretty quickly; it should be all right. But for stateful services, or services where you only have one instance, that aren't fully HA, you're going to need to do a bit of management around that, moving services around. But we can certainly provide details and documentation for that. So that was a very quick run-through.
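For reference, the shape of that dashboard example: the charm ships an actions.yaml listing what it can do, plus an executable per action. The openstack-upgrade action really exists on the openstack-dashboard charm, but the description text here is paraphrased rather than quoted:

```yaml
# actions.yaml inside the openstack-dashboard charm (paraphrased)
openstack-upgrade:
  description: Perform an OpenStack upgrade of this unit's services.
```

Sending the signal is then a single command, something like `juju run-action openstack-dashboard/0 openstack-upgrade` in Juju 2.x (Juju 1.x spelled it `juju action do`).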
What I wanted to do, rather than continue to talk about operations and how we see them, was to actually get somebody that does this for real. So I wanted to introduce Billy and Steve from a company called Opus 2, who have been doing this for real for the last few months. Thanks. Oh, a smattering of applause. Thank you. I was not expecting that. Right, good afternoon. I'm Steve Fleming, the CTO of Opus 2 International. We started our life in 2009 as a court reporting and transcription firm in the legal industry. And in 2011, we launched a cloud-based platform that has revolutionized the way trials and legal hearings happen across the globe. We host documents securely for all parties to a dispute. We share them ahead of time, six months before people get to a courtroom. And then when it comes to the actual trials or hearings themselves, we wrap everything up from the cloud, put it onto an encrypted little laptop running Ubuntu, and shove that into the courtroom. People can then access it locally without fear of the internet going down, and people can attend remotely through that mechanism, with chat and documents and all that sort of fancy stuff. Anyway, how did we come to use Ubuntu OpenStack? Basically, there was a point during our growth where we suddenly had to take stock, assess our needs and decide, really, how are we going to scale this thing? What's the best way forward? So we reviewed things. Firstly, virtualization, obviously, is a big thing. We started by using VMware, which is decent, but it's a pretty easy business case in terms of losing those licensing costs. So that's helpful. Storage was an interesting one for us. We've got needs here, particularly in America, where we're hosting and streaming lots of video content that supports these legal hearings. So Ceph and Swift and all that sort of stuff were, again, quite appealing options for us.
The network design was another really interesting point, where basically our application had evolved and our client base was getting larger and larger. So what we wanted to do was redesign our network to centralize some of the heavy lifting that was occurring on some of our machines. We were doing things like document OCRing and antivirus scanning on upload. So we decided to create a worker server farm and use the OpenStack technology to build that. Hosting is an interesting thing. We work with a boutique cloud hosting provider called Stratagen, who have been very cool, very quick and responsive on this stuff since day one for us. So again, they were massively influential in helping us get to this point and in moving away from VMware. But the main reason we're doing all of this is because of this guy, because he told me that's what we have to do. So I'll just hand over to Billy to explain some of the stuff we've done. Thank you. So I'm the head of R&D at Opus 2. One of the reasons we wanted to move to OpenStack was to bring our DevOps and SysOps and our deployment and scalability all under one piece of software, and OpenStack allowed us to do this with its APIs. One of our examples is a worker server farm, where we transcode video and do OCRing. Using Heat and Ceilometer, we can actually look at capacity, and when it goes up to capacity, we can automatically deploy more machines. That's fantastic for us. We can let clients upload all their video, OCR thousands of documents, and we don't have to worry. One of the other interesting things is we're looking at hybrid clouds. We have a lot of clients that are still scared of the internet. So we're looking at products like OpenStack Magnum to be able to move a container from our cloud into their behind-the-firewall instance and then back again, which really helps us to scale and be more flexible. So that's, in a nutshell, some of the reasons. I can hand back to Steve and he can let you know about the future.
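The capacity-driven scaling described above is the standard Heat-plus-Ceilometer pattern: an autoscaling group of workers, a scaling policy, and an alarm that triggers the policy when a meter crosses a threshold. The resource types below are real Heat/Ceilometer ones, but the worker image, flavor, meter and thresholds are invented for the sketch; this is not Opus 2's actual template:

```yaml
heat_template_version: 2014-10-16
resources:
  worker_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          image: worker-image   # hypothetical transcoding/OCR worker image
          flavor: m1.large
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: worker_group}
      scaling_adjustment: 1
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 600
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_up_policy, alarm_url]}
```

When average CPU across the workers stays above the threshold, the alarm fires the policy, Heat adds another worker, and uploads keep flowing with nobody paged.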
Yeah, thanks. So basically, we're trying to pioneer OpenStack in the legal industry, which, yeah, we get it, we're like five years behind everyone else. We're doing our best. But we want to be market leaders in that space. If you want to know anything about what we're doing in more detail, come find us at the conference, have a chat with us, and so on. Some of the stuff that we're doing that's quite cool is we're consulting with the Abu Dhabi court system to see if we can get a whole OpenStack thing going with them. And yeah, basically, we're committed to OpenStack and here we are. So thanks. Thank you, Steve and Billy. So the next step is that hopefully, if nothing else, we've piqued your interest, right? If you're not using Juju today, if you're not using any of this tooling today, we've piqued your interest. And so the question, maybe, is how can I get going, right? How can I try it out? What are the requirements? What's the best way to get started? Who uses DevStack? Anyone using DevStack? A few? Who else is running? Anybody else running OpenStack on their laptop? A few? Okay, so DevStack's great, but we wanted to show you another way that allows you to build a full OpenStack cloud on your laptop using containers. That's the container box checked, right? Using containers, right? To build a real cloud and start to exercise some of these tools. So to do that, since I'm not smart enough to talk through it in detail, we've got one of the lead engineers on this technology, a gentleman called Adam Stokes, who's gonna walk you through it. Adam? Hello, as Mark said, I am one of the lead developers on an application we wrote, conjure-up. And what conjure-up is, it's a thin layer on top of Juju, MAAS and LXD, and it's completely bundle driven. So a Juju bundle is the layout of your model, or your application model.
And so what conjure-up does is it allows you to take your bundle and any of your administrative tasks, such as, I know Mark talked about actions, say you needed to do some backups or some upgrades, and package them all into a deb package. And so, with tools that you're used to, you're able to encapsulate and deploy these application stacks using OpenStack and any additional software that you want to integrate with your technology stack. And so conjure-up is the best way to install OpenStack. It is capable of deploying to all-LXD on a single machine, on your laptop. So say if you have 16 gigs of RAM and two cores, you are able to run and deploy a full OpenStack using nova-lxd for your instances. From a development standpoint, I know some of you use DevStack. The good thing about our bundles and our charms is that we can easily switch between stable releases and what's coming from upstream. So with simple config flag changes, you can pull in the latest directly from OpenStack's Git repo and begin testing the next iteration of your software stack. Like I said, conjure-up packages are just regular Debian packages. They contain your bundles and then simple shell scripts, or Python, or whatever language you are comfortable with. conjure-up has a few events that get fired throughout the deployment process. You have your pre-deploy, so you can do things like set up your LXD profile. For example, for nova-lxd to work, we need to expose additional NICs to those containers. So in our pre-initialization, we alter those profiles so that those NICs are available and Neutron will work. There's a post-deploy, where after the deployment is complete, you can do things like set up backups through Juju actions, or create databases through MySQL. And like I said, it's all just shell scripts. So you run through your juju set commands and juju config alterations.
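Since a step can be plain Python too, here is a self-contained sketch of the idea behind a post-deploy step: compose the Juju commands that would, say, kick off backup actions once the deployment settles. The layout and function names are hypothetical, not conjure-up's actual step API, and the commands are printed rather than executed so the sketch stays runnable on its own; a real step would shell them out with subprocess.

```python
# Hypothetical conjure-up post-deploy step: build the 'juju run-action'
# invocations we would fire once the model has settled. A charm-provided
# 'backup' action is assumed here for illustration.

def post_deploy_commands(backup_units):
    """Compose one 'juju run-action <unit> backup' call per unit."""
    cmds = []
    for unit in backup_units:
        cmds.append(["juju", "run-action", unit, "backup"])
    return cmds

for cmd in post_deploy_commands(["mysql/0", "rabbitmq-server/0"]):
    print(" ".join(cmd))
```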
And then lastly, for testing different software technologies: below the line, you can easily integrate SDNs into your bundle, and above the line you can integrate things like Trove or Ceilometer during the actual deployment phase. So what I'm gonna show you now is a typical setup and how you would get going. Everything is console-based, kind of old school that way; I prefer that. To get started, you would run conjure-up openstack. And you have three bundles. Now, these bundles are defined by me. I created this package; these are the bundles that I vetted, that I know work and display the technology stack that I want. For you guys, you would just create your own packages and deploy them within your company however you see fit, with your own apt repository or an external one. So I'm gonna pick nova-lxd. This uses Juju as the back end, and I already have a couple of models defined, one for LXD; I'm gonna pick that. And so what you see here, this is the bundle. These charms here are coming from a bundle that was pulled from the charm store. You could get this bundle directly from jujucharms.com if you wanted to take a look. So it defines several services, things like that. What I'm gonna show you is adding an additional charm to your existing bundle if you wanted to expand on that. For example, we're gonna add in glance-simplestreams-sync. What this does is pull the charm into the bundle, and, as Mark spoke about relation setting, the relations that we can alter, we kind of guide you through. Another good thing about this is we do guide you, so you can't really mess up when you're doing your deployments. We expose what relations are able to happen between Glance and any other services that are in your bundle. So Keystone and RabbitMQ are required for this particular charm to work. We just highlight that and it's good to go.
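In bundle terms, the addition being guided through looks roughly like this. The charm really is called glance-simplestreams-sync, but the relation endpoint names below are illustrative, so check the charm's page on jujucharms.com for the exact ones:

```yaml
services:
  glance-simplestreams-sync:
    charm: cs:trusty/glance-simplestreams-sync
    num_units: 1
relations:
  - ["glance-simplestreams-sync:identity-service", "keystone:identity-service"]
  - ["glance-simplestreams-sync:amqp", "rabbitmq-server:amqp"]
```

The guided UI is effectively writing this fragment for you, which is why it can refuse relations whose interfaces don't match.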
And so when your bundle is good, you just hit commit, it gives you your deploy summary, and off it goes. And that would be basically what the output is. All it is is Juju on the back end; this is just a simple glue layer on top of Juju and MAAS. And that would be what an OpenStack deployment looks like through Juju's eyes. And that's it. It's deployed, it's good to go. You can go to Horizon. When everything's done, you get a little status at the bottom showing you that you can get to Horizon through the OpenStack dashboard. Okay, questions? I do want to give you the website. You can browse that. We're gonna have documentation for developers for packaging their deployments. And let's see, yep, that's it. Thank you. Great, thank you. So I mean, hopefully that's given you a little flavor of some of the tools that we have customers using in production today to manage their operations. Juju isn't just about deploying an environment, deploying an OpenStack environment; it's about managing lifecycle operations, so upgrading, backing up, updating are all actions, encapsulated within charms, that we use to manage that environment. And the conjure-up piece that Adam was just showing you there is a way that you can get going with that: how you can use those bundles, use those charms to deploy OpenStack, play with it yourself. Go and check the website and have a play with it yourself, to get going and get experience with it. What we really want is for people to start to give us feedback on the charms. So if you can give us any updates, 'this would be better done like this', we may disagree, but we'd love to get that input, right? Or others may disagree. So we've got a couple of minutes, if anyone has any questions? We can probably do a microphone thing, but if you're loud enough, you can shout out. Hi, Fatih from Ericsson. I've had a chance to use Juju, MAAS and Landscape. That's awesome.
What I see is Juju keeps its destination environments isolated: MAAS, VMware, OpenStack, AWS. If you want to scale from a private cloud to the public, to implement hybrid, it seems Juju cannot do it, because Juju keeps those destination environments separate. So is there a way to combine these two, for scaling out and scaling in? Thank you. That's a great question. So just to repeat the question, or summarize it, in case everyone didn't catch it: Juju models an environment and then deploys it to a particular endpoint, whether that is bare metal or other environments (and thank you for mentioning them), OpenStack, of course, to deploy applications into OpenStack, or AWS, Google, Azure, a whole bunch of other things. But those are discrete environments. Can I model an application that spans those environments? So a component on AWS, for example, and then components in-house on an on-premise OpenStack cloud. The answer is not today, right. That's something that we call a cross-environment relation, and we're working really hard on developing that, so watch this space. Good. Any other questions? Yes, sir? Yeah? Right, so I've got a double microphone. That's probably bad. So yes, for stateful services like a database, you need to be really careful about how you upgrade them: specifically, one, how you define the upgrade in the charm, and two, whether you've got multiple instances. By default in our charms, if you deploy a bundle of an OpenStack environment, we use Percona XtraDB Cluster, so we can upgrade those individually and ensure that they complete, without losing service. If you only have one MySQL instance, a lot of caution is required, so you would need to take that offline and manage it in a sensible way. And yes, we can do a lot of that work in the charm, but you're gonna need to hold its hand as you go through, right?
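The 'upgrade them individually, ensure they complete, without losing service' approach is essentially a rolling upgrade: take one unit at a time, run its upgrade action, and wait for it to report healthy before touching the next. A purely illustrative planner, with the health check as a stand-in; the action name matches the charms' openstack-upgrade, but nothing else here is taken from the real charm code:

```python
# Illustrative rolling-upgrade plan for clustered, stateful units.
# Each unit is upgraded on its own and must report healthy before
# the next one starts, so the cluster never loses service overall.

def rolling_upgrade_plan(units, action="openstack-upgrade"):
    """Yield (command, wait-condition) pairs, one unit at a time."""
    for unit in units:
        yield (["juju", "run-action", unit, action],
               "wait until %s reports active/idle" % unit)

plan = list(rolling_upgrade_plan(["mysql/0", "mysql/1", "mysql/2"]))
for cmd, wait in plan:
    print(" ".join(cmd), "->", wait)
```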
Oh, sorry, my colleague at the front is saying, he's the LXD guy, right? So one of the smart ways you can restore very, very quickly: if you're doing this inside Linux containers, and by default, for example, conjure-up will place things within Linux containers, it would be very smart just to do a snapshot before you run the upgrade. And then, if you saw the demo in the session just before lunch, do a restore on it. If you base that on ZFS, that's seconds. So if it fails, you can restore within two or three seconds, right? Good, we're pretty much out of time. So if there are any other questions, please grab me in the hallway. Otherwise, thank you very much.