Good morning. My name is Marco Ceppi, I work at Canonical. I'm a DevOps engineer on a Canonical team, and my focus has really been about how we can make operations much more reusable, shareable, and just tractable amongst organizations. So I want to take you all through a journey of how we approach this idea of making operations truly open source, shareable, and reshareable amongst people. Before I do that, I want to talk briefly about who we are. We're Canonical, the company behind Ubuntu. As you can see, Ubuntu has been growing in popularity steadily since we released it almost 12 years ago. Ubuntu is a free, open-source Linux distribution that anyone is welcome to download, try, and run today. Today, we see about 34% of servers running Ubuntu, and that is a genuinely large market share for Linux servers. In fact, the majority of people running clouds, whether you're running servers or clouds, are running on top of Ubuntu. And we're the choice for developers. I want to highlight just a couple of stats: 70% of all instances launched on Amazon are Ubuntu images, 80% of Azure's Linux deployments are Ubuntu, and 65% of large OpenStack deployments are built on top of Ubuntu. And this has given us a lot of insight into how people run operations. So what I want to talk about is that when we meet with people running and using Ubuntu for their clouds, we come across a common problem. The story plays out in every organization, from startups to the well-established and dominant players in their industries today. Everyone that's running software, and running software at scale, is encountering these same problems. And the problem really is around how we manage the increasing complexity of the applications and services that are spawning today.
If you look at the myriad of applications that are available for people to evaluate and deploy today, start with OpenStack, which, as you all may know, is not necessarily the simplest and most straightforward piece of software. You don't just go and put it on a machine. It's actually multiple components spanned across multiple machines that all come together to produce an interconnected, single picture of how you provide a cloud service. In the same way, all of these other tools, whether it's Mesos, Docker Engine and Swarm, Kubernetes, or big data, are providing a single picture of software which is actually built from many smaller, complex components. And all of these components lend themselves to this idea of cloud-native architecture: the idea that it's not a single monolithic application on one or a few servers, but this huge amount of scale that spans the breadth of multiple machines. And all of this increase in scale and size is due mostly to the fact that software today is open source. It's very easy to get software today; in fact, there is an abundance of it. Compare that to 10, 15, 20 years ago, when software was really the hard thing to get. You either had to go and purchase large licenses for software, or you spun up a development team in-house to produce the software you needed to fill your needs. But because of a lot of what we see in the open source world today, there is more and more software that you can just go and try and evaluate. In fact, there's more software than you'll ever be able to evaluate in a given time frame now, more than there was last year, and every year that number grows. If you look at the container space alone, the number of container orchestrators that have spun up in the last year is quite large. There are probably about six or seven applications that do that today.
Next year, I guarantee it'll double. And it'll just keep growing in complexity and in numbers. What we're seeing is that because of this explosion of open source software, because it's so much more readily available, the cost of actually acquiring software is really low. It's not that difficult anymore. In fact, I don't imagine most people pay for software licenses today outside of a few longstanding or legacy application stacks that you still have to manage. But what really is becoming the most expensive thing in an organization, in both time and manpower, is operations. It's not so much knowing where to get that software and paying for it; it's knowing how to install, manage, scale, and operate that software. Past just day one: day two, day three, what happens weeks later, months later? How do you upgrade? How do you scale? How do you manage fault tolerance? All of those are things you need to either grow expertise for in-house or hire services to help manage. And that's where the real cost is today in running software: not acquiring it, but operating it. So this is really what I want to focus my talk on: how do we stop that curve from accelerating to the point where it's prohibitive to even evaluate software? If there are 20 solutions that fix your problem, you only get to pick one and hope it's the right one and run with it, because that's the expertise you have, or that's the choice you made, and it's too expensive for you to try something else. You just have to hope that's the right software solution for you, and if it's not, it's very expensive to step back and try another evaluation.
So this is really where we're seeing this shift: in all these companies we're speaking to today, those running Ubuntu and those that aren't, everyone is having the same problem. The scarcity in how they do software is not in acquiring the software, but in how they operate it. And I want to talk about how we can potentially resolve some of these things. The way we see it, the only way to truly make software operable in today's world, where it is so abundant, is to be able to model how software operates. The idea behind modeling is this: if we can model how you do installation, configuration, upgrades, and management, in the same way we model code, the way we do unit testing and quality checking, if we can model it as software-defined models, we have a way to actually make software in this day and age much more tractable. So I want to walk through a pretty straightforward example. This is MySQL, and more importantly, this is Oracle's MySQL. It's an open source database engine, quite popular in this day and age. Most software stacks are built on top of either MySQL or something MySQL-like. But for the sake of this argument, it's a database. It's just a database server. This could be MySQL Server, this could be Postgres, this could even be a NoSQL database like Mongo. It's just a very common component that you find in most stacks. Most software stacks today need a database to store their information. Even in OpenStack, you need a database to store the back-end information for Keystone. And you need things like message queues. All of these are common components that aren't unique to OpenStack; RabbitMQ, or any MQ service, is not part of the OpenStack tree.
It's just a supporting service that's required to run it. And when we see people modeling today, we've seen a lot of these companies model software by going to a whiteboard and drawing out: this is my architecture. So let's say you were running some big data service where you've got a bunch of Hadoop nodes all set up to do big data processing, and you're using Hive to add a SQL-like query interface. That requires a MySQL database back-end. Let's take another example of a model, something like a very simple blog app. If you're running Ghost for your blog, you need a database service in the back-end to store all that information. Or how about OpenStack itself? If you look at all the components needed to get a bare-minimum OpenStack where you can launch instances, you need things like Cinder, Nova, Keystone, Horizon, and Glance. All of those need to be connected together, and they all rely on a database back-end to store their information. So when we actually zoom in, when we talk about modeling, what does it mean to model a database, or to model any one of these components on the screen? If we take a dive into this, we see that modeling really means you need to model things beyond just the installation. We talked about modeling software, modeling big software. We're talking about how you not just install an application, but how you configure it and how you mutate that configuration over time. If two weeks later you need to change a configuration option, is that a redeploy of something? Or is that just changing a setting and running through some updated configuration? And then how do you do things like upgrades? How do you do clustering? And more importantly, how do you do integration? This is a MySQL database. I know that in order to integrate it with anything else, I need to provide the credentials to that application.
I need to be able to say: here's my username, my password, my database schema, here's my host, here's my port, here's all the information you need to integrate with me. And every component has a unique set of information that's required for integration. Then there are other things that you need to model, like everyday operational actions, the kinds of things that are required for you to make sure the service is running, and running well and healthy. For database services, it's usually things like: how do I do a backup? How do I do a restore? How do I check the integrity of my replicas? For other components, it could be something like: I need to drain Nova of all its workloads so I can take it offline and perform maintenance; I need to do an evacuation of the host. There are a number of everyday operations that we need to perform in order to ensure the quality of our service over time. And these are all past that day-zero installation. These are all things that we have to do every day, and capabilities that we have to grow in our organizations. We need to learn what it means to do these things the proper way, or the best way, or the right way for us. What's the best way to evacuate a host? What's the best way to perform backups? What's the best way to do upgrades for a database service so that we don't get any downtime? All of these are important pieces of information. And today, every organization is figuring that out for themselves. Now, of course there are resources like blogs, but not every piece of software has a best-practice blog you can go to, or a community you can tap into, to ask: what's the right way to do that? Instead, you have to figure it out through trial and error and a lot of pain, or you have to hire people who have already done that trial and error and felt that pain.
So what we want to aim for is: how do we encapsulate all this expertise? Because at the end of the day you're running these things, you're running these operational executions, and they're pretty common patterns. The installation process for a database server is, well, not the same, but very similar to what you do for any one of these other components on the screen. Horizon is installing a set of packages, or installing from upstream, or compiling code. And that's the same for most other components we've found. So what I want to talk about is how we can actually start leveraging and resharing operations. When it comes down to my day-to-day activities, I don't want to be an expert in operating a database server. I'm already an expert in having to manage and install OpenStack, or even just a single component of OpenStack. And OpenStack is just one example of a workload. What happens when you start operating software on top of your OpenStack? Or on top of OpenStack and on top of a public cloud, and on top of another public cloud, and on top of bare metal? All of these things require expertise in each of those components, and we want to reduce the amount of friction and overhead it takes to do that. So I want to skip ahead and just show what it's like to operate software today: how we operate large-scale software, and not just the "I installed some stuff and configured it and I'm good to go" part, but how I actually manage that software over time. So I've got an example here. This is a Kubernetes deployment. Kubernetes, for those who don't know, is a container orchestration platform. It's a way for you to do scheduling and processing of Docker and Docker-like containers. This is the workload that's running right now on Amazon. And what we have is a model of what it means to install not just Kubernetes, but how to do integrations with other things.
How do I integrate Kubernetes with Ceph so I can get persistent disk volumes? How do I integrate Kubernetes with monitoring software? How do I integrate all of these individual components together to produce a holistic operations dashboard? So I want to show how I set this up real quickly, walk through that process, and show examples of how we can actually reshare and reuse operations code without having to reinvent it every single time. And the entire point of this demonstration is that all this stuff is available off the shelf. For instance, we're using Elasticsearch, Beats, and Kibana, the Elastic Stack, for monitoring, metric collection, and log collection. And that's the same Elasticsearch, Beats, and Kibana that's being reused by every other deployment that uses this. The same goes for Ceph: every production deployment of Ceph we have today uses the same encapsulated operational knowledge that I'm using here, and anyone can reuse it. The idea is that there are lots of people in the room who are really good experts in one or a few things, but there's rarely an expert in everything. And if we can encapsulate the knowledge that we have today and reuse it across the board, people can actually get started quicker and evaluate software at a much faster pace. So we use Juju as a way to provide a modeling language. What Juju essentially does is take care of how I get placement on machines. Juju is a free and open-source software tool that allows you to say things like: here's my OpenStack, go provision machines and put workloads on them; here's a public cloud, go get me that; here's some bare metal, go put these workloads there. So we use Juju to define our placement of software onto machines, and Juju uses charms as a way to encapsulate the operational expertise required to set up these components.
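In command form, the placement story Juju handles looks roughly like this. This is a minimal sketch assuming Juju 2.x with cloud credentials already added; the cloud, controller, and model names are placeholders:

```shell
# Point Juju at a cloud and stand up a controller there.
# "aws" could equally be google, azure, an OpenStack, or a MAAS.
juju bootstrap aws demo-controller

# Models are namespaces for a set of related workloads.
juju add-model demo

# Deploying a charm provisions a machine on that cloud and runs the
# charm's encapsulated install and configuration logic on it.
juju deploy mysql
```

The point is that the charm never deals with cloud credentials or machine provisioning; Juju owns that layer, which is what makes the same charm reusable across clouds.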
So I'm just going to create a new model, and I'm going to run through the same thing I ran through earlier. I'm just going to deploy Ceph. What this is going to do is gather all the things that are required to deploy a Ceph cluster. This is installing everything for Ceph OSDs and Ceph monitors, and making sure it's set up with the minimal requirements: three nodes for Ceph OSD, three nodes for Ceph Mon. And it's adding in some additional monitoring and dashboarding capabilities so I can see the health of my cluster easily. Then I'm just going to deploy a core Kubernetes. This gets me a master and a worker to run workloads on, as well as a PKI for doing TLS encryption, so there's SSL encryption across the entire stack, and Flannel for an SDN. And then I'm going to deploy some other things like Elasticsearch, Kibana, and Beats. So what we've got is a whole bunch of software being installed and managed right now on Amazon. But this could have been on my own private OpenStack. This could have been on bare metal, or on any other cloud. What Juju is doing for us is grabbing machines, allocating resources, and putting these workloads on them as I've described. And these workloads have code encapsulated to do all the operational knowledge. Now, this is very similar to day-zero stuff: I'm just installing software and configuring it, the initial configuration. While this sets up, I'm going to switch back to this, which is the same model, but already running. It takes about five to ten minutes, depending on how fast the cloud is, to spin up these machines and get workloads running on them. What I'm going to show is what happens after you get things running. So this is a command-line version of the pretty GUI that I showed earlier. If we switch over to Hello OpenStack, we'll see a whole bunch of these circles just kind of sitting all over the page.
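The deploy steps I just ran look roughly like this on the command line. This is a sketch; the bundle and charm names such as `kubernetes-core` and `filebeat` reflect the charm store of that era and should be treated as illustrative:

```shell
# Ceph: three monitors and three OSDs, a minimal sensible cluster.
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd
juju add-relation ceph-osd ceph-mon

# Kubernetes: master, worker, etcd, a PKI for TLS, Flannel for the SDN.
juju deploy kubernetes-core

# The Elastic Stack for log and metric collection.
juju deploy elasticsearch
juju deploy kibana
juju deploy filebeat
```

Each `juju deploy` is asynchronous: Juju queues the machine requests and the charms run their install hooks as machines come up.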
We can fix this up in a second and redesign it. But I'm going to go back to the already-deployed, pre-baked model. So I've just replayed exactly what we've done here, but from a little earlier today. And what we get now is the ability to actually start managing this over time. We do that in a number of ways. The first is that we can manipulate the configuration of things. If something changes in your deployment where you need to tweak and tune an item, whether it's scaling up the amount of memory you want to use for caching in your database server, or tweaking the volumes you want Ceph OSD to use for creating Ceph block devices, any number of configuration options can be tweaked and tuned using Juju. So let's take a look at a couple of example configuration options that we can manipulate and modify. I'm going to bring up the Kubernetes worker itself, which gives you the ability to tweak and tune things like the Docker opts you want to use when you launch Docker containers. If you're doing things like using insecure registries or other parameters, it allows you to supply them and change them over time. Things like defining ingress values, where to do proxy values, whether you want to install from upstream instead of the local Docker: all of these you can manipulate over time. And each one of these charms, each one of these circles, enumerates a set of options you can actually manipulate. This allows you, as you grow your cluster over time, to scale components, manipulate components, and change them. And all these operations are encapsulated and distilled in a way that makes them reusable, in that each one of these circles is just enough to define the characteristics required to install and set up that single component. That component then defines a set of integrations. These lines are integrations, the components that you can tie into.
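Tweaking those options from the command line is a one-liner. A sketch, assuming Juju 2.x; the `docker-opts` option and its value here are illustrative, taken from the kubernetes-worker charm of that era:

```shell
# Show every configuration option the charm exposes, with current values.
juju config kubernetes-worker

# Change an option in place; the charm reacts and reconfigures the
# running workload, no redeploy needed.
juju config kubernetes-worker docker-opts="--insecure-registry=10.0.0.2:5000"
```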
And because we delineate that in a single circle, a single charm, it becomes truly reusable. So all these pieces are individual, reusable, scalable chunks of code that just define: this is how I install, manage, configure, change, and operate this piece of software. At any time we can switch these things out. If you're not a big fan of Beats and Elasticsearch, why not Nagios? Why not Zabbix? Why not Prometheus? Because each of these things defines its own set of criteria for how to install, manage, and integrate, you can actually start interchanging, swapping, and removing components. I didn't have to write a single line of code to get an entire Kubernetes cluster running. I didn't have to download any kind of predefined scripts, weed through them, and make changes for my environment. Juju has done the job of encapsulating and distilling what it means to get that provider-specific information: where you get your cloud credentials from, how you boot up machines in that infrastructure. All these charms are concerned with is: once I have a machine, how do I install and manage everything? They also give you the ability to run operational actions. So, for example, when you're managing a Ceph cluster, there are quite a lot of things that you need to take into consideration. Ceph is a really robust piece of software. It's great for remote block devices, but when you look at the actual operations for it, there are quite a lot of things that you may need to run at any given time during the life cycle of a Ceph cluster. And these aren't necessarily configuration options. These are just one-off operational tasks: things like creating caching tiers, managing erasure pools, creating new RBD pools. All of these are one-off tasks that still get modeled and distilled down to this charm level.
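Those one-off Ceph tasks surface as charm actions. A sketch; the action name and parameter below are illustrative rather than a guaranteed part of the ceph-mon charm's action list:

```shell
# Enumerate the one-off operational tasks the charm defines.
juju actions ceph-mon

# Run one against a specific unit, passing parameters inline.
juju run-action ceph-mon/0 create-pool name=my-rbd-pool
```

The action runs through Juju, so it is logged and observable like every other operation in the model.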
So charms themselves, beyond things like installation and configuration, also define day-to-day operational tasks that you can execute in a repeatable, reliable, robust, and observable fashion. And that's the advantage of having this idea of open-source operations. Every single company that uses these charms feeds back into them: this is how we do erasure pools, this is how we manage these actions, these are things that we find ourselves doing as daily tasks. All of that funnels back in, just like an open-source software project, into the operations for that software, and produces a very reusable, reliable, and robust means of operating these pieces of software. So let's see how our deployment here is looking. There are still a few pieces being spun up. If I untangle these, because I kind of mashed them on top of each other, we see that we have a Ceph cluster that's running, we've got Kubernetes that's just finishing up its installation, and we've got a bunch of Elasticsearch stuff hanging around. And what we can do, in order to finish and compare these pictures, since I didn't place them just right and a lot of the integrations haven't been completed yet, is use Juju to define these pieces of integration. So in our model that we're just spinning up, I know things like Kibana need to be connected to a certain piece of software; in this case, it needs to be connected to Elasticsearch. Elasticsearch needs to be connected to Beats. Beats needs to be connected to all the workloads I want to monitor. So I want to monitor etcd, I want to monitor this, I want to monitor this, and sure, I want to monitor Ceph as well. And I can draw these integration lines because I've declared how these integrations work. And now, oops, this one didn't get drawn.
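Drawing those lines in the GUI is equivalent to declaring relations on the command line. A sketch; the exact endpoint names depend on each charm's metadata, so treat these as illustrative:

```shell
# Wire the monitoring stack together...
juju add-relation kibana elasticsearch
juju add-relation filebeat elasticsearch

# ...and point the collectors at the workloads to watch.
juju add-relation filebeat kubernetes-worker
```

Because each charm declares what its endpoints provide and require, Juju can exchange the credentials and addresses between them automatically once the relation exists.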
And now what I get is the same picture I had before, where I have software deployed, integrated, and running, and I can operate it just as I would any other workload. So, coming back to slides: the idea we have is that, just like open-source software, operations too should actually be shared amongst organizations. Because at the end of the day, how you operate and manage your infrastructure doesn't actually help you win. It's not running your OpenStack better than a competitor that makes you win. It's the services, and how you manage and provide service to your customers, that really makes you win. And by sharing infrastructure and operations amongst companies, you're able to accelerate your innovation and compete where it really matters: at the product level, not the infrastructure level. Most everyone here, I imagine, is running some piece of open-source software today. And that's software that you've probably contributed to, or your competitors have contributed to, or organizations that you've never even interacted with or competed with at any level have contributed to. By sharing our knowledge of how these base components are built, whether it's a database service or a cloud like OpenStack, and, in the same vein, by sharing our operations, how you install, manage, configure, and operate them, we help to reduce the overhead and cost that it takes to do operations today: ballooning infrastructure sizes, the duration it takes to stand things up. If it takes you more than a couple of days to stand up an OpenStack, that's time invested, and ultimately wasted, in trying to get a product running for evaluation. And if you're using OpenStack today, ask yourselves: how long does it take to upgrade my OpenStack? Am I running the latest version? Am I on Newton? Am I on Mitaka?
Are we maybe still on an even older version? By distilling this idea of operations, and by using things like the OpenStack charms that we have, you're able to do things like day-zero release upgrades, upgrades without any downtime, and management of a cluster to produce a robust and reliable HA production deployment. And it's because we've spent time with our customers, and because our customers and the users of our OpenStack charms feed back their experience and operational knowledge, that we're able to produce such a robust, quick, and reliable way to do big software: things like OpenStack, big data, container infrastructure. It's that idea of sharing that really helps to promote the reuse and reusability of operations. Here are just a couple of examples of things we have today that have been produced as charms, where you can go today, get operational expertise, and experiment and play. Now, I'm not saying that you don't need to know how to operate these pieces of software. What I'm saying is that this is a really quick way to evaluate which piece of software you really want to go deep on. Maybe you want to evaluate all the different types of SQL databases for your application. If it takes you a couple of days to stand each one up, instead of a couple of minutes, that's a lot of time spent gaining knowledge in an arena that you may not even be playing in. And ultimately, this means it's not going to get any simpler. If you think OpenStack is the most complex piece of software your company is going to deploy, I encourage you to think about what's coming in the next five years: going beyond container infrastructure to things like serverless and high-performance computing. All of these will lend themselves to larger and more complex pieces of software. The future is only going to be more complex, and only more problematic for us to operate software in.
And if we don't start sharing operations and reusing operations, we will have a hard time innovating. And that's what Juju does. I'll take any questions. I think there are microphones over there; if you just want to queue up at a microphone so people can hear your questions. Hey. "Is there a way to try it freely, without installing everything ourselves?" Yes, Juju itself is all free open source software. Everything I've shown you today is free and open source. And there are much cheaper ways than paying for a cloud to try this software out. For instance, if you have a machine running a modern version of Ubuntu or Linux, you can use Linux containers to model a deployment you would normally put on a cloud, but on your laptop. Actually, I'm just going to kick that off in the background; that's a good question. So there are definitely cheap and free ways to try these things out. I'll create a new controller on LXD. This will basically spin up a bunch of lightweight containers, machine containers, LXD machines. For those of you not familiar with LXD or LXC, it's Linux containers: the same primitives you'd expect from container technology, super lightweight, namespaced, no virtualization overhead, but you still have isolation and security. So I'm basically going to model a bunch of really lightweight VMs on my machine. Well, it's going to download an update first, but this lets me play with Juju and model topologies without spending a dime, either on resources in my organization, on my cloud, or in the public cloud. So that's a great question. Juju works against a number of machines, including setups like a couple of VMs running in Vagrant or VMware or somewhere else; Juju plugs in and works against those as well. But this is probably the fastest way to get started, yeah. Any other questions? Anything anyone's interested in seeing?
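Bootstrapping that local LXD cloud follows the same workflow as any other cloud. A sketch assuming Juju 2.x with LXD installed; the controller and model names are placeholders:

```shell
# Every "machine" Juju creates will be a lightweight system container
# on this laptop rather than a cloud instance.
juju bootstrap localhost lxd-test

# From here on, everything works exactly as it would on a public cloud.
juju add-model local-demo
juju deploy mysql
```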
Operations-wise, ask whether or not we have it; see it deployed, see it manipulated, maybe see it broken. Sorry? Backup and restore? The question is around backup and restore. It's a great question. I'm going to leave this running in this tab and we'll go over here. So, not everything has a backup and restore. For instance, I don't think Keystone has a backup and restore, but things like database services typically do. And that's all defined in the charm. So let me go, that's probably not the name of it. Percona XtraDB Cluster is a bit more robust of a MySQL data store; it's what most people running OpenStack in production, that we know of, are using instead of Oracle's MySQL. And it includes a bunch of actions: things like how to do backups, and how to pause and resume services for maintenance on clusters. Let me deploy this so I can show you what that looks like. It'll take a few seconds to spin up, but because it's asynchronous, we can still play with it before it's fully up. So I can run juju actions for the percona-cluster. These are the things I just showed you on that webpage, distilled down to Juju. We have backup, and we have pausing and resuming the MySQL service for things like maintenance and whatnot. Because Percona has a heartbeat measure, if you pause the MySQL service, the cluster will know that that machine is down and not to use it for reads and writes. As far as doing a backup, essentially what you declare is a set of options. You say: I want to back up this database, juju run-action on percona-cluster, and I want to do a backup, and I want compression, and I want to change where it outputs to. Let's say I want to put this in /srv. And whether I want an incremental backup or not. That's all I need to do. Now, granted, the Percona cluster is not running yet; it's still probably in the process of being set up.
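What I just typed amounts to this. A sketch; the backup parameters such as `basedir` and `compress` follow the percona-cluster charm of that era, so treat the names as illustrative:

```shell
# List the operational actions the charm defines.
juju actions percona-cluster

# Queue a compressed, non-incremental backup into /srv on the unit.
juju run-action percona-cluster/0 backup basedir=/srv compress=true incremental=false

# The action stays pending until the workload is idle; check on it.
juju show-action-status
```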
Yeah, so I'm still waiting for a machine to come up, but I can still queue an action to do that. Because we're modeling these things, Juju knows the state of everything being modeled. In this case, Juju says: I've queued this action, and if we check its action status, we'll see it's still pending. It hasn't been able to run it yet, and it'll remain pending until Juju has determined that the workload is in an idle and ready state. Once it's idle and ready, that action will be dequeued and executed, and we'll get a backup in /srv. Now, granted, there's nothing to back up because I have no databases in there, but if we created the relations, we could deploy a blogging service, write a couple of blog posts in the next five minutes, and then we'd have a database, we could back it up, and the backups would be there. From there, it's a matter of: do I scp it down to a local machine? Do I push it to cold storage? All of that can be modeled in a separate charm: how do I remove old backups and do file syncing to an off-site location? For now, it's on that local machine, and I can either grab it directly through Juju, or I could use another measure, a charm that says every file in this directory gets pushed somewhere else. So that's just an example of running a backup action for a SQL database, and each charm delineates its own set of actions as it sees fit, depending on what the common operations are for that service. Great question. Any other questions? We have a few more minutes; I'm happy to show off a couple more things. Let's see how we're doing with LXD. It looks like we have a deployment. If I run an lxc list, I've got a couple of other LXD containers I created earlier, but here's the one that Juju created. It's basically a super-light VM. I can SSH to it, so it's here on my local network. Well, except I reuse my IP addresses a lot. I can SSH to it. It's a Juju machine. I can run juju status. I have nothing in my payload here.
I can do a juju deploy of percona-cluster here, and what will happen is Juju will put this charm inside of my LXD deployment. So in a few seconds, we'll see it request the machine. If we run lxc list again — well, give it a second, we'll wait for it to request. There it is. Okay, now it's requested the machine. I've got another Juju machine created. It's still booting; it'll get an IP address. Juju will do what it needs to in order to make sure the machine is ready. It puts the charm on there, and all that operational code, and then runs through the set of events required to do the installation: install, initial configuration, make sure all the services are started, make sure the workload's idle. Then it's ready for me to start doing integrations, running operational actions against it, or anything else. But it's a real workload, running on my laptop, completely isolated, without much resource consumption. And this is not a brand new laptop — you can see most of it's Google Chrome. I don't think I can even find where that workload is in the process list, but these are super lightweight VMs that allow you to do a lot of these kinds of things without spending money on clouds. Great question. Yes — microphone, sorry, I can't hear you very well back there. If anyone else has any questions, feel free to just queue up over there. What's the difference between LXC and LXD? Ah, that's a fantastic question. So LXC is the Linux container itself — these are LXC containers. LXD is kind of like a hypervisor for that. It's an API, so whenever I run things like lxc list, I'm querying the hypervisor — the LXD daemon — on my machine to show me all my LXC containers. When I do an lxc launch for an Ubuntu 16.04 machine, it's going to talk to the daemon running there, and I can run a daemon on multiple machines.
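The local LXD workflow just demonstrated looks roughly like this, assuming a Juju controller bootstrapped on the local LXD cloud; machine numbers are illustrative:

```shell
# Inspect the LXD containers on this machine; Juju-created ones
# show up alongside any you launched yourself
lxc list

# Launch a plain Ubuntu 16.04 container directly via the LXD daemon
lxc launch ubuntu:16.04 demo

# Deploy a charm into the local LXD cloud; Juju requests a new
# container, boots it, and runs the charm's install and
# configuration events
juju deploy percona-cluster

# Watch the unit move toward idle/ready, then SSH into the
# Juju-managed container like any other machine
juju status
juju ssh 0
```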
We can do things like run LXD inside of Nova, so instead of getting KVM machines, LXD acts as the hypervisor — instead of libvirt creating KVM machines, you can use LXD to create LXC machines. And so that is essentially what LXD is. They are slightly different, but very closely related as far as products are concerned. And again, just like everything I've shown you, it's all free open source software. Fantastic question. Any others? So I'm just going to deploy a load balancer onto my local machine, and then I'll put some kind of workload behind it that I can poke at — maybe Keystone. The great thing about Keystone is, while it is part of the OpenStack project, it doesn't actually need very much to run. As long as you have a database service, you can have a Keystone endpoint that answers queries for authentication, and you can create credentials for it. So a lot of times when our developers are working on charms, they'll just deploy small fragments of things that make sense. Instead of deploying an entire OpenStack to evaluate a change to the operations of a charm, you can deploy the minimal set of components to do that. So I'm going to run an integration between Keystone and Percona cluster, and we'll just watch the status here. Despite things not even running yet, you can still do the integrations, you can still do the operational actions. Everything is essentially queued and managed as an event bus in Juju. That makes it really convenient: you don't have to sit around and wait 20 minutes for everything to spin up; you can just set up the things you want to do. I can go get a coffee now, and when I come back, Keystone will be connected to Percona, Percona will have provided all the credentials it needs to connect to a database schema, and then I'll be able to start poking the Keystone API directly. That's an example there. Any other questions? We have a few more minutes. Any workloads people are interested in seeing?
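The Keystone-to-Percona integration described above can be sketched as follows. The `shared-db` endpoint name is an assumption based on the OpenStack charms of this era; the relation can also be added by application name alone and Juju will pick compatible endpoints:

```shell
# Deploy Keystone alongside the existing percona-cluster unit
juju deploy keystone

# Relate the two; Juju queues the integration even if the units
# are still provisioning, and exchanges database credentials and
# schema once both workloads are idle
juju add-relation keystone:shared-db percona-cluster:shared-db

# Watch the model converge while the event queue drains
juju status
```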
Let me know; otherwise — yes, yeah, that's a great question. So I guess the question is: who is creating these? Who's creating all of these charms? Who creates the Percona cluster charm? Who creates Keystone? Who creates MySQL? That's a great question, and it really depends on who's the expert in that workload. Today in our charm store, we have a mixture. We have a small team at Canonical that builds out and does the operations for OpenStack. At Canonical, we actually deploy quite a few OpenStacks in varying configurations, and we're very aggressive with our upgrades. We make sure that we have HA on all our production workloads. When you go to ubuntu.com, you're hitting an instance on top of an OpenStack. So we have a lot of expertise in OpenStack; we've been running it in production for quite a long time. We produce a lot of the OpenStack charms, but they live in the upstream OpenStack org, so the charms are there, the code is there. We have contributions not just from us, but from companies like Walmart and others that we work with who deploy OpenStack as well. It's really a bit of a community effort. You'll find that a lot of vendors are the ones that actually end up creating their own charms. When you look at things like charms for NFV or charms for common components, a lot of those are created by the vendors themselves, and they live in their upstream repos and they manage them. Now, not every upstream developer is necessarily the best operator of their code, although they probably have a lot of insight into it. We also have charms created by community teams or consultants who end up being more the experts at how that software looks in the field than the upstreams are. But it's definitely an effort across the board. Canonical is not the sole creator. We do curate the store and make sure there's quality code in there.
These charms end up in production deployments, so we're very critical about how well the code is formatted, making sure it's robust, doesn't do anything nasty, and generally follows good guidelines for integration and configuration. But outside of that, there are many people who contribute to charms today. There's a whole list — the slide from before, where I showed you all those icons; I may be overtaxing my poor T430. A lot of those icons are the people creating their own charms, and if not, it usually comes from someone in their community, or an expert somewhere who's running the software and happens to distill their operational code into a charm. So, fantastic question, but it really comes from all over the place. Great. I think we have time for potentially one more question. All right, great. Well, thank you all so much for your time. I appreciate it. If you have any questions about open source operations, it's a topic I'm really passionate about. So far, I've found Juju and charms are one of the best ways to actually reshare and reuse operational code, but I'm always looking for ways to lower the barrier to entry for trial, error, and experimentation in operations itself. So thank you all so much. Appreciate it. Enjoy the rest of your conference.