All righty, folks. Hello, OpenStack. I'm here to talk about Juju as an app store for your OpenStack cloud. I'd like to call this: you have a cloud, now what? Let's go put it to work. My name is Rick Harding. You can reach me on Twitter at @mitechie, or rick.harding; feel free to email me. I'm a member of the Juju engineering team. I'll have worked at Canonical for four years next month, and I've been playing around with this Juju stuff for a couple of years.

So I want to share with you why I think these are really exciting times. I like to think back to how I got started in system administration, software, and computing, and all these great, promising tools we now have at our disposal. I remember the days of actually getting a server from Dell and trying to plug it all in and get it set up and running. With cloud computing, you can give users really efficient access to hardware in mere minutes, and do it really densely: as a sysadmin, you can manage just a small set of hardware and use it to capacity, with no more idling CPUs. And it scales: as user needs increase, it's very easy to throw more hardware at the problem.

That's great and all, but I have a scary thought for you: if you build it, will they come? That seems to be a bit of a theme Mark talked about this morning. Just building a cloud doesn't necessarily mean all those CPUs will be humming along and utilized the way you'd like them to be. So what I'd like to do is tell you how I think we can help fix that. As was talked about in the keynotes at the beginning of the week (full screen, please; there we go), I think this is all a matter of friction. Users will always go to whatever is easiest to do.
If your cloud is not as simple to use, if you can't add a tool like Redis for your users as easily as Heroku can enable a plug-in, then you're not going to get your user base going. You're going to end up with the shadow clouds that were talked about earlier in the week. So I think it's really important to recognize that whenever we build something, be it a software project or a stack of hardware we want to expose, what we really, really want is for people to use our stuff.

So what we've done is taken Juju and Horizon. This is Horizon Liberty running on our latest suite of charms, and it's all available right now. The latest Liberty release of Horizon has some really good stuff in it: it lets us do plug-ins, and we have a plug-in, using the upstream plug-in architecture, that embeds Juju right into your Horizon UI. What's great about this is that you've already got Horizon, you've already got the ability to expose it to your users, your users can log in with their Keystone login, and everyone gets their own easy-to-use experience.

So, jumping in: we've obviously pre-populated this for the purpose of the talk, and I'll need secure WebSockets and some not-quite-real SSL certs for the win here. What I've got are a couple of different models, as we call them, of actual workloads running on this OpenStack. This OpenStack is running on an Orange Box here, a handful of nodes, so these are all real, running workloads, all coming from our charm store. With this UI, a user can log in and not just grab something and say, I now have WordPress running; they can actually build a true end-to-end solution for their needs. And they can do it many times with different sets of tools. If you've ever developed a piece of software on your laptop, you know you cut a lot of corners: your database sits there locally.
You're probably talking directly to it, there's only one of it, you're hacking on your app, and there's no scaled front end. But as soon as you take that app and put it to work in the real world, you need to be able to change that structure. It's not as simple as just grabbing the WordPress app and running it. When you go to your staging environment, you actually need to set up SSL certs and do true SSL termination. You might benchmark against that environment by throwing memcached into the mix, making sure staging is set up to help you check the performance of what will soon be your production deployment of the solution. And in production you don't just need one of everything; you're going to need multiple units of different things as you go along. With Juju, we've encapsulated all of this and made it really nice and easy: you can start off small, and it stays very malleable, which is something I think a lot of other solutions really lack.

So here we've got a couple of working things. I'd like to search the store for some stuff; in particular, let's go find something for syslog, a very popular tool for logging everything and bringing it all together, aggregated. And here we have a big data solution that will deploy a bunch of Hadoop components along with Spark, and run monitoring, watching all those logs to pull meaningful data out of them. What's nice is that this isn't just a single piece of software. Someone put together this end-to-end solution: they pulled all these pieces together and figured out how to make them work. And I, as an end user, don't have to worry about it. I just know this is out there for me to grab and put to my own use. I don't have to worry about assembling all the pieces.
So why don't we go ahead and create a new model, a nice clean slate. What's great about the charm store is that we've got over 130 curated, community-reviewed charms for services, and a couple dozen curated, verified-working bundled solutions. And we have many partners in the store. One of our great new ones is IBM, which has an actual DB2 charm in the store. So what I can do is go get IBM DB2, and if you notice, I did a search, and assuming the internet cooperates, it should come up and work... and live demo fail. So let's grab something else instead. How about MariaDB, sitting right here; will that one load? Hey. All right, so I can add MariaDB here.

Our UI is really nice because it's simple and very service-oriented, which we think is really key; Mark pointed that out this morning. Now, I can go to my instances under my project here and look at all the actual VMs. And that's wonderful and all, but to an end user, that was the friction: figuring out what this is telling me, what's running, where it's running, and how it cooperates and coordinates with the other things that are running. It's just not clear. Looking at a service view instead gives me a real cognitive picture of what's actually going on.

We still let you do manual placement, though. As I said, in a small test case you may want to co-locate services using containers on as few VMs as possible, or you may want to control whether the containers you bring up are LXC or KVM, and this machine view we have here lets you control all of that; that's not a problem. The other thing is that this gives a user really good hand-holding and clarity about what's going on and what they're doing.
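That manual placement boils down to simple flags on the command line as well. A minimal sketch, assuming the Juju 1.x CLI of the time and standard charm-store names (`mariadb`, `wordpress`); the machine number is illustrative:

```shell
# Default placement: Juju provisions a fresh VM for the service.
juju deploy mariadb

# Dense test placement: co-locate services in containers on machine 0,
# choosing the container type per service.
juju deploy mariadb --to lxc:0     # LXC container on machine 0
juju deploy wordpress --to kvm:0   # KVM guest on machine 0
```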
So it's not just "I hit a button that's supposed to pull down some kind of zip file and run it." We actually tell the user the steps that are going to happen, and let them look it over and sanity-check that it does what matters to them. The key thing here is that Juju helps you fast-track utilization of your cloud by reducing the friction users hit as they attempt to use it. With just a few clicks, it's amazingly easy to move from one working set of solutions to another and back.

What we've got planned for the future, because this is all backed by Keystone and all your users are already there, is to enable collaboration: letting people work together by sharing these environments out to whichever users within Keystone they'd like. That way you're not the one person stuck as the only one with access, and you get to go on holiday once in a while.

The other nice thing about being solution-based is that you can add things onto your environment. I've got my workload, that's great, but in production I'd actually like to be able to watch it. And we have the ELK stack in our catalog for exactly that. So here I'm going to grab some Logstash, some Elasticsearch, and Kibana. Suddenly your users don't just get to deploy applications; they get tools that help them follow good practices, pre-baked and ready to go. And with these, I can go through and wire them up. I don't actually know what config goes back and forth between these guys, but I know they're supposed to work together. And voila, I can relate the services, and I don't have to be a master of Elasticsearch or Logstash or Kibana. I can just be told the good practice, and in production I actually have this up and running and working.
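On the command line, that same wiring looks roughly like this, assuming Juju 1.x syntax and the charm-store names of the ELK pieces; the exact relation endpoints depend on each charm's metadata:

```shell
# Deploy the monitoring pieces next to the existing workload.
juju deploy elasticsearch
juju deploy logstash
juju deploy kibana

# Relating the services lets the charms exchange hosts, ports, and
# credentials among themselves; the user never edits a config file.
juju add-relation logstash elasticsearch
juju add-relation kibana elasticsearch
```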
So let's go back to some slides. What I really want to highlight is that you definitely need pre-baked solutions, and you need a full stack of them, if you're ever going to get any kind of adoption. Asking users to figure out how to bring up every one of these with whatever tooling they happen to jump into is just a bunch of friction. And here you can see we have a lot of partners, including MongoDB; we have various Hadoop solutions; and Kubernetes, if you want to deploy a Kubernetes cluster, is all available and ready for you. As you can see, we talked about combining them into actual pre-done solutions. And when you get these solutions going, you can share them out as pre-baked knowledge within your organization or among the users of your cloud.

What's really important is that Juju is completely cross-cloud compatible. If you live in a hybrid world: on Monday there was a demo where they put some stuff on OpenStack and, at the same time, mirrored the same thing to AWS. Juju is set up so that once you learn to speak this one language, the same solutions are available on whatever cloud you choose. It lets teams keep things in sync and keep their knowledge shared between them, and lets you do things like prototype on AWS, because it's cheap and easy and that's where people are, and then go to production on OpenStack. Or, if you decide you've outscaled your OpenStack, you can launch that workload over on Google Compute Engine and let them have the problem for a little bit. It's all available to you because it's all very reusable.

Other great features of Juju: it's multi-OS. These services can run on Ubuntu, CentOS, or Windows. What's great is that in OpenStack you can preload your Windows images, and you can deploy those workloads from Glance straight through as part of those charms.
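That cross-cloud reuse can be sketched with the Juju 1.x CLI, assuming `~/.juju/environments.yaml` defines two environments, here named `amazon` and `my-openstack` (the names are illustrative):

```shell
# Prototype on AWS, where capacity is cheap and easy...
juju switch amazon
juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql

# ...then run the identical commands for production on OpenStack.
juju switch my-openstack
juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
```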
Juju is in fact open source. Feel free to contribute upstream: the charms are open source, Juju itself is open source, and most of the tooling around it is as well. So you can contribute, have some control, and help steer where the features go.

Also, Juju is not config management. Config management is great at writing out config files, but what we're actually trying to do is help users not have to know the config files. There are experts who know how to configure Redis, and experts who know how to configure Hadoop, and making every user who just wants to get something done absorb all of that is a bit of a fail. So the charms let you use whatever config management tool you want within the charm, but the charm encapsulates that expert knowledge and doesn't expose it to the end user.

It's also very, very repeatable. These charms are basically little state machines. Whenever you put them onto any cloud, they run the same steps over and over. If you scale, a preset set of steps happens as you add units; the charms can react to the actual number of units that are there, to what they're related to, and even to the cloud resources available to them. For instance, Juju supports mapping storage from the underlying cloud provider. I can say, deploy this with this EBS volume, and the charm can pick up that EBS volume and use it for its needs. What's great is that Juju abstracts away the idea of the EBS volume in such a way that the storage script you might write for scripting up your Juju deployment can still be cross-cloud compatible.

And the last thing is that it's scalable. We like to joke that if something starts to get a little too stressed, we just go "juju add-unit".
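Both of those, the storage mapping and the add-unit joke, are one-liners in the CLI. A sketch, assuming the storage support that was landing in Juju around this time; the charm name, the `data` store name, and the pool names are illustrative and depend on the charm and provider:

```shell
# Map provider storage into a charm: back its "data" store with a
# 100 GB EBS volume on AWS, or with Cinder on OpenStack; the charm's
# storage hooks are identical on both clouds.
juju deploy postgresql --storage data=ebs,100G

# And when a service gets stressed: add units. The charm reacts to
# its new peers on its own.
juju add-unit postgresql -n 2
```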
Basically, all these charms are meant to be constructed so that if you add more of them, they react intelligently. If you start to scale up MongoDB, for instance, it will pick that up and start to cluster itself and distribute as required.

So, in closing: Juju and OpenStack, we think, make a great set of tools. We think embedding Juju within OpenStack is a really, really big help in reducing the friction to actually adopting the cloud. It's up to you. If you'd like to expose your Nova credentials and a bunch of READMEs on how to get your first things up and running on a cloud, that's fine. But in our experience, that's not the easy-to-use, baked solution that users will not just accept but actually enjoy, feeling like they're getting their work done and that IT isn't fighting them today.

So if you have any questions, we can take some now. And if you have any interest, we'd love to hear from you; feel free to email juju at canonical.com. We'd love to talk with you about what you're doing with your OpenStacks and where embedded Juju might fit in. And I probably went way too fast, didn't I? What's up?

[Audience question.] So, to be determined; I'd encourage you to email the team. What we're showing here is engineering work built on top of the next version of Juju coming out in January, 1.26. So at this time it isn't available to just go plug in. However, it is constructed entirely as a plug-in mechanism, so it's compatible with any OpenStack. If you've got a Canonical OpenStack, your own OpenStack, a competitor's OpenStack, we don't care; what we're really trying to do is help you utilize it more. It's the good old-fashioned idea that if we can build some kind of relationship, we can learn how to talk more. So that's not yet defined.
And so I'd encourage you to send an email and we'll start those conversations. All right, yeah, go again.

[Audience question.] Yeah, so we're a little different, because this is not a true PaaS situation; it's not just for taking software you have on your system and hosting it. What we've done is build charms that accept payloads from apps. For instance, there's a Django charm in the store. You can point it at your application, say, a Git upstream or whatnot, and configure the charm so it knows how to fetch your Django app and bring it up. What's great is that people have since updated the Django charm to know how to talk to common things: RabbitMQ for message queuing, Redis for caching, Postgres for database access. Since Django has a pretty standard way of talking to these kinds of services in its config files, the charm again encapsulates a lot of domain-specific knowledge about how to talk to things, which makes it much easier for users to pull things together. What's nice is that it's not a case of "what does OpenShift support?"; it's whatever you have a charm for, and it can be built in and built on top of. So I don't see us as a direct competitor, no, but I think we can provide that functionality and more.

[Audience question.] No. So this question comes up a lot, and autoscaling is extremely interesting. Autoscaling is great when it works, right up until it does something you don't want it to do. And because the scaling requirements get really tricky, when is the right time to scale your app, and how far, defining all of that just isn't easy for users to do. What we are working on is providing standard libraries for Juju, where you could wire into Juju whatever logic you'd like, to make it really easy to autoscale.
For instance, there's a Python library called libjuju that's in progress, and with it, adding a unit to a service on a model is maybe five lines of Python. We'd rather empower you, because you know best; if we start giving you boxes to live within, you'll just end up frustrated with us. That's our experience. Yeah.

[Audience question.] No, so, what we're writing... so, Magnum's interesting. Right now, most of the people we talk to who are looking at container-based things are looking at something like the Docker cluster in one of the examples I've got up. What we're actually investigating is how to provide Juju resources to a deployed Docker cluster, itself deployed with Juju: you could throw your containers into it, but then use Juju to deploy databases and other things that don't fit the container idea very well, and relate those things in. We recently spent some time looking at how we'd approach that, and that's work that will be going on this cycle for 16.04. Again, when you think about things at that service level, it puts a little twist on most of the things we want to try to do. What's nice is that it logically fits together once you get into it: if you can deploy Postgres with Juju really easily, then as long as I can get that to those containers easily, which is really just a connection string, there's no need to go through a lot of extra effort pulling in some other outside project and working with that.

All righty. If anyone has anything else, I'm around and happy to answer questions and talk shop. Otherwise, enjoy the rest of the summit.