Can you hear me? Aha, wonderful. I'll put this one down. So I'm Michael Foord. I work for Canonical on a project called Juju that's written in Go. I've been a Python developer for about 13 years now and a Python core developer for about eight of those years. I released, or created, unittest2 and the mock library, which reflects my particular interest and my particular passion for testing. I think good testing practices are, if not the only way, certainly an important part of keeping developers sane. And this is my colleague, and actually my boss, Dimiter. Can you hear me? I work with Michael at Canonical on the, hopefully, next-generation cloud orchestration framework, Juju, and here is our talk about it. So the talk is called "To the Clouds", and it's about deploying applications and services to the clouds, and why you should probably be deploying to the cloud even if you don't want to. Because we're talking about the cloud, we're going to have a lot of buzzwords. "The cloud" is a piece of jargon that's had a lot of hype over the last decade, really. And I'm hoping that in this talk we'll rehabilitate the term a little bit and show that, despite the hype, there's actually a useful set of technologies and principles behind it. So this photo, rather stretched I'm afraid, is actually one of the original computers that Google used in the early days of their search engine. And part of their genius, which I think was forced on them more out of necessity than entirely out of genius, was that they built their service using cheap commodity hardware rather than the very expensive big-iron mainframes that their competitors were using. A consequence of this was that they were able to scale out very rapidly, very easily and very cheaply, just by buying new units of inexpensive commodity hardware.
But because this hardware wasn't as reliable as the big-iron mainframes that other people were using, they built on top of it a fault-tolerant architecture that, as well as enabling them to add new units of hardware to their system, also allowed them to take out failing hardware. They eventually released this platform as a service, as Google App Engine. Amazon kind of took this to the next level with the infrastructure they used for running a giant retail website. They took a slightly different approach, providing a host of virtual machines as deployment targets for their services. And this infrastructure-as-a-service approach, rather than the platform-as-a-service approach, is really loved by developers, because it just gives them a machine, and when they have a machine they know what to do with it, they know how to deploy to it. And so when Amazon made their cloud public, they rapidly became the dominant player in the public cloud market. Other modern infrastructure-as-a-service public clouds include HP Cloud, Microsoft Azure, Joyent, OpenStack-based offerings, a whole host of them. So there are several problems that using the cloud solves, and these include dependency hell, resource underutilisation and hardware management. And the way the cloud solves these problems is by separating your deployment layer from your hardware layer. So when you deploy to the cloud you're deploying to virtual machines, without having to care about what physical machines your service is actually running on. So resource underutilisation, or equally resource overutilisation, is a situation you might be familiar with: say you have to deploy a small new public service, an email server, a dev wiki, a bug tracker. So you get a new machine and you deploy your stuff to it, and you're using about 10% of its capacity. Or alternatively, you work in a company where getting new hardware is a really slow and painful process.
And so what you do is jam everything onto the three servers that you already have, and everything runs as slow as hell. Dependency hell, again a situation you may be familiar with: you have a whole bunch of applications and services, and some of them use the same library, they have the same dependency, but they use different versions of that library. So you have applications that you can't deploy on the same machine; they have to be located on physically different machines. Or alternatively, you do the work and make sure they're all using the same version of the library. And then what happens is you want to use some fancy new feature that comes in a newer version of the library. So you deploy that for one application, you forget about the other services, and your deployment breaks another application. Now, obviously all Python libraries take backwards compatibility very seriously, so this never happens in practice. But I've had to do emergency rollbacks of deployments because we upgraded a dependency, forgot there was some other application on the same box using the same dependency, and it just broke. Or alternatively, you do the work and you port all your applications at the same time to use the new version of the library. And you do your upgrade, you do your new releases, all in lockstep. And then there's a regression in one of your applications and you have to roll back all of them at the same time. There are various ways of solving this. You can deploy all your services to separate machines, and then you're back to resource underutilisation. Or you can use virtualenv to provide isolated deployment environments for all of your applications. Again, this is something that I've done. But what you then have is the same libraries in multiple different locations, in non-standard places on the file system.
And that's a security problem, and system administrators tend to hate this solution, because they like to understand, and preferably be able to control, dependencies. So an alternative approach is for every service that you run, or even every component of every service, to live in its own virtual machine, where you're able to very tightly control and specify the dependencies just for that application. And then hardware management: that's one of the most important benefits of the cloud. Because we have this separation of our deployment layer from our physical layer, we're able to deploy new services just by acquiring a new virtual machine. We're able to add new machines to the cluster very easily, and we're able to take out failing machines and upgrade machines without shutting down running services. Okay, next slide I think. So it may be the case that you're already running a bunch of services. You may have hundreds of servers, you may have just a few. And you feel like you don't want the cloud: you certainly don't want your data, or your customers' data, on someone else's machine. Maybe you don't want your data located in America, or hosted by an American company. And you can get some of the benefits that I've talked about just by managing virtual machines on your own servers; you get the isolated environments. But wouldn't it be nice if there was a framework that provided the dynamic server management aspects of this, that was able to automatically provide you with new virtual machines, automatically provide you with new deployment targets, and make it easier to add machines and take machines out? What you really want is a private cloud. And if what you want is a private cloud, that probably means you want OpenStack. Our PDF export screwed up the bullet points there, sorry about that. So OpenStack is basically the private cloud.
It's not entirely the only option, but it's the giant in this space, and it's written in Python. It's probably one of the biggest things going on in the Python world right now. So there are lots of companies, both big and small, hiring Python developers to work on the cloud, either directly on OpenStack or on specific implementations of clouds, both public and private, using OpenStack. OpenStack is huge. It's huge in terms of the amount of code, huge in terms of the number of sub-projects within OpenStack, huge in terms of the functionality it provides, and huge in terms of the number of people using it and contributing to it. And you can get all the benefits of the cloud, but without having to use somebody else's implementation. Oh, let's go back to the last slide, just to mention that there are alternatives. Eucalyptus certainly used to be an alternative cloud implementation. They got acquired, I think by a company that has a public cloud offering using OpenStack, so I don't even know if Eucalyptus currently exists. And there's an alternative data centre technology that I've worked with a bit called MAAS, Metal as a Service, and that's another Canonical product. And that gives you a lot of the benefits of the cloud, the dynamic server management aspects of it, but with physical hardware rather than virtual machines. And Juju, the project that we work on, can deploy OpenStack to MAAS, or you can deploy directly to MAAS with Juju. And it achieves full density, full resource utilisation, with MAAS by using LXC or KVM containers as deployment targets on the physical machine. So it's an interesting technology, a data-centre-level technology that sits below the level of OpenStack. So, the benefits of the cloud: solving the problems of dependency hell, hardware management, and resource underutilisation.
Alongside that, if you have fully automated deployments, the other big benefits you get from the cloud are the ability to rapidly scale out and to easily deploy new services. And you only get those if you have fully automated deployments. And an important principle for fully automated deployments is that we treat our servers as livestock rather than pets. So if you have a pet, your pet is unique. Your pet has a name. Do you name servers in the company you work for? So they all have names. And if your pet gets ill, you spend a lot of time and money and effort on getting it well again. Whereas livestock, they don't tend to have names; they have numbers. And if your livestock get ill, well, it's a cruel world: you shoot them and you get another one. And this is how we should be treating our servers. Deploying new services: we ought to be able to tear down our application servers and reprovision them with a single command, without caring anything about the machine they're located on, virtual or physical, and preferably without having to worry about machine configuration. Now, the trouble is with infrastructure as a service, which has largely beaten platform as a service in the market, by the way, except for some specific platforms like Salesforce. Infrastructure as a service has largely won because developers like it: the paradigm it provides is that you have a machine to deploy to. And what that means is that although we leave some problems behind, the problems of how we provision, how we configure, how we manage and administer those machines, we just take with us when we go to the cloud. Now, there are lots of tools out there that will help you with machine provisioning: Chef, Puppet, SaltStack, Ansible. Some people use Docker, although that's really about image-based workflows.
It's not really a provisioning tool, but some people are using it in this way, to help with the problem of machine provisioning. But these tools all require you to think about machine provisioning. What's going to live on this machine? How do I configure and administer this machine? And as developers, or even DevOps, we don't really want to spend our time worrying about machine configuration or administration; at least, that's not how I think about the services that I deploy. When I deploy an application or a service stack, I think about the components. I have my application server, I have a message queue, I have load balancers, I have a database. And these components are all related to each other. They're all interrelated, they all communicate. This is how I think about my application stack. But for the actual deployment, we're forced into thinking about our services in a different way: in terms of units of machines. And Juju is a tool that takes a slightly different approach. So this diagram is from the GUI. You don't have to use the GUI, of course, but it's a great way of visualising services. This shows deployed and related services, deployed with Juju, and the lines there show the relationships between the components of the services. So Juju takes a different approach: it's about service orchestration. It's a tool that provides a powerful service modelling language that just happens to use virtual machines as a deployment target. The basic unit of deployment for services, or service components, in Juju is the charm. And the charm codifies the knowledge required to deploy and configure that application. This is a very important principle of DevOps, a principle of automated deployment: the knowledge about how to deploy and configure your service components shouldn't live in some system administrator's head, because that system administrator will get a better job somewhere else.
And like Guido said in his talk this morning, they have deployed services using binaries, and nobody knows how to get those binaries back. So if some disaster occurs, presumably they have lots and lots of backups, because they don't have a repeatable full-stack deployment; they can't automate and reproduce it. You need to have your deployment knowledge codified. And charms, for Juju, are the codified knowledge of not just how to deploy the service, but how it communicates with other services. And these charms tend to be written in Python. So when you're using Juju, you get to codify your deployment knowledge in Python. But not just that: through the charm store there's a whole host of common infrastructure components, many of which you're already using, I'm sure, for which there are existing charms out there. So Juju is sometimes called apt-get for the web. So, why Juju? I've kind of talked about some of these. We get to think in terms of service orchestration, in terms of service components and how they relate and connect to each other, which is how we think about our application stack anyway. We get to work with Python to do all this. Cloud independence is important as well. Juju will happily deploy, using exactly the same configuration, to Azure, Joyent, HP Cloud, EC2, MAAS, and also, using LXC containers, to your local machine. And not just to your local machine: to your CI server. So you can take exactly the same production deployment configuration and deploy it locally for running your own manual tests. But your CI server can also take that, spin up the whole production stack, configure the communication, configure things to talk to each other, and you can run your acceptance tests on your CI server, using effectively your production configuration. Now, you are running CI tests, you are running acceptance tests for your applications, I assume.
Like I said, I have a particular interest in testing, and I think a good principle with testing is: if it isn't tested, it's broken; if it is tested, it might not be broken. Those are the principles I work to. So testing doesn't guarantee that things work. What it does is provide an assurance that things might work. You need that assurance. And through the charm store, there are many charms available: things like Hadoop, Ceph, Postgres, Django applications, MySQL, did I say Postgres? All of the standard components are charmed up there and ready to deploy. So let's look at an example deployment of a Django app. We have ten minutes left; time has flown. Dimiter is going to show you a Django app called dpaste, using the Django framework charm. This is one we already have deployed, but this is a full stack, and I'll let Dimiter take over and talk about it. All right. So as you can see here, well, the resolution is not very good, but you have services represented as boxes, and this is all you really care about in your deployment. You don't really care that much about machines; you care about your workloads, and also how they are related to each other. And as we can see, we have a Django framework charm here, which is a modified version of the default charm available in the charm store, and it's configured to run dpaste, pulling the Git repo from GitHub. And there are a few other things, like a Squid proxy, and also a couple of haproxy charms, which are configured as an application load balancer and a cache load balancer. We have the front-end Apache and also the back-end Postgres database. This is the Juju GUI itself, which is also a charm that you may or may not deploy, and this here is gunicorn, which is a subordinate charm to dpaste. What does that mean?
It means that with each and every unit of dpaste, there is a gunicorn deployed alongside it. Subordinates can be used, for example, for logging or other closely related services which depend on each other. So this is the service view, the default view, and here we have the machine view, which shows that we have eight machines, actually LXC containers, and what services are running on them. This is all running on the laptop using the local provider that I mentioned, so these services are running in LXC containers, which is why hardware details are not available. If they were running on virtual machines, the hardware constraints of the actual machines would be visible here. Also, I'll come back to this, but you can also use the command line to do the same sort of inspection and management. For example, if I do juju status with the short format, that's just the gist of what I have in my deployment, and if I want to, I can show a lot more detail: what the workload status is for each charm, what relations there are, what IP addresses, when it was deployed, and so on. And there is even a status history that shows, for example, that I deployed this charm but it didn't come up, or that it's waiting for some sort of interaction, like I need to enter database details, or I need to do something manually in order to enable it to work. Also, we could easily show, well, not quite right now because the network is non-existent, but I would have shown you the exact same deployment running on Amazon. It's actually running; I can show whoever is interested later. Any questions, of course, are welcome. So what you can do with this: you can see the details of each service, you can see what units are running on it, what the configuration is. There are lots of things that you can tweak for each charm, and this is also part of what the charm encapsulates as best practice.
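As a sketch of what a charm encapsulates: each charm declares, in a metadata.yaml file, the relations it provides and requires, which is what lets Juju wire services like dpaste, gunicorn, haproxy and Postgres together. The following is a hypothetical, cut-down metadata file for a Django application charm; the charm name, relation names and interfaces are illustrative, not the real python-django charm's metadata:

```yaml
name: dpaste-django
summary: Hypothetical charm for a dpaste-style Django application
description: |
  Codifies how to install, configure and relate the application.
provides:
  website:
    interface: http      # a load balancer such as haproxy can relate here
requires:
  pgsql:
    interface: pgsql     # a Postgres database relation
  wsgi:
    interface: wsgi      # served by a subordinate such as gunicorn
    scope: container     # subordinate: deployed alongside each unit
```

Two charms can only be related over matching interfaces, which is how Juju knows that connecting the Django service to Postgres makes sense but connecting it to, say, a message queue over the same relation would not.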
What we could do here is, you saw that there was one running unit. If we needed to scale out our app, we would do juju add-unit, I think django, whatever this charm is called, with five units, and that would create five new units. And because the Django charm is related to the Postgres database, as the new units come up, Juju knows these need to be related to the database. Charms are comprised of hooks, which are code that runs when things happen. If we go back to the slides, we'll have to fly through these because we have just five minutes, but there's a hook that fires on relation-joined. So when a new unit of the application server has joined the Postgres relation, it gets the configuration information. With the Postgres charm, we don't need to generate a password and a user for the database ourselves; this is all done as part of the charm. On install, on startup, a new database user is created, and then, as the relationship to the application server is joined, that information is passed through: that configuration is automatically given to the application server. As we add new units of the application server, that configuration information is given to the new units. So these services are automatically added and configured to talk, through the relationships that we've defined; they're automatically configured to talk to the rest of the application. So it's not just the deployment information that's codified into the charm, but the configuration information, how to talk to other units of the service; and as we add new units, this updated configuration happens automatically for us. This is service orchestration. You can see here the directory tree; the actual charm is just a zip archive. And you can see that all of the different hooks that fire when different things happen are actually symlinked to a single Python file. And here's the install hook, which actually installs a bunch of packages.
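The symlinked-hook layout described above can be sketched in a few lines of Python: Juju invokes each hook by its file name (install, start, db-relation-joined, and so on), and because every hook is a symlink to one script, that script can dispatch on its own invoked name. This is a simplified illustration, not the actual Django charm's code; the handler bodies and their messages are invented, and a real hook would shell out to tools such as apt-get and relation-get rather than return strings:

```python
import os
import sys

# Handlers for the hooks Juju may invoke. In a real charm each of
# these would run commands like apt-get, relation-get and
# relation-set; here they are stubs that report what they would do,
# so the symlink-dispatch pattern itself is clear.

def install():
    return "install: would apt-get install the app's dependencies"

def start():
    return "start: would start the application server"

def db_relation_joined():
    # A real hook would call `relation-get` here to read the host,
    # database name, user and password the Postgres charm published,
    # then rewrite the application's settings file with them.
    return "db-relation-joined: would configure the database connection"

HOOKS = {
    "install": install,
    "start": start,
    "db-relation-joined": db_relation_joined,
}

def dispatch(hook_path):
    """Pick the handler from the basename of the invoked symlink."""
    name = os.path.basename(hook_path)
    if name not in HOOKS:
        raise SystemExit("unknown hook: %s" % name)
    return HOOKS[name]()

if __name__ == "__main__":
    # Juju runs e.g. hooks/db-relation-joined, a symlink to this
    # file, so sys.argv[0] carries the hook name.
    print(dispatch(sys.argv[0]))
```

The appeal of the pattern is that all hook logic lives in one file, and adding a new hook is just adding a symlink plus an entry in the dispatch table.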
On to the next slide. So we're actually using a deployer bundle here, which is a YAML file that you can keep under version control. Deployer is a separate tool whose functionality is now being rolled into Juju core. And you can see, for the Django charm that we're using, that there's a whole bunch of configuration information stored in there. And if we go on to the next slide, the relationships between the charms and the other components of the service that we're using are also defined in the deployer bundle. So as you can see, there's an awful lot to Juju, and an awful lot more we could talk about. But the basic principle, codifying our deployment and configuration information in Python, being able to reuse the existing charms, using virtual machines as a deployment target to get the cloud's benefit of separating our deployment layer from our physical hardware, and, with Juju, being able to then just retarget that at multiple backends, is a very powerful combination. And this is just an example of how easy it is to do this deployment. We'll make the slides available; the last slide has a URL, and I'll tweet it and Dimiter will tweet it. This is deployer doing its stuff. But I think that's about all we have time for. The slides are available, and there is also documentation, with all the different concepts of Juju explained, on jujucharms.com. There is also a live demo that you can use as a sandbox to do experimental deployments. You can then export that as a YAML file and deploy it, as we showed with the Juju deployer, to your cloud or to your local machine. OK, thank you very much. Do we have time for some questions? There's a couple of questions. I see Juju as something more top-level than Puppet, Salt or any of these. But I don't really see how complex it could get, because when you have some simple applications, you can usually configure them with a few parameters or so.
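As a sketch of what such a deployer bundle looks like: the following is a cut-down, hypothetical bundle for a dpaste-style stack kept under version control. The charm names, options, relation endpoints and repository URL are illustrative, not the exact ones used in the demo:

```yaml
dpaste:
  series: precise
  services:
    django-app:
      charm: python-django
      num_units: 1
      options:
        vcs: git
        repos_url: https://github.com/example/dpaste.git  # illustrative URL
    gunicorn:
      charm: gunicorn        # subordinate, one per django-app unit
    postgres:
      charm: postgresql
    haproxy:
      charm: haproxy
  relations:
    - ["django-app", "postgres:db"]
    - ["django-app", "gunicorn"]
    - ["haproxy", "django-app"]
```

The services section carries the per-charm configuration shown on the slide, and the relations section is the part that encodes which components talk to each other, so the whole topology is reviewable and repeatable like any other versioned file.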
But usually the applications that I'm dealing with in my company are really, really complex, where we need to have, say, four JARs, three XML files, two database connections, et cetera, and they are configured differently. So in the end the edge cases you have are really, really complex. How would Juju scale with the complexity of an application? So there are quite a few very complex charms that certainly have on the order of tens of configuration options. Obviously they all have sane defaults, but there are many tens of configuration options that you can tweak in the deployer YAML. What we tend to see people doing is, for their own applications, they will write their own charm, possibly taking something like the Django framework charm there as a base if you're deploying a Django app. You can have fat charms that contain the application bundle itself and extra configuration information, so you could merely specify the paths to the configuration files within the charm bundle. And so you would have your own charm that is specific to your application, and then you would reuse existing charms for the other component pieces that are deployed alongside it. And the good thing about charms is that we showed an example in Python, but they're actually language agnostic. So you can, for example, reuse Chef or Puppet or Ansible, or use Bash or whatever you like. We have very good support for Python: nice tools, libraries, unit test support, coverage and benchmarking as well. One thing to note is that under the hood, of course, for an individual service component, Juju is doing machine provisioning. The difference is that at the modelling layer you're not thinking in terms of machine provisioning. So Juju really works alongside these tools. And you can see that this particular charm has a fabfile, it has ansible.py.
This particular Django framework charm here, I don't know if it still is, but at the point this demo was originally created, was actually using Ansible and Fabric for doing the service installation and machine configuration. I can see that Juju can be useful for provisioning services where the basic building block is a container or something, or anything that runs on top of a VM or a machine that's already running, let's say. But is Juju the right tool, or is it intended to be the tool, for actually spinning up all the VMs in the cloud and talking to the Google Cloud API or AWS API or any... This sort of task, is that... Well, so if you just want something to manage your cloud... Yeah, I mean Google Compute Engine, AWS, all of these things. You do juju add-machine, and it will create a new virtual machine for you. You can then use that: you can specify the machine, you can create containers on those virtual machines and deploy components to those machines. juju ssh with a machine number gives you an SSH connection to that specific virtual machine. So if you just want something to spin up virtual machines, or alternative workload targets, there are also Juju actions that allow you to run specific commands on those machines. So you kind of can use it in that way; it's not entirely the intended use case. So it has integration with some other APIs? It has it with all the cloud providers that are supported: Google Compute Engine, AWS, Joyent, HP Cloud, Azure. Yes, it has within it the knowledge of how to talk to those APIs, how to create the virtual machines, how to destroy them, how to connect to them, some aspects of how to configure networking on them, this kind of stuff. It has to do that, because it's doing the machine provisioning for you. A question: you mentioned that your machines should be treated like livestock. How do you deploy configuration changes?
Do you take down the machine, as in the immutable server pattern, or do you reapply the changes? Right, so the idea is that you move away from actual machine configuration and instead codify what configuration your service needs within the charm. And then, if you need to change that configuration, you either tweak charm config settings or you relate it to something else which provides the functionality it needs. So the actual configuration happens automatically, depending on how the charm is written. If, for example, your charm is written to work both as a standalone application and as a clustering solution, then if you just add more units to it, the charm will automatically discover those other units and set up clustering in the same way. And there is the concept of a charm upgrade, where there's a new version of a charm and individual units can be upgraded to the next version of the charm. More questions? What do you have for auto scaling, like a load balancer and deploying new servers on demand? So this is kind of a high-level feature on top of what Juju currently provides. There are plans to do things like auto scaling and load balancing as part of the set of features that we provide. That's not there yet, but it's being worked on. We should also mention that Juju is open source and we're accepting contributions; it's on GitHub. I think the standard answer is that Landscape will do scaling on top of Juju. How is it different from OpenStack's Heat and Murano? So, I'm not quite familiar with both of those in detail, but I think the thing is that Juju is not prescribing a certain sort of cloud that it can orchestrate. For example, I know Heat orchestrates OpenStack, whereas with Juju you can orchestrate OpenStack and other clouds just as well, with the same set of features and steps. More questions? Just a very quick question: how popular is Juju with respect to other similar services like Chef or Ansible?
So Juju is fully open source, developed by Canonical. We're using it a great deal with our customers; this is how and why Juju exists. We have cloud customers with large deployments, often, but not only, large deployments of OpenStack, where we either set it up or manage the deployment for them. So this is how we're using Juju, and Juju is quite battle-tested and is getting refined based on those experiences. And there are community users; there's a particular community around charm creation. I don't think that in terms of the wider community it's as widely used as Chef and Puppet. I think the DevOps mindset is still used to thinking in terms of machine provisioning, so they look for tools to do machine provisioning. And the idea of service orchestration, although Juju's been around for a while, service orchestration as a concept and a way of thinking, I think it matches more directly the way developers think about their application components, but it's something that's only now gaining traction. Because we're seeing the community of charm creators, and those are obviously people who are using Juju. And since recently, like last year, we've had some big contributions: we're working with our partner Cloudbase, who actually ported cloud-init to Windows, and to CentOS as well. This is a Romanian company who were using Juju. They had a customer who needed Windows workloads, so they did the work, and Juju now deploys Windows workloads, and Juju runs fully on CentOS as well as Ubuntu. For Windows you need an Ubuntu Juju state server to manage the applications, but it will deploy to Windows machines. For CentOS you can have a fully CentOS Juju installation if you want. And that's on the server side. On the client side, the client is available for Windows, Linux and Mac, and pretty much any platform that Go runs on, because it's written in Go. More questions? Thank you very much. Thank you.