Can you hear me now? Ah, wonderful. Hello everyone. Morning. It's wonderful to be here in India, my first time in India. I feel very privileged and honoured to be here and to be speaking to you. I've had a wonderful welcome and met lots of wonderful people. It's clear there's a real sense of community and a real sense of open source culture, but more than that, there's a number of just smart and talented people, and particularly the number of young people, which makes me feel very old now. But it's very clear that India has a very bright future, particularly if you keep pushing with the start-ups, keep innovating and, even more importantly, keep this culture of mentoring and teaching and bringing people into the open source software movement; that's going to be a real powerhouse for the future. So that's a slightly old picture. I thought I'd tell you a little bit about my background and how I got into Python. I got into programming by a slightly unusual route. I've been programming with Python since 2002, so about 12 years now. But the story starts way before that. I grew up in the era of 8-bit computers: computers like the BBC Model B, the ZX Spectrum and the Commodore 64. Who remembers computers like that? I'm not the only old codger here, that's good to see. Those computers, you would switch them on and they would boot pretty much instantly, because they booted from ROM. They would boot to a programming interface. If you didn't run a game or some other software, the interface you had right in front of you was one for programming. So myself and hundreds of thousands, possibly millions, of other kids learnt to program on these machines, and I spent hours tinkering around with BBC BASIC as it was then. Although it was a version of BASIC, my code was full of GOTOs; horrible, horrible code. But it did have subroutines and functions, so you could do some measure of structured programming. Then after that came the 16-bit machines.
My favourite computer was the Amiga 500, made by Commodore, and Commodore completely screwed everything up. Despite that, they managed to build a wonderful machine, and it had a pre-emptive multitasking operating system way before Windows was doing that. But at the time, I didn't particularly want to go into computing. I wanted to be a lawyer, so I started doing a law degree at Cambridge University. I did about a year and a half of that and then, for various reasons, which is a whole other story (if you want to hear that story, you have to buy me beer), I dropped out of university and I went on to sell bricks. I became a salesman in a builder's merchant, selling building materials, timber, bricks and so on. I did that for ten years, and I didn't really do anything with computers during that time other than use them for email and so on. Then I started playing this play-by-email game. I think some of them are still going. This one was called Atlantis. What would happen is you had your armies and your castles, and every week you would email your orders in to this central server using a specific format, and you would send your troops off to explore, fight battles, discover lands, build castles and so on. Everyone else would do the same. Then the central server would process all of these orders, work out what had happened (battles fought and won and lost, new lands discovered, castles built and so on), and it would email back a report to everyone. Because this happened just once a week, all the really interesting stuff happened in between, during that week: the diplomacy between the different players. I was part of this alliance. None of us had done any programming, except myself a little, back in the distant past.
We figured that, as this report was computer generated, it followed a regular format, and we were getting it by email, so wouldn't it be great if we had a program that could read these reports and understand them, build up a combined map of all the territory we'd explored, and even issue a few commands for us: a bot that would help us play the game. We did a little bit of research on the internet and we settled on Squeak, a Smalltalk variant, as the language we were going to use. That would have been a fine language to choose, but at the very last minute someone said to us, no, what you should do is use Python, Python is great for this, and we just said, yeah, okay, sure. We started to learn Python. One of us, not myself, built a parser, built on regular expressions, that could understand the reports and build up a data structure from them, and on top of that I built some code that would combine our reports, combine the maps, so that it would have a persistent data structure of all the territory that we'd explored. And also, because it knew the terrain, if you gave it two different locations it would work out the shortest path between them and issue the move orders for your armies to move towards the location you wanted to go to. That was an implementation of Dijkstra's algorithm for pathfinding. A variant of that, A*, which adds heuristics to Dijkstra's algorithm, is still used in games programming today. I found that to be great fun. I was very proud of it. And what happened was I was bitten by the programming bug. I was bitten by the Python bug. I really enjoyed it, and long after I'd given up playing the game I was still programming in Python and using it for various bits and pieces. Along with Python, I discovered the open source community, this whole world of people building stuff, doing stuff together online.
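Dijkstra's algorithm, as used for that pathfinding bot, can be sketched in a few lines of Python with a priority queue. This is an illustrative reconstruction, not the original bot's code; the terrain map and names are invented:

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest path on a weighted graph given as {node: {neighbour: cost}}."""
    # Priority queue of (cost-so-far, node, path-taken); cheapest popped first.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + step, neighbour, path + [neighbour]))
    return None  # goal unreachable

# A tiny invented map: plains are cheap to cross, mountains expensive.
terrain = {
    "castle": {"plains": 1, "mountains": 5},
    "plains": {"forest": 2},
    "mountains": {"forest": 1},
    "forest": {"ruins": 1},
}
print(dijkstra(terrain, "castle", "ruins"))
```

The bot would then translate the returned path into one move order per step. A* is the same loop, except the queue is ordered by cost-so-far plus a heuristic estimate of the remaining distance.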
In the first few years of my programming I lived out in the wilds of Northampton, in the middle of England; not many computer programmers there. For the first few years I never met another Python programmer, but I got involved in this whole online community through IRC, and I started blogging regularly. I wrote some articles for magazines, and that was back in the day when magazines were still a thing. I wrote articles on urllib2 and Unicode, and some of the stuff I'd done for urllib2 I contributed back to the Python documentation. I got involved in all sorts of other projects, open source, some of my own work. After a few years of doing this, probably about three years, I decided I no longer really wanted to sell bricks, I wanted to become a programmer. I thought, well, I've got no commercial experience, so I'll have to find an entry-level job somewhere. Horror of horrors, I might even have to do PHP for a while, because that's what most of the programming jobs at the time were. But thankfully things didn't get as bad as that. I found a small company in London, a startup called Resolver Systems, and they were building a spreadsheet application that was going to take over the world, was going to replace Excel, and it was designed to be programmable from the ground up, and it was written with IronPython, IronPython being an implementation of the Python programming language for the Microsoft .NET framework. I went to interview with them and, even though I had no commercial experience, they could see, not just from the interview but from the stuff I'd contributed to open source, the articles I'd written, everything I'd done, my involvement in the Python community, my blogging and so on, the depth of my understanding of Python. And so I joined Resolver Systems, and I think at the time I was the first programmer there with any Python experience.
They'd come to Python because they needed a scripting language for the spreadsheet that we were building, and they were doing test-driven development, and they found that not only is IronPython a good scripting language, but Python is so much easier to test than the statically typed languages that they were used to, and they thought, well, let's see how far we can get building the whole product in Python. So I worked with them for a few years. I started to attend Python conferences, and because we were the first company using IronPython commercially, I wrote the book IronPython in Action for Manning Publications, and at PyCon I met up with all these Python core developers that I'd been interacting with on IRC and the issue tracker. Now, at the time, the Microsoft team who were building IronPython weren't allowed to contribute patches back to Python, and as I was involved in both the Python community and the IronPython community, I got Python commit rights, particularly to work on the standard library and to make fixes to make the Python standard library compatible with IronPython. So that was how I got into becoming a Python core developer. I also picked up a passion for testing at Resolver Systems. I'm still convinced that good, rigorous testing practices are essential for product and code quality, and also for developer sanity. So if you feel like your job is driving you mad, well, testing may be part of the answer: good testing practices. That was how I got involved in unittest maintenance. Along the way I released unittest2, which is a backport of the features that we added to unittest in Python 2.7. That's been bundled with Django, which is very nice. The mock library also came out of some of the testing practices at Resolver Systems. Then fast forward, and now I'm working for Canonical. So that's a brief history of how I got involved in Python.
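The kind of testing that mock enables is easy to show in miniature. This is a made-up example (the `fetch_greeting` function and its client are hypothetical), using the `unittest.mock` API that the mock library became in the standard library:

```python
from unittest import mock

def fetch_greeting(client):
    """Fetch a greeting over some network client (a hypothetical example)."""
    response = client.get("/greeting")
    return response.text.upper()

# In a test we replace the real network client with a Mock and
# preconfigure the reply, so no network is involved at all.
fake_client = mock.Mock()
fake_client.get.return_value.text = "hello"

assert fetch_greeting(fake_client) == "HELLO"
# We can also assert on exactly how the collaborator was used.
fake_client.get.assert_called_once_with("/greeting")
```

Because Python is dynamically typed, the mock can stand in for any client object without declaring an interface; this is a large part of why Python is easier to test than statically typed languages.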
There are some interesting things to take away from that, I think. The first being that certainly my experience, and not just my experience but the experience of many other people that I've met from the Python community, is that being able to demonstrate the quality of your work and your ability to work with other people through your open source contributions, through having effectively a portfolio of work you've done, patches that have been accepted into other projects, things you've released as open source, is far more important than any qualifications that you may or may not get. So if you're coming to Python, coming to programming, looking for work, the best way you can demonstrate to potential employers that you're someone who can work with other people, who can employ good development practices, and just that you're a good coder, is through your contributions to open source work. The other thing is about becoming a Python core developer. The first thing to say, I guess, is that it doesn't mean as much as you might think. Python is just another open source project like any other, and it needs developers, it needs people to work on it. So we want volunteers, we want people to become core developers. It's not some exclusive club. If you can demonstrate, through working on the issue tracker, submitting patches, being responsive to changes, learning the development practices around how we do bug fixes, when it's acceptable to backport a bug fix to earlier versions of Python and when something is a new feature that needs to go into newer versions of Python; if you can learn those parts, if you can work with us on the issue tracker and on IRC, then getting commit rights is much easier than you think. The other thing is that it doesn't require you to know C. I got commit rights to core Python not through my amazing C abilities, which I don't have, but through working on the Python standard library, just in Python, and there's plenty to do.
So I would recommend, if you're interested in contributing to Python, that Kushal Das is a great person to talk to; he's very enthusiastic about getting people contributing to upstream. Okay, moving forward. I'm now working for Canonical on a project called Juju, and it's slightly interesting to be here talking to a Python audience, at a Python conference, as I'm no longer a Python programmer, or at least not professionally. I'm still involved with Python core development, obviously, but the project I work on, Juju, is written in a language called Go. Who has heard of Go? So easily half of you, maybe even more. Who's used Go? Quite a few of you. And who's actively developing something or working on something in Go? A few hands, a few hands. So this is a young language, released by Google three years ago, I think, maybe even more recently, but you can see that Go is gaining mind share very quickly. It's at least slightly ironic, given that one of my real passions around code quality is testing, the irony being that Go is such a horrible language to test. Now, that's partly because the tools are immature, so the tooling around testing is just not very good, but it's also the standard problems that you have testing a statically typed language: you wrestle with the compiler, and mocking things is hard, though Go interfaces help to quite a degree there. But if we compare and contrast Python a little bit with Go here, there's a lot to dislike about Go, not just the difficulty of testing it. Most of the problems come, as I said, because Go is a young language; it's a very small language. They're being very conservative about adding features, which is very good. They're much more focused on performance and the runtime, and on making sure that what they add to the language really suits the Go way of doing things. Some of these things may be addressed, but not necessarily very quickly. One of the big things that hurts programming in Go is the lack of generics.
What this means is you can't write a container that works with any kind of object, or if you do, you have to cast to interface{}, Go's equivalent of object, on the way in and then cast on the way back out again. You have to subvert the type system to do it; you can't work with the type system. You can't write a function that will take any type of parameter. For example, you can't write a generic sort function: you write an int sort, a string sort, and so on. There's no operator overloading, so you can't write custom numeric types, and you can't write custom containers that behave like the built-in slice or the built-in map. There's no method overloading, there are no optional parameters and no keyword parameters. If you want to write an API that's slightly flexible, in Python you would do that by adding boolean flags or extra parameters, but they would have a default value, so it's optional to pass them in. That's easy to abuse, but it's a useful way of creating a flexible API. What you do in Go is you either have these extra parameters, and then every caller of your function has to pass them in, so your code is littered with calls like someFunction(parameter, true, false, true), and you have no idea what these additional parameters are doing; you can't even pass them in by keyword to make it clear. Alternatively, you write a separate function; you have to come up with different names, and your users have to remember all these names for every permutation of the API. If you have three optional parameters, that's eight different functions you have to write. All in all, this tends to make Go a more verbose language. You end up writing more code than you would with Python. Some of the problems with Go are definitely not going to be fixed in the future; they're ideological positions from Rob Pike and the guys who are creating Go, and from the Go community itself.
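The Python side of that comparison is worth making concrete. With default values and keyword arguments, one function covers every permutation of options, and the call sites stay readable. This is an invented example (`connect` and its parameters are hypothetical):

```python
def connect(host, port=5432, timeout=30.0, use_ssl=False):
    """One flexible API: optional parameters with sensible defaults."""
    return {"host": host, "port": port, "timeout": timeout, "use_ssl": use_ssl}

# Callers spell out only what differs from the defaults, by keyword,
# so there is no mystery `true, false, true` at the call site.
connect("db.example.com")
connect("db.example.com", use_ssl=True)
connect("db.example.com", port=5433, timeout=5.0)
```

In Go, with neither defaults nor keywords, the equivalent would be either one function whose every caller passes all four arguments positionally, or a family of separately named functions, one per combination.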
The one that I think, and I hope none of my colleagues watch this, because I'll be in trouble for saying it, but the thing about Go that is really just insane, is that there are no exceptions. So every function typically returns a result and an error object, and then about a quarter of your code is if err != nil { return nil, err }, and you're returning the error all the way back up the stack and losing the stack trace in the process. So every large project then implements their own errors package. Facebook have released one; we've got one. There are quite a few of them out there that allow you to wrap error objects and try to put back some of this functionality that the language doesn't provide. But obviously Go is not all terrible, otherwise nobody would use it, and there are a few places where Go really shines. One of those comes because Go is a small language: it's extremely easy to learn. Juju, which I've been working on, is a huge project, and I'm still very much learning: learning the code base, learning the way we do things. But I felt comfortable reading Go and writing Go within my first week, and that really surprised me. I expected the on-ramp, the learning process, to take longer than that. But really, within the first week I was comfortable with a significant proportion of the language. That really surprised me, and it's a real strength, despite my reservations about the Go language. Another place where Go shines, because it's a young community, is this tool called gofmt, which reformats your code, with the indentation and the curly braces, in the Go way. So there are no community arguments about how code should be formatted; everyone just does it the gofmt way. In fact, most people have set up their editor to run this tool on their code on save. So you just bang out your code, you save it, and it gets reformatted the Go way.
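Going back to the error-handling point, the contrast can be sketched in Python itself. The first two functions imitate the Go convention of returning (result, error) pairs; the third uses exceptions. All names here are invented for illustration:

```python
# Go style imitated in Python: every call returns (result, error),
# and each caller checks and propagates the error by hand.
def parse_port_go_style(text):
    try:
        value = int(text)
    except ValueError:
        return None, f"not a number: {text!r}"
    if not 0 < value < 65536:
        return None, f"out of range: {value}"
    return value, None

def load_config_go_style(text):
    port, err = parse_port_go_style(text)
    if err is not None:       # the `if err != nil` boilerplate
        return None, err      # hand-propagated; the stack trace is gone
    return {"port": port}, None

# Exception style: the error propagates automatically, traceback intact,
# and only the code that can actually handle it needs a try/except.
def parse_port(text):
    value = int(text)         # raises ValueError on bad input
    if not 0 < value < 65536:
        raise ValueError(f"out of range: {value}")
    return value
```

The wrapping errors packages mentioned above exist precisely to bolt the lost context (stack traces, error chains) back onto the first style.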
And that's nice, because the code is consistent; everyone's code looks the same. So there's one less thing that you have to mentally adjust for as you read other people's code, and it just reduces those pointless, stupid flame wars about code formatting. But the real place where Go shines is concurrency. The basic model for concurrency is asynchronous goroutines with channels for communication. You call go and then some function call, including just an inline anonymous function if you want, and that's run on a separate goroutine, and you pass in channels for communication between goroutines, or between your main program and the goroutine. That's a very powerful combination, and it's very easy to understand, very easy to read. Even better, the Go runtime can use all the cores on your CPU; you can happily saturate all of them at the same time, and this is something that Python can't do because of the global interpreter lock. If you ask many projects why they chose Go, the concurrency story is often a big part of the reason. The Go runtime itself is capable enough that you can have tens of thousands of these goroutines in process at the same time. That's something we make heavy use of in Juju, and it's why Go is a good fit for our project. There are no threads in sight; obviously the threads are there, but they're under the hood with the scheduler. This is the CSP model of concurrency, and it's a good paradigm to use for concurrency. Another place where Go shines is performance. So we say that Python is fast enough, and generally that's true, but Go is faster, and nobody objects to their code running faster. So you can see that the Go programming language is gaining ground, and we are seeing some projects and some developers migrate away from Python to Go. It's not really something to be particularly worried about. Python these days is absolutely massive.
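To make the goroutines-and-channels idea concrete for a Python audience, here is a rough imitation using threads and queues; the names are invented, and note that because of the global interpreter lock these threads will not saturate multiple cores for CPU-bound work, which is exactly the contrast being drawn with Go. The structure carries over; the parallelism does not:

```python
import queue
import threading

def worker(jobs, results):
    # Each thread plays the role of a goroutine: it receives work on one
    # "channel" (a Queue) and sends answers back on another.
    while True:
        item = jobs.get()
        if item is None:          # sentinel standing in for a closed channel
            break
        results.put(item * item)

jobs, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    jobs.put(n)
for _ in threads:                 # one sentinel per worker
    jobs.put(None)
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)  # the squares of 0..9
```

In Go the same shape is `go worker(jobs, results)` with typed channels, no sentinel gymnastics (channels can be closed), and the runtime schedules tens of thousands of goroutines across all cores.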
Back when I started Python, way back in the distant past, it was kind of a small language; it had this small band of enthusiastic devotees, and it was being used mainly for scripting and automation. Some people were using it for web programming with this crazy thing called Zope. But it wasn't a big language. Two things have really happened since then that have exploded the usage of Python. The first is that computers have got faster. So the fact that Python is a dynamic language, and so it does a lot more work so that you don't have to, matters a lot less than it used to. This is one of the reasons why I'm not particularly worried about Python on mobile. The Python on mobile story is not very good; not many people are using Python on mobile devices. One of the reasons people give is that Python is just too slow. But if you look at the performance of mobile devices, it's rocketing. That particular argument will just fade. Python is used everywhere now. The second thing that happened was the web revolution. The use of computers really exploded, and the need for people to be able to develop rapidly became much more important than the need for the programs to run at supercomputer speeds. Python, of course, is famous for developer productivity. But still, if we look at the future, there are a couple of places where Python is not at the forefront, and concurrency is one of them. That's not something that looks like it will change in the very immediate future. And I also mentioned Python on mobile. Six times as many smartphones were sold last year as PCs. The majority of computing devices, already perhaps, are non-traditional computers, ones that you carry in your pocket. So if you are concerned, or if in 10 or 20 years' time you still want to be programming in Python, you might be thinking: as a community, what can we do; as an individual, what can I do, to help ensure that that happens?
Obviously, supporting organisations like the PSF, the Python Software Foundation, and the PSSI, who are looking to help maintain and develop the community and help move Python forward: that's something that you can do. Get involved with projects like Kivy, that's Python on mobile devices. It is out there; it is possible to write apps for Android and iOS and get them into the various app stores using Python, but they need more users, they need feedback and so on. Use Python 3, which is obviously the future of Python. And for Python concurrency, you can donate to Armin Rigo's work on software transactional memory in PyPy, which is one way that Python will be able to make use of multi-core devices. So those are a few things you can do. But anyway, enough about Go. The substance of my talk today is called To the Clouds. Because we're talking about the clouds, you can expect a lot of buzzwords, a lot of jargon, but one of the things I'm hoping to do is rehabilitate the term "the cloud". It's the most overhyped buzzword in software engineering of the last few years. Anyone here sick of hearing about the cloud? I'm very sorry, you're going to hear some more about it. But what I'm hoping to do is demonstrate how deploying software, services, applications, your scientific computing clusters, deploying them to the cloud, using the cloud model, is genuinely an interesting technology, and why you should be doing it even if you don't want to. So let's start this part of the talk with what I call my brief, potted and mostly wrong history of the cloud. The picture here on the screen is one of the original servers used by Google, from their early days.
And at the time, they made what was considered a radical decision: that they would build their systems, their cluster of servers, not on big-iron mainframe servers or big servers, but on cheap commodity hardware. To cope with the fact that the individual machines were unreliable, or certainly less reliable than the servers being used by their competitors, they would build a fault-tolerant architecture that could cope with individual machines failing. They could take machines out and the system would still keep running. Even more importantly, they could quickly and cheaply scale up, add new servers and more new servers, and they could scale up much more rapidly and much more cost-effectively than their competitors. This, along with their search algorithm of course, was a big part of their early success, and eventually they released this platform as what we now call a platform as a service: App Engine. Amazon took this to the next level with the infrastructure that they built to run their huge retail website, and they took a slightly different approach with their cluster of servers, providing virtual machines as deployment targets, what we now call infrastructure as a service, or IaaS. Now, developers and devops tend to love this approach, because the paradigm that it provides to you is a machine, and we know what to do with machines: we're already deploying software to machines. So with IaaS we can just do what we're doing, but do it in the cloud. When Amazon made this available publicly, they became the dominant player in the cloud market. Other IaaS public clouds include Microsoft Azure, HP Cloud, and a whole host of OpenStack clouds, like Rackspace for example. So, some of the problems that deploying to the cloud solves, which I'm going to look at briefly here, are resource underutilisation and overutilisation, dependency hell, and hardware management. Now, the core way that the cloud solves these problems is by
separating your deployment layer from your physical hardware. When you deploy to the cloud, you deploy to these virtual machines, and you don't care which physical machine it's going to run on. We'll look at the consequences of this as we go. So, the first one: resource underutilisation or overutilisation. Perhaps a situation you might be familiar with: you want to deploy a new public-facing website or public-facing service, or perhaps even internally you have some tools, like a dev wiki or a dev issue tracker, that you want to deploy, and so you need a server. So you buy a server, and you run the software, and you're using about 10% of its resources. Or, alternatively, you work for a company where the processes around getting new hardware commissioned and put into the network are slow and cumbersome. I've actually worked at a company where it took a month to get new hardware commissioned and put in. It wasn't just the slow process around permissions; it was the whole system administration of getting a new machine commissioned, getting it built into the network. It took a month to do that. It was horrible. And so the simple answer is: just stick everything on the machines we've got, and everything starts running slower and slower. Well, with the cloud, first of all, if you're running on a public cloud you don't need any hardware at all, so that solves that problem: we can easily just get more virtual machines. But even not using a public cloud, we can fully utilise the resources of the machines we have by putting more virtual machines on them, and if we need more capacity we can just add more machines to the cluster and let the cloud infrastructure take care of a lot of the administration. The processes are much more straightforward. So, dependency hell; perhaps even more painful. I don't know if you've ever been in this situation, where you have two different applications you're running, and they both use the same Python library, but they're using slightly different versions of that library. One's an
older application that you haven't ported to the latest and greatest version, and what that means is you've got two applications that can't both be deployed on the same machine; you have to keep them separate. Perhaps you make sure that both of your applications or services are running the same version, and any time you want to upgrade the version you have to do everything in lockstep, upgrade everything together at the same time. That leads to situations, sometimes, where you're working on the new release of your software, you take advantage of some new feature in the library you're using, you've upgraded the version, you've fixed all the places where you were still using deprecated features (because you hadn't switched the warnings on, and they've now been removed), you have your software ready to go, you've deployed, and you've forgotten that something else on the same machine is using the same library. You break it, and it's an emergency rollback: get everything fixed again, undo the database migrations. You are using migrations, aren't you? Otherwise you've got to manually fix your database. I've actually been in that situation: we deployed something, forgot about something else using the same dependency, and had to do an emergency rollback. Now, one way of solving that problem is to use virtualenv to provide isolated environments for each of your pieces of software. That works fine, but what you've just done is give yourself a security problem, because when there's some security vulnerability, perhaps in Django, you've now got several copies of the library on your server that all need to be upgraded urgently, and all in non-standard locations on the file system as well. System administrators tend not to like that, and you can't use the packaging system of your OS to do the upgrades for you. Another way of solving the problem of dependency hell is to have an isolated deployment environment for every application or service
that you run, by having them all running on separate virtual machines. And finally, the last problem that the cloud solves, which we've already alluded to a great deal: hardware management. When you're using cloud technology, if you have a machine that fails, or you need to take it out of the cluster, you can do that without having to take down running services, and when we're at capacity, adding extra capacity with new machines is a lot easier. This is dynamic server management. Now, of course, if you're already running and managing your own servers, you probably don't want to take your data and put it on the cloud, throw away the machines and fire your system administrators; they're not going to be very happy about that, and it may be a more expensive and less secure solution for you. Instead, we can still take advantage of these benefits of the cloud. Which reminds me of something I forgot to mention: a big benefit of the cloud, not just the problems that we leave behind but something that we gain, is that if, alongside using the cloud, we have automated deployments, then we get the easy deployment of new applications and services, and we get the ability to quickly scale out existing services. Again, I don't know what the processes are like where you work: how quickly can you deploy a new application, what are the processes you have to go through, how easy is it to scale out your existing services? This is something that the cloud gives us, but in order to take advantage of these benefits we need to have automated deployment processes. So even if you're just running a few servers and a few services, we can still get these benefits of the cloud by using a private cloud; we don't have to hand the keys of the kingdom over to some third-party cloud provider. And these days, if you're using a private cloud, that probably means OpenStack. I'd be in trouble with Kushal at this point if I didn't mention that there are alternative
cloud technologies, like Eucalyptus for example, which Kushal is a fan of. There are also other interesting technologies like Metal as a Service, MAAS, which is another product from Canonical, and that provides you the dynamic server management advantages of the cloud but working directly with bare-metal servers: data-centre technology. It's certainly worth looking at if you're managing a large number of servers and you have your own systems, your own frameworks, and you don't want to run something like OpenStack; or you can run OpenStack on top of MAAS very easily, but you need to manage your bare metal. But certainly OpenStack is the leading private cloud framework, and it's all written in Python, and this is great news for us as Python developers, because what it means is there are a whole host of companies out there who want to pay you to work on OpenStack or with OpenStack. OpenStack is absolutely huge: massive in terms of lines of code, number of sub-projects, number of people using it, and the amount of functionality it provides. In my opinion, it's one of the biggest things happening in the Python world right now. So, I mentioned that it's very important, if we're to get the full benefit of the cloud, that we have automated deployments. Now, an important principle for this is that we treat our servers like livestock and not like pets. What I mean by this is: pets are unique, they have names. Anyone work at a place where you give each of your servers different names? I do see a few of you. Bad, stop doing it! Pets are unique; they're lovingly cared for, hand-reared, and when they get sick you spend a lot of money and time and effort to get them well again, because if one dies, replacing it is going to be a real pain: you're going to have to find one that looks the same and lie to the kids about it. Livestock don't have names, they have numbers, and if they get sick you probably take them out and shoot them and get another one. And this is how we should treat our
servers. We ought to be able to tear down our application servers with a single command. We ought to be able to provision our application servers with a single command. We ought to be able to scale out, add new units of our services, with single commands, not having to care about all the intricate details of machine configuration and machine administration that only one person knows about. Or even worse, over the last ten years three or four people have worked on the configuration of this machine, and if it dies, we'd better hope the backup works, because there's no one person who knows how to put this thing back together so that everything runs. That's a terrible situation to be in. It makes scaling out and adding new services really difficult, and it makes it much harder to deploy new services.

And this is where I refer to my notes, so I remember. The reason DevOps people and developers love the infrastructure-as-a-service paradigm is that it provides the "you have a machine to deploy to" model. So we get to leave a whole bunch of problems behind, but one of the problems we take with us is that we still have to provision, administer and configure the machines in the cloud. There are a whole host of tools that will help you do that, but it still means you have to do it. Some of those tools you're probably mostly familiar with: Chef, Puppet, Salt, Ansible. It's quite a common problem, and quite a lot of people are trying to solve it. Even Docker, which is the latest and greatest, the latest hot new thing: really that's about image-based workflows, but in as much as people are using Docker images as deployment targets, it's about machine provisioning.

Let me show you a yet still more excellent way. As developers, and even as DevOps teams, we really don't want to think in terms of machine configuration and administration, or at least I don't want to. What I want to think about is the services I'm deploying and how they're related. My application needs a web server, it needs a load balancer and a cache, it
needs a message queue, and those are all related. That's what I want to think about: the key parts of my infrastructure and how they're related. Not, individually, what dependencies they have, what ports need to be open, firewalls, all of these kinds of things. Those are details which, ideally, some tool would take care of for me. And this is something we call service orchestration. At Canonical it's something we've been talking about for a few years, and it's kind of picking up in the industry: what we want to do is orchestrate our services, think in terms of our services, not think in terms of machine provisioning and administration.

This on the screen here, this is a view from the Juju GUI. Now, in production you'd use the command-line scripts and configuration that you can keep in version control, but the GUI shows deployed services and how they're related, and even for production it's a great way to view the health of your running systems. So some of the key benefits that Juju provides as an automated service deployment tool (and not just deployment: it manages the life cycle of your services): we've mentioned service orchestration; cloud independence is an important one. It works with Amazon EC2, Microsoft Azure, HP Cloud, OpenStack, Metal as a Service, and also LXC and KVM containers for local testing. That's great, because you can take exactly the same configuration, exactly the same system that your production deployment uses, deploy it locally, check that everything still works, make changes, redeploy, destroy the environment, recreate it, check it still works. Even better, on your continuous integration server. You do have a continuous integration server, don't you, running a battery of automated tests against a system provisioned in exactly the same way as your production stack? Who does have that? Ooh. Now, some of you may be researchers and students, so you have excuses. For everyone else, the basic rule is: if it's not
tested, it doesn't work. You may not know why or how it doesn't work, but believe me, it doesn't work. If it's tested, it might work; but if it's not tested, it definitely doesn't. Even worse is when you have people going through manual tests by hand, and that's terrible, and the basic reason is because people suck: they're bad at doing repetitive things. And computers love doing repetitive things, that's what they're there for. So if it can be automated, automate it.

So LXC containers are a great tool for this. Again, another show of hands: who's heard of and used LXC containers? So, some of you. LXC containers are lightweight Linux containers. They share the kernel of the running system, so you get native performance, but it's an isolated container, just like a virtual machine. So they're great for things like this: for local testing, and as deployment targets for your continuous integration. They're also great for developing. If setting up a development environment is complex, requires installing a bunch of dependencies, maybe adding private PPAs, private repositories, doing that on your own system is a pain, because if you get it wrong you've screwed up your system and you have to reinstall everything. Instead, create an LXC container: lxc-create (it's slow the first time because it downloads the image, but after that creating a new LXC container is very lightweight), lxc-start, SSH into it, and then you're just SSHed into your Linux environment, installing your development requirements there. You can either share the home drive or bind-mount into it, and then you do your work inside the LXC container. And if you screw that up, blow it away and create another one. Very lightweight, very easy to do. It's well worth exploring, even if you're not interested in any of this, which of course you all are, I know.

And certainly as Python programmers, one of the big advantages of Juju: Juju knows how to deploy services and applications through things called charms.
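To make that point about automation concrete: once you've got a service running in a container, even a trivial automated smoke test beats checking by hand. Here's a minimal sketch; the URL and function name are my own illustration, not anything Juju provides.

```python
# A trivial automated smoke test of the kind worth running against a freshly
# provisioned environment (local LXC container, CI box, or cloud instance).
from urllib.request import urlopen


def smoke_test(url, expected_status=200, must_contain=b""):
    """Fetch a URL and check the service is alive and serving what we expect."""
    with urlopen(url, timeout=10) as response:
        body = response.read()
        # HTTPResponse.status is the numeric HTTP status code
        assert response.status == expected_status, (
            "unexpected status %s from %s" % (response.status, url))
        assert must_contain in body, "response body missing expected content"
    return body

# usage (illustrative; point it at whatever your container actually serves):
#   smoke_test("http://10.0.3.131/", must_contain=b"<html")
```

Because it's just a function, it slots straight into whatever test runner your continuous integration server already executes.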
You can write charms in any language, but the best language to write them in, and the language used for most of them, is Python. So you get to codify your deployment infrastructure in Python code, and that's the heart of DevOps, isn't it? We don't manually maintain our systems, we don't manually cowboy in changes that no one can quite remember afterwards. We keep everything as scripts and configuration files, completely automated, that we can hand off to somebody else. This is DevOps, and with Juju you get to do it with Python. I think with some of the other popular deployment tools you get to do it with Ruby, and I assume that most of you are here because you like Python. And one of the best things is that for the key parts of your infrastructure, which probably most of you are using some of, there's a whole host of charms already written. There's a community around the writing and creation of these charms. So things like Hadoop, Elasticsearch, Ceph, MongoDB, Postgres, MySQL, Squid, Apache, HAProxy: for the parts you're using that aren't your core applications, there's probably already a charm out there that you can use.

So let's have a quick look at an example. This is to play with Django. The app is called dpaste, it's on GitHub, and you may be familiar with the application already: it's a pastebin. Come on now... sometimes this machine gets a bit antsy coming back from sleep. Ooh, can you actually see that? Let me try reducing the resolution and making it a bit bigger. So, this is the pastebin application, and it's running on my local system. The URL, which you probably can't see there, is a 10.0.3 address, so that's a cloud IP address, but it's actually running in an LXC container. Just to demonstrate: if I run juju status, it spits out a load of information about the running services on the system, and that's an IP address. If I look at the output, which I will just scroll past, there's the Apache IP address, 10.0.3.131: that's the public address, served by Apache, running through this pipeline of services. So actually,
it's not just a charm that's been deployed, it's a bundle. This is jujucharms.com, which is the charm store. If you search for dpaste there, you can find this bundle of services, developed by my colleague Simon Davie. The charm store has what is actually a fake Juju GUI, so you can do a test deploy in the charm store just to see how the services look, configure units and so on; it doesn't actually do a deploy. But I've also installed the Juju GUI into my local Juju, so I've got a view of the running services. I'm not going to attempt the full deploy here now, partly because it takes time, but also because it's network-heavy (it fetches the dependencies and so on) and I don't want to tempt the demo gods too much by relying on the network here.

But here's what we've got in this bundle of services. We've got Apache doing HTTPS termination; this is just the standard charm. We've then got HAProxy working as a load balancer, configured to send the same URL to the next thing in the pipeline, which is Squid working as a cache. HAProxy here is configured to send the same URL to the same unit of the cache, because obviously a cache only works if the unit that has that URL cached actually gets the request. Then the next thing in the pipeline is HAProxy again, doing standard load balancing using the standard least-connected algorithm, and that's going to our application, dpaste, which is the thing right in the middle. You can see, just off to the side, the juju-gui charm that's been deployed. That's the running instance of the Juju GUI that we're looking at, not connected to any of the other services, but it gives us this live view into what Juju is doing. There's an interesting one to the left: this is gunicorn, which I never know how to pronounce properly, so I'll just say it confidently and you'll all assume I'm saying it correctly. Gunicorn is the WSGI runner for our dpaste app. We could have built that into the dpaste charm. It needs to run on the same unit as dpaste, but
somebody else has already done the work of writing a gunicorn charm for us, so that's running as a subordinate charm, which means it runs on the same unit, and we can just pass the configuration information to it. There's also a PostgreSQL service connected to dpaste: dpaste is a standard stateless Django application, and Postgres provides the persistence layer. We can see here on the left there's some information about dpaste. We've only got one unit running. Through this URL here I can configure new units, using constraints for the CPU power and the memory I want them created with. If this was going to Amazon, it would then create new virtual machine instances using the constraints we provided. And if I want to scale up dpaste and add new units, this would be the same as, from the command line, juju add-unit dpaste, telling it to add five new units. So we can just scale up. I might actually try that and see what happens. This has been running locally on my system for several days now, so quite what state it's in is anyone's guess. Scale out with these constraints: default CPU cores, one gig of memory, which will be interesting, as there's only three gig on this system anyway. So let's have a look... that's going... so, that's now doing it. You can see the bar has changed to yellow, because only one fifth of the units are actually up and running, and the other ones are now in the process of coming up. This is going to fail, actually, because I have wifi turned off, so I don't get any silly notifications, so provisioning those machines locally is going to fail.

But the interesting thing is: a charm defines a bunch of hooks that are run when things happen, and a charm provides interfaces. The postgres charm provides a postgres interface, and the dpaste charm says "I know how to talk to a postgres interface". So when we have a relationship, the charm hook runs, there's some Python code that runs, and it's told by Juju, the live Juju state server, that this relationship has now been joined. Postgres
puts the information into the Juju configuration: it creates a username and password, so there's no need to hard-code those in your configuration if you don't want to; the postgres charm can create them and put them into the configuration for the application. The application is then told: the configuration has changed, you've now joined a relationship, and you can get that information from Juju. So the dpaste charm's hook runs and does the setup to provide the database connection, and the postgres charm's hook runs to make that information available. This is service orchestration. The great thing is, because we've established this relationship here, as we scale out and add new units, Juju knows that there's a relationship between dpaste and postgres, so as I add new units it adds that relationship as well. If we added more postgres units, the charm is clever enough to know that it's now running in replication mode and there's some reconfiguration to happen. So with these charms properly in place, as we're defining our services, we don't need to think in terms of how to configure and set these up: that's the job of the charm, and it manages not just the configuration of the individual service but these relationships too. We now have four pending units, and we've got a little triangle telling us that something is wrong, and this is because it can't provision these machines.

Something I didn't mention that is important from the point of view of cloud independence: the deployment target is separate from the description of your bundle of services. Say you're a young start-up and you're deploying to Amazon. Then you get onto Reddit and Hacker News and you're scaling up, it's getting more expensive, and your customers realise their information is on American servers and they really don't want that. So you want to move to deploying to OpenStack, or deploying to your own servers with OpenStack on them. You just retarget your deployment configuration.
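To make the hook mechanism a bit more concrete, here's a self-contained sketch of the application side of a database relation. In a real charm the relation data comes from Juju (via tools like charmhelpers' relation_get); here it's just a plain dict, and the function name is my own illustration, not part of Juju's API.

```python
# Sketch of what a db-relation-changed hook does on the application side:
# take the settings published by a postgres-style relation and turn them
# into configuration the Django app can use.

def render_database_config(relation_settings):
    """Build a Django DATABASES dict from relation settings.

    relation_settings stands in for the data Juju hands the hook; in a
    real charm you would read each key with relation_get.
    """
    required = ("host", "port", "user", "password", "database")
    missing = [key for key in required if key not in relation_settings]
    if missing:
        # Relation data arrives incrementally; Juju re-runs the hook as
        # the remote side publishes more settings, so "not ready yet" is
        # a normal condition, not an error in the charm.
        raise KeyError("relation not ready, missing: %s" % ", ".join(missing))
    return {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": relation_settings["database"],
            "USER": relation_settings["user"],
            "PASSWORD": relation_settings["password"],
            "HOST": relation_settings["host"],
            "PORT": relation_settings["port"],
        }
    }

# usage (illustrative): settings the postgres charm might have published
#   settings = {"host": "10.0.3.50", "port": "5432", "user": "dpaste",
#               "password": "s3cret", "database": "dpaste"}
#   DATABASES = render_database_config(settings)
```

The point of the design is that the credentials never appear in your own config files: the postgres charm invents them, publishes them over the relation, and this hook picks them up whenever Juju says the relation has changed.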
You run it against OpenStack, export your data, re-import it, and there you've moved over, nice and easy.

So, I only have a few minutes left. I keep mentioning these charms. A charm is just a directory structure on disk. We can fetch them from the charm store if an existing one fits; we can pass in some YAML configuration if it's a charm that needs configuring, which we will keep under version control, obviously; but we can also have them locally, and for your own application you will probably take an existing charm and configure it, set it up, specifically for your application. That blue you really can't read, I'm sorry about that. This is the output of tree, or part of the output, for a charm. We have a bunch of metadata in the YAML files, and the hooks that I mentioned, which are run when configuration changes or a relationship is joined or left; here they're symlinks to the same hook.py file. And then we've got a charmhelpers package that's bundled into the charm. Here's some of the Python code: this is the install hook, and surprise surprise, it's installing a bunch of stuff.

I mentioned that the services you saw were from a bundle of services, and this is the configuration file to deploy those. This is the first part of the YAML, the deployer bundle YAML: the configuration for the dpaste app. This is the Django framework charm, and it's actually fetching the dpaste app from GitHub when you deploy it, so, well, that's not quite how you want to deploy in production. And then the other services, and some of the configuration options for them. Just a text file.

So, actually getting Juju up and running once you've got it installed. Although Juju deploys Ubuntu workloads, we're actively working on getting it deploying CentOS workloads and Windows workloads. We have a working prototype of deploying to Windows, so if you're in the unfortunate position of having to maintain a Windows network: Juju deploys SharePoint, Juju deploys SQL Server.
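Going back to the bundle configuration for a moment: a juju-deployer bundle of the kind being shown is just a YAML text file, roughly of this shape. The service names, charm choices and options here are my own illustration of the layout, not the real dpaste bundle from the charm store.

```yaml
# Illustrative deployer bundle, loosely in the shape of the one on screen.
dpaste-demo:
  series: precise
  services:
    dpaste:
      charm: python-django
      options:
        vcs: git                 # the app itself is fetched from GitHub
        repos_url: https://github.com/...
    gunicorn:
      charm: gunicorn            # subordinate: runs on the same unit as dpaste
    postgres:
      charm: postgresql
    squid:
      charm: squid-reverseproxy
    haproxy:
      charm: haproxy
    apache:
      charm: apache2
  relations:
    - [dpaste, postgres]
    - [dpaste, gunicorn]
    - [haproxy, dpaste]
    - [squid, haproxy]
    - [apache, squid]
```

Because the whole topology is a text file, it goes into version control alongside your code, which is the point being made throughout the talk.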
But even though it's only deploying those workloads at the moment, there are clients for Windows and for the Mac, so we can deploy to OpenStack from Windows or from a Mac, if that's what you need to do.

So, once we have Juju installed, we generate a config file, then we do juju switch local: that tells Juju that we want to deploy locally. If I was deploying to OpenStack, I would source my novarc, the standard OpenStack file that puts my credentials into the environment; Juju can just read those from the environment, so I'd just do juju switch openstack. You can edit environments.yaml to define new environments, but there's no need to mess with it for the standard cases. Similarly with Amazon, there's a couple of environment variables I can put in place to make my credentials available. And then juju bootstrap, and this creates the running state server that's going to manage the life cycle of the services using the provider API: talking to Amazon, talking to Azure when it creates virtual machines, making sure that only the required ports are open, and managing charm upgrades. Having this running state server that manages your services, knows what state they're in and checks they're still running and alive: this is a key difference between Juju and other automated deployment tools. Having done that, we can then call juju status and it will tell us the state of our local Juju. And there's the basic command for deploying something from the charm store: juju deploy juju-gui. And for deploying a whole bundle of services, I ran juju-deployer on this configuration bundle.

So that's a very whirlwind tour of Juju itself. Hopefully I've demonstrated to you why the cloud is an interesting technology and how it's useful even if you're managing your own servers. Thank you very much. Do we have any time for questions?

Thank you, Michael. We have time for three quick questions.

If you compare Juju with Chef, then what are charms
analogous to? Cookbooks, or recipes?

I'm really sorry, could you repeat the question?

If you compare Juju with Chef, then what are charms similar to? Are they like cookbooks, are they like standalone recipes which can then be combined together? How do the charms compare to Chef, and the hooks, are they standalone? If you want to deploy, say, PostgreSQL or MySQL, then you have standalone recipes for them, standalone charms for them, so can charms combine with each other and form a logical umbrella?

So, do you mean can I deploy different services to the same virtual machine? Can you logically combine different charms into a single charm? Well, yes, you can: you can simply have one charm do both, you would then create a new charm. But really there's not a great deal of advantage to that, because you're then maintaining your own charm that does a lot more work than you need, and you lose the advantage of separate, isolated deployment environments. It's much easier just to use the two existing charms separately. So certainly you can combine them. But if what you really want to do is deploy services onto the same virtual machine, just to use fewer virtual machines and have denser deployments, one thing we're working on is allowing the use of LXC containers within virtual machines. We already have that for Metal as a Service: we achieve higher density and fuller resource utilisation by deploying to LXC on MAAS. The problem with having containers inside Amazon instances and Azure instances is container addressability, and we're working on the network model for that, but that's one way we'll solve the problem. You can already say juju deploy juju-gui and have it installed on the same machine as the state server, so logically they're separate units, but they're combined, running on the same machine. So if that's what you're after, we can get it that way. Combining charms could also be done, as I
mentioned with dpaste: we have a separate gunicorn charm, which is a subordinate charm that always runs on the same machine. That's one way of combining charms, and we've done it that way rather than building the gunicorn part into the dpaste charm.

So, is Juju like a counterpart of Red Hat's OpenShift? How does it compare to OpenShift?

So, Red Hat's OpenShift is a platform as a service, more competing with Heroku than with the likes of Amazon and Azure, and OpenShift is a hosting technology rather than a deployment tool, or maybe it combines both; I'm not really very familiar with OpenShift. But something that's interesting in recent years is that I think infrastructure as a service has largely beaten platform as a service in the market. There are a few big players: Heroku; Salesforce is huge in platform as a service, but maybe that's really software as a service. Heroku may be huge, but if you look at how they compare in size to Amazon, really infrastructure as a service is much bigger, it seems to have beaten out platform as a service. And I think that's because if you're targeting a specific platform, you're really locked into that, whereas with infrastructure as a service you're able to move provider much more easily, particularly if you're using a tool like Juju rather than directly hard-coding your calls to the Amazon API. If you're working directly with the Amazon API, you're tied into Amazon. If you're using a deployment tool like Juju, or some of the other tools that know how to deploy to any of these IaaS providers, you're much less tied in. That's harder to achieve with platform as a service.

Some time back I used one of the tools that you created, rest2web, so I'm just curious to know what motivated or inspired you to create that static site generator.

I'm really sorry, can you... rest2web?

So nowadays there are so many static site generators available, so what inspired or motivated you to create this static site generator,
rest2web?

Oh my goodness. So nowadays there are so many static site generators, why did I write rest2web? The answer is: when I wrote rest2web, there weren't so many static site generators. I wrote it for my own website about ten years ago, so I was way ahead of the curve. But you probably shouldn't use rest2web; the more modern ones are much better. The only reason I use it is because I'm a bit stuck with it. But there we go, thank you.

Okay, thank you very much.