Right, good afternoon everybody, and welcome back from lunch and socializing and networking. My name is Mark Shuttleworth. I'm the founder of both Ubuntu and Canonical, and I currently lead products for Canonical, which spans both our cloud work, public and private, and our edge or IoT work, which is equally entertaining and interesting but not the focus today.

How many folks here have been to a presentation by me, or by Canonical generally, at an OpenStack Summit before? Great, lots of new folks. What we like to do is focus on actual operations of actual OpenStack in these kinds of contexts. In a time of disruption and uncertainty, you tend to get what we call the fog of disruption: lots of people shouting out lots of opinions about what's important in this new world. That's partly what makes the new world exciting, because the book isn't yet written. We're all trying to figure out what's going to work and what's not, what's the right way to do things and what's not. Everybody's opinion is, in a sense, equally valid, and there are a lot of people at a summit like this with a lot of opinions.

But if I look at what we do, I think it really comes down to asking a simple question: what really matters, and what really matters in the long term? Which is not to say that things aren't different; operating in a cloud way is very different from operating before cloud. Both are enterprise-class stories, but what really mattered changed fundamentally between people running enterprise operations before the cloud and people running enterprise operations with the cloud. And now we're going into another one of those phases of uncertainty, disruption, and opinion-fests, which you can broadly call containers.
And just as with clouds, containers have actually been around a while, but because they've lit a fire under people's imaginations, they're a source of lots of opinions, disputes, arguments, and Twitter fights about how it's all going to work. In that complicated world of containers, Kubernetes is really catching people's attention at the moment. I think folks will come to understand Kubernetes as one very important part of a broader story, just like OpenStack is one very important part of a broader story. There was a time when people said the whole data center would be OpenStack, end of conversation. We didn't think that, and now it turns out that OpenStack is an important part of the data center. It's never going to be the whole data center, and that's okay. That's just fine. OpenStack will do just great doing the things that OpenStack is great at.

What I want to do today is zoom in on the question of Kubernetes and OpenStack, and Kubernetes across other clouds: a hybrid cloud question. But I'm going to start, as always, with this set of questions: what really matters? And I'm going to put up what I hope will be some useful ways to think about the choices people have to make as they're getting into Kubernetes. The same would apply if you're looking at other kinds of containers, but let's focus on Kubernetes itself.

I'd say the most important thing is to maintain the freedom to choose infrastructure independent of your choice of Kubernetes. Why would I say that? Because Kubernetes deals in what we call process containers. There are lots of different kinds of containers, and it's much easier if you give each kind a name, because then two people can have a conversation without talking past each other.

There's a class of containers where, inside the container, you really care about a single process, and you really care that that process comes from a developer you trust. You can think of that container like an app, but only in the cloud sense of apps. Gmail is an app, right? But inside Gmail there are a bunch of processes, and each of those processes is probably running alone in a little container: what we would call a process container. Docker is an example of a process container. rkt, which you'll have heard about, is another. runc is another. These are all ways to set up a process container.

Those constructs are completely separate from machines. I can run Docker on Amazon. I can run Docker on VMware. I can run Docker on bare metal. I can run Docker anywhere I can get a machine. I can have a CentOS machine and run Docker on it; I can have an Ubuntu machine and run Docker on it. So the most important thing is to separate the question of where you're going to run these process containers from the infrastructure underneath. Because if you end up in a situation where you can only do Kubernetes on VMware, because that's where you built it, you now have a really big problem: you can't give your developers the freedom to go fast, and yourself the freedom to then put that developer stuff on AWS or Azure or Google. And the ultimate buyers, effectively the CIOs, really want the freedom to put stuff on a public cloud like Azure or on private infrastructure like VMware or OpenStack.
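As a minimal illustration of that decoupling (hostnames here are hypothetical, and this assumes Docker is already installed on each host):

```shell
# The same process container runs unchanged on any substrate that can
# give you a machine -- a public-cloud VM, a VMware guest, or bare metal.

# On an Ubuntu VM in a public cloud:
ssh ubuntu@cloud-host  'docker run -d --name web -p 80:80 nginx'

# On a CentOS machine on-premises:
ssh centos@metal-host  'docker run -d --name web -p 80:80 nginx'

# The image, the command, and the resulting process are identical;
# only the machine underneath differs.
```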
So the first thing is to deconflict these two choices: infrastructure and machine choices on one side, process container and app choices on the other. Now, of course, free speech and lots of opinions: right here at this summit you'll find furious arguments about how OpenStack should change to grow a set of APIs that would give you Kubernetes natively out of OpenStack, or various permutations of that. Honestly, that's just a waste of time, because from a CIO's perspective, having a different set of APIs on OpenStack to get a Kubernetes doesn't make sense in a world where you don't know whether you want to put a particular app on a public cloud or on private infrastructure. So no VMware-specific Kubernetes APIs are really going to get much traction, and no public-cloud-specific capabilities, if they're exclusionary, are really going to get much traction, because we have to decouple what developers do from infrastructure choice.

Infrastructure choice is really an economic choice. Do I want to rent? Do I want to buy? Do I want to rent in Germany and buy in San Francisco? These are reasonable economic choices for businesses to make, and I'm going to separate them from developer choices. There's another reason to do this, which is that Kubernetes, like Docker, like Mesos and so on, are fast-moving pieces of infrastructure, and we probably want to think about those as project-level constructs.
So the other reason to do this is that you allow two different teams to make different choices. It would be perfectly reasonable for a business to say: that project over there, we're going to run on Azure; this project, we're going to run on OpenStack; and that project, we're going to run on VMware. You might disagree with the choice of technologies, but at the end of the day those are reasonable choices for businesses to make at the same time. Large businesses do lots of things that are at odds with each other simultaneously, and that's okay. Separating these choices gives the business flexibility economically, and it gives developers flexibility at a project level to do different things in different places.

What we have to do is make that freedom not cost too much. It has to be really easy for the business to own and operate these different Kuberneteses on different clouds or different substrates. Separating the operations of Kubernetes from the operations of OpenStack, or from the operations of your public cloud, is a good way to make things easy, because you concentrate on the key thing that matters.

But like all sharp swords, this one has two edges. Kubernetes is essentially a layer for coordinating Docker processes: applications. And those applications are typically cloud-native applications, which means they're built in a way that can really take advantage of cloud services. If you look at AWS, they don't just provide you with VMs; they also provide you with load balancers. If you look at Azure, they don't just provide you with VMs; they've got different kinds of storage. If you look at Google, they don't just provide you with VMs; they've got these ancillary services wrapped around them, which are in each case really interesting and really good.
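One concrete place where an application binds to those ancillary cloud services is a Kubernetes Service of type LoadBalancer, which asks whatever cloud the cluster sits on to provision its native load balancer. A hedged sketch (the `web` app name is hypothetical, and this assumes the cluster was deployed with a cloud provider integration enabled):

```shell
# Expose a workload through the cloud's load balancer, whatever cloud
# that happens to be. The cloud provider integration translates
# "type: LoadBalancer" into an ELB on AWS, an Azure LB on Azure, or a
# Neutron LBaaS instance on an OpenStack cloud.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF

# The manifest is cloud-agnostic; only the provisioned balancer differs.
kubectl get service web
```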
So simultaneously you want to do two things. First, you want to decouple your thinking about process containers, Kubernetes, Docker, from your choice of cloud, because as a business you want to maintain the freedom to choose. But if you've decided to deploy something on Azure, it makes sense to then get the most efficient deployment on Azure. If you've decided to deploy something on OpenStack, it makes sense to get the most efficient OpenStack. And if there are ancillary services built into the cloud that are cheap and super reliable, things you don't have to worry about, then binding into those and integrating with them is really interesting and really useful.

That's something of a contradiction: I just said separate these choices, and then I said, having separated them, go and lock yourself into the cloud. But in fact, if you think about it, the fact that you can spin up a Kubernetes wherever you want, and then optimize that Kubernetes for the cloud it's on, means you're not locked in.
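A sketch of what that looks like in practice with Juju (the model names are hypothetical; `canonical-kubernetes` is the charm bundle shown in the demo later in the talk):

```shell
# Spin up the same Kubernetes on whichever substrate the business chose;
# the deployment step is identical, only the model's cloud differs.
juju add-model k8s-aws aws
juju deploy -m k8s-aws canonical-kubernetes

juju add-model k8s-azure azure
juju deploy -m k8s-azure canonical-kubernetes

# Private infrastructure: bare metal via MAAS, here registered as a
# cloud named "maas-prod" (an assumed local configuration).
juju add-model k8s-onprem maas-prod
juju deploy -m k8s-onprem canonical-kubernetes
```

Cloud-specific optimizations (load balancers, storage) can then be layered onto each model without changing what developers see.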
Your developers shouldn't know which cloud you're on at that stage, unless they themselves are choosing cloud-specific services for machine learning or other capabilities those clouds offer. At a Kubernetes level, you should be able to choose where you want to run something for a project, and then choose whether you want to use the local, cloud-specific optimizations. That's what we'd call integration of the cloud and Kubernetes.

And then there's probably the biggest thing that most people don't think about, but I hope you will. You fall in love with a thing like Kubernetes, and it's very exciting, so all of your best people will be interested in it. The CIO will be paying attention. There'll be monthly metrics on how big your cluster is, how many apps you've moved to it, how in-production it is. But it really starts to pay value to the business, just like OpenStack, when you stop thinking about it in that very excited way. It really starts to pay for the business when infrastructure disappears into the background, so the business can focus on the things that are really unique to the business. We saw this very much with OpenStack. People couldn't get enough OpenStack; they were going to have their very best people on OpenStack; OpenStack was super important to the board and to the future of the company, and blah blah blah. Now I think we're in a much healthier place, where everybody says, geez,
I just want an OpenStack that works. What's happening is that everyday operations and costs are suddenly starting to dominate the conversation, because at the end of the day, infrastructure is just infrastructure. If the CIO is happy to buy disks from Amazon, that tells you that at a CIO level, what kind of disk it is doesn't really matter, because you don't know what you're getting when you buy from Amazon. I think that's super useful and super important to think about in the early stages. You're going to be living with this thing every day, and it will not be the most important thing. Hopefully it won't be, because what should be most important is the apps you're building on top of it. But that's an easy thing to lose sight of, because in the moment it's containers, containers, containers; it's super exciting and everybody has opinions. So this is the one thing to remember: at the end of the day, this stuff should just fade into the background. It should just be there, so you don't have to worry about it at two o'clock in the morning, but more importantly, so it doesn't show up in a surprising way in the budget. It should be there at a reasonable cost, and never deflect or constrain your other decisions.

In a similar way, these are all constraints on how you might think about operating Kubernetes. We know that open source has become the way infrastructure gets defined. We're living in a world where innovation, even mission-critical, corporate-defining innovation, leaks out of the building, and in many cases is deliberately pushed out of the building as open source. Remember the good old days, when open source was a side project from people who had spare time at work? Today, in many cases, critical things that companies are inventing get pushed out onto GitHub before they're a product.

So we have to find new ways to think about how you know a product is going to be around for a long time. It used to be that your enterprise database would get released every three years, and before it got released it would have gone through very rigorous QA and documentation, extensive beta testing, and so on. You were essentially blind to all of that, because it didn't exist to you until it suddenly showed up as a press release and a product launch. Whereas today, we literally watch the guts of software development in public. It's great. It's very exciting. It's the best time ever to be a software person, because you get to watch Google, Microsoft, Oracle, and others do their R&D in public. Kubernetes and OpenStack are no different.

But the flip side is that it's a bit messy, and the best way to be confident you'll be able to go with the flow is to think carefully about the places where you step out of the flow, where you climb out of the river, effectively. If we look at OpenStack over the last couple of years, there were many cases where people stood up and said: you should use our OpenStack because it has this particular feature, or we fixed this thing in OpenStack, or OpenStack had this problem and our OpenStack doesn't. At the time that might have sounded very convincing and compelling, but in the long run, there's no single feature you might have had on Kilo that turns out to be more valuable than everything you could get in Liberty, Mitaka, Newton, and Ocata.
And so staying close to upstream is really the best way to say: look, there may be one thing right now that's super attractive and interesting, but if I preserve my ability to upgrade, I'm going to get a thousand new things every six months. And as a business, getting at any one particular feature isn't the most important thing.

You saw in the keynotes today the big focus on managed services, and I think they talked about one of the world's biggest large-scale deployments of multiple OpenStack clouds moving to a fully managed service. That's actually Canonical's BootStack service. The customer there essentially said: in the early stages of OpenStack, we fell in love with the idea of modifying it, with the idea that we ourselves could change it. Now we have a bunch of clouds, and we're not exactly sure how they've been modified; we're not exactly sure which modifications are where; and we're not exactly sure what will break if we upgrade them to newer versions of OpenStack. So the project, essentially, is to help them put the economics of operating OpenStack into a box, so that it becomes super predictable.

Thinking about staying close to upstream means thinking about all of those things. There's nothing wrong with investing in patches for OpenStack, but consuming them as part of the main release cycle is a much healthier way. And as an operation, the key question to ask is: how am I going to upgrade?
It still amazes me. We recently went through an RFP process, and as it started, it became clear that in the company's weighting of how it thought about OpenStack, technology was 35 percent. I think that's quite smart, because at the end of the day, OpenStack is maturing, and pretty much all of the distributions are going to offer the same sort of technology. Economics are much, much more important. If you think about why a business even bothers to do OpenStack, it's to give itself the option of owning infrastructure alongside renting infrastructure. Smart businesses will do both, so they want to give themselves that option. If you ignore economics, you're forgetting the real reason to do OpenStack at all. It's all about economics.

Anyway, 35 percent of the weight was for technology, but of that, if you treat the technology weighting as a hundred percent, only eight percent, less than a tenth, was "can you upgrade?" That's really crazy, because if you're rolling out for a large business, you may be rolling out tens or hundreds of clouds. Think of a telco rolling out 200 clouds across North America. If you commission one a day, by the time you get to the end you're two OpenStack releases out of date. So not only do you need to roll them out one a day, you need to upgrade them every six months, which means you may have to upgrade some of them twice in the same year before you've finished rolling out the whole thing. So upgrades, your ability to own the thing and upgrade the thing, and to do all of that predictably and cheaply, turn out to be super valuable. In your weightings, staying close to upstream and everyday costs turn out to be super important.

Any comments or questions? You're welcome to call me out as a lying, cheating, thieving South African. Is this resonating with those of you who've owned OpenStack for a little while?
Yes, it's very pertinent. So the question is: when you're upgrading something that has many, many moving parts, how do you assure stability? I'd say the number one thing, the zen of how, is repetition. The first time you upgrade anything complicated, it's a science project. And that's true of everybody; you might have a better background in physics, but it's still a science project. The great thing about science is that you learn stuff, because it doesn't work quite the way you thought it would. So the real key is to use the same upgrade process that many other people have used, because there are rough edges, and if lots of people have used that process, other people will have found the rough edges so you don't have to. That's the first thing.

For us, one of the keys is that we're able to reuse operations code across completely different architectures. Normally, if you think about how people automate things: the bank will say we're totally automated, the insurance guys will say we're totally automated, the oil and gas guys will say we're totally automated. But if you look at them, there's not a single line of Chef or Puppet or Salt or Ansible that's the same across any two of those operations. Or, if they started with the same thing once upon a time, they've forked beyond recognition, which means that for each of them, the first time they do it, it's a science project.

We have a substantial advantage because of our use of Juju, which encapsulates operations and decouples them from architecture. We can use the same operations code even where two OpenStacks have completely different architectures. I might, for whatever reason, have storage and compute and control plane all on the same machines, with or without containers, and somebody else may have said: no, no, I've got reasons to separate those things. Two totally different architectures. But the code that does the sequencing, what do I tell this hypervisor node before I tell that control plane node, before I tell this database node, before I tell that messaging node, can all be the same, because it's independent of architecture. For an Oracle database, the right sequence of steps when you're backing it up and updating it is independent of whether those things are VMs, physical machines, or even containers.

So the key thing came from pulling operations out of the architectural automation. We call that modeling: the architecture comes from the model, but the operations are reusable across all the different architectures and models. That's the heart of it.

People measure their progress in different ways. One of the things we measure very closely is how many of our customers are on the latest version of OpenStack. It's really important to us to help them get there, and it's very clear that the difference between us and others is that our customers tend to find it fairly easy to stay on the latest version. They upgrade running clouds; the actual workloads don't need to know. In fact, some of our toughest customers are now pushing to be able to maintain operations on the cloud, in other words, to keep creating disks and creating VMs
even while you're actually upgrading those services. That's the current state of the art for us: being able not only to leave all the workloads in place and upgrade the cloud in a couple of minutes, but to leave the cloud operating, so you have essentially zero downtime from the point of view of the things consuming resources. That's super fun, actually, getting to that level of operations. But the key thing is to be able to reuse that knowledge easily across lots and lots of people, so that it's not a science project for everybody. Let somebody else push us to do the R&D so that everyone else can reuse it for free.

Okay, so now the question: how do we think OpenStack and Kubernetes fit together? The simplistic answer is: beautifully. They are effectively dealing with different classes of object. In my view, it's not up to me to say what OpenStack is; there's a board for OpenStack that makes decisions, and technical committees, and PTLs and projects and tents and things like that. That's where OpenStack gets defined. But if you were to ask me, I would simply say: look, we have two completely separate and independent layers. We have machines on demand as one story, and for that we need software and APIs that give us machines on demand. At some level, everybody knows what a machine is. The beauty of what Amazon did is that they said: here, you can have a machine by the hour. Is it an actual machine? No, it's a virtual machine, but we don't care. It looks like a duck, walks like a duck, quacks like a duck; we can use it like a duck. We can certainly do R&D or prototyping for ducks on that kind of stuff. Everybody was quite happy with that, and now people do a lot more.
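The operations-reuse point above, the same charms and the same sequencing driving two different OpenStack architectures, can be sketched as two Juju bundles that differ only in placement. This is an abbreviated, hypothetical fragment, not a complete deployable bundle:

```shell
# Hyperconverged: compute and storage share the same three machines.
cat > openstack-hyperconverged.yaml <<'EOF'
applications:
  nova-compute: { charm: cs:nova-compute, num_units: 3, to: ["0", "1", "2"] }
  ceph-osd:     { charm: cs:ceph-osd,     num_units: 3, to: ["0", "1", "2"] }
machines: { "0": {}, "1": {}, "2": {} }
EOF

# Separated: the same applications, each class on its own machines.
cat > openstack-separated.yaml <<'EOF'
applications:
  nova-compute: { charm: cs:nova-compute, num_units: 3, to: ["0", "1", "2"] }
  ceph-osd:     { charm: cs:ceph-osd,     num_units: 3, to: ["3", "4", "5"] }
machines: { "0": {}, "1": {}, "2": {}, "3": {}, "4": {}, "5": {} }
EOF

# The charms -- the operations code that sequences restarts, backups
# and upgrades -- are identical; only the model (architecture) changed.
juju deploy ./openstack-hyperconverged.yaml   # or ./openstack-separated.yaml
```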
So we have infrastructure for machines on demand, and then we have infrastructure for processes on demand, and those can stay completely independent. It's up to OpenStack to have the clarity of purpose not to be confused about that. Maybe we will, maybe we won't, but I'm not confused about it, and I suggest you not be either, because the confusion won't help. You want to be able to do Kubernetes trivially on OpenStack, or on public cloud, in exactly the same way, and that means it can't be something special to do with OpenStack.

There is an opportunity, I think, to provide services in OpenStack and bind those to Kubernetes when people do choose to put their standard Kubernetes on OpenStack. The obvious example is LBaaS, load balancing as a service, which to me makes a lot of sense as part of the broader picture of Neutron. And we already see LBaaS; it's well defined. The couplings between LBaaS and Kubernetes are very obvious: the way Kubernetes thinks about an application is as a set of processes with IP addresses across which you can load balance. So there are very clear places where OpenStack should provide a standard infrastructure-as-a-service primitive, and a Kubernetes could bind to that and use it as an optimization. But that's it. As far as OpenStack is concerned, it doesn't know that it's Kubernetes; that's just a set of IP addresses it's load balancing across.

Processes on demand: that's what Kubernetes does very well, and it's also what Docker does very well. Machines on demand: think of three classes of machines.

First, physical machines, and for those we recommend MAAS, Metal as a Service. It's very widely adopted; in fact, if you're playing a computer game these days, the server you're talking to was almost certainly deployed with MAAS. People deploying tens of thousands of machines, who want to do that super reliably and super efficiently, who want to move 500 machines from this game to that game this morning, and this afternoon move another 500 depending on what gamers are doing, use MAAS. It's just physical machines on demand.

Second, virtual machines on demand: that is really the domain of OpenStack. When it's done well, when it's done properly, there's nothing that can beat the economics of OpenStack for VMs, virtual disks, and virtual networks on demand, even if you take it as a managed service like BootStack. That's if you're okay with capex, with buying servers and data centers. Some people aren't okay with that, and public cloud obviously suits them. Large organizations should do both. But for your on-prem infrastructure, there's nothing that can beat OpenStack.

And third, containers. In the world of containers, just to confuse things a little, you can take the container idea all the way to the point where it looks like a VM. That's not what Docker does. Docker essentially says they're only interested in opening up the container just enough to fit one process in, and that's super useful, because it gives us a very clean way to ship that one process.
But other people, and Canonical in particular, are interested in extending that idea to a container that feels like a virtual machine. In other words, if you logged into it, if you SSH'd to it, it could be CentOS, and you wouldn't know that it wasn't a CentOS VM. I call that containers for legacy workloads, or containers for old people. It's a really interesting conversation, because simultaneously we'll talk to a bank about Docker and Kubernetes, the new-style process containers and cloud-native architecture for their next applications, and about LXD, which is basically: where can you substitute KVM with something lighter and faster, without changing the application? So you can get machines that are containers, but don't worry too much about that; it's not Kubernetes, it's just machines.

And then on top of that, processes: how you organize larger numbers of processes. You can use Docker Swarm, you can use Kubernetes; here, obviously, the thing we're focused on is Kubernetes, across all of these different places you can get machines. You can get machines from Google, from OpenStack, from Oracle, from Rackspace, from SoftLayer. It's kind of fantastic that you have all of those guys working for you. Oracle will give you a machine on demand. That's great. So will Google, so will Microsoft, so will Amazon. So really, thinking about how to deliver Kubernetes cleanly on top of all of that is the story of the day.
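The machine-container idea looks like this in practice with LXD; inside, the container behaves like a full machine rather than a single process (the container name is arbitrary):

```shell
# Launch a machine container: a full OS userspace, not a single process.
lxc launch ubuntu:16.04 legacy-app

# It feels like a VM: log in, inspect it, run an init system, sshd, cron.
lxc exec legacy-app -- bash
# root@legacy-app:~#  -- a whole distro is running in here

# Unlike a Docker process container, nothing about the workload changed;
# only the hypervisor (KVM) was swapped for something lighter.
```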
Oops. Let's see what I've got. That's fun; it's not over yet, so I'm going to need to unsuspend this.

This is a box from a Canadian company called Kontron, and it has nine physical nodes in a 2U form factor, which is pretty amazing. I understand they actually built it for telco NFV, because you can essentially drop OpenStack straight onto it. People who know Canonical will know that that's just MAAS, and I'm showing the MAAS dashboard over there, plus a Juju bundle that Kontron have designed of an OpenStack that fits it perfectly. So it's literally one command to put an OpenStack on there. But for today's purposes, let's rather put Kubernetes on there.

If I say `juju status`, I have a blank model. I'm a bit nervous about all the machines turning on and off when I'm not doing anything; maybe they're just sleeping. And I want to say `juju deploy kubernetes-core`, and with a little bit of luck... What am I doing? I'm building a model. I'm reusing those Kubernetes operations, and kubernetes-core is a pre-canned bundle of the simplest possible Kubernetes you might put together. With a little help from the internet, I'm building a model of this super simple Kubernetes and mapping that model onto a couple of physical machines. What you should see is physical machines turning on, and then, once those machines have operating systems installed, the model of the Kubernetes applications will be built on top, and that will give me Kubernetes. It's going out to the internet for each part of the topology, fetching all the pieces of Kubernetes.

So I'm going to leave that and switch to the public cloud. This is a slightly bigger model of Kubernetes that I deployed on Google an hour ago, so I can walk you through it. I typed exactly the same command, but instead of kubernetes-core I chose canonical-kubernetes, which is a slightly bigger model: it has some additional pieces, and it scales some of those pieces out.

Let's look at what's in that Kubernetes. Oh, we're slightly cropped on the screen, but essentially the heart of it is kubernetes-master and kubernetes-worker, and for kubernetes-core, that's all you'll see. That's the architecture of Kubernetes: there's a controller, and the work is distributed across a set of machines, with an agent on each machine, the worker, effectively, and those are glued together. etcd, from CoreOS, is used as a database, essentially to keep track of what's been put where. We use a key distribution system called EasyRSA, essentially a little certificate authority that hands out keys to each of those pieces, so that when they connect to each other, they know who's doing what. And Flannel provides network tunnels between the workers so they can talk to each other, giving IP addresses, effectively, to all of those pieces.

In this model, it's built out over a bunch of different machines. You can see we've got multiple kubernetes-workers, because that's fun: it's elastic, we can just grow more of those. And three units of etcd, which is essentially a full production etcd. So for everyday operations, say I've got a Kubernetes on the public cloud; this one's running on Google. We're not currently binding it to the Google-specific capabilities, but we are working with Google on that. I just used the standard topology here, but what we want to do is address that question of binding to the features of the cloud. I can deploy exactly the same thing, exactly the same way, on Azure, and then add the features that bind to their load balancer as a service, or their storage primitives, for example. But say, for example, I wanted to scale this out.
I could just say I want to add three units to kubernetes-worker. Because I've mutated the model, essentially, I've now said I actually want a total of six units of kubernetes-worker, and that means Juju needs to go out and get another three machines from Google, which are pending. So you can read that chart very clearly: you can see how changing the model essentially fetches resources from the cloud as needed.

How are we doing on that bare metal? So you saw the deploy, and here, because it was kubernetes-core, I've really only got two physical machines that I'm asking for. You can do kubernetes-core with just two VMs or just two physical machines, and all of those services are mapped onto them. They'll be busy turning on to get the operating system deployed. If I go and have a look at that in MAAS here, you'll see there are those two machines, and they've gone from switched off to turned on; the operating system is getting deployed, and then the model of Kubernetes will get built on top of that.

Okay, so now I want to do something a little crazy. That Google Kubernetes is in the middle of a big change: I scaled it out, right, so I'm in the middle of scaling out that Kubernetes. And when I deployed it, I deliberately asked for an old version; I deliberately asked for Kubernetes 1.5. Who's played with Kubernetes? Who's actually built it? How many of you built with 1.5? How many of you built with 1.6? Okay, so this is a classic problem. Kubernetes changes every three months, and so people go out and build with Kubernetes 1.5, and then 1.6 ships. How much work is it to go and re-plumb that infrastructure, and so on and so forth? How stuck do you get? Well, I'll show you how stuck you get when you do it this way. So say I want to upgrade Kubernetes 1.5 to Kubernetes 1.6.
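Scaling out is a single mutation of the model; Juju acquires the extra machines from the cloud behind the scenes:

```shell
# Grow from three workers to a total of six; Juju requests three
# more machines from the cloud and they show up as "pending"
juju add-unit kubernetes-worker -n 3

# Watch the new units move from pending to active
juju status kubernetes-worker
```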
Well, I actually only have two pieces of Kubernetes here that I need to worry about, right? Here in Google I've got the kubernetes-master and the kubernetes-worker, and I'm just going to go juju config kubernetes-master and tell it to change a config item, which tells it to switch its channel. So I'm done; through Juju, through the model, that will trigger all of the work on the right VMs to upgrade the control plane to 1.6. I can do the same thing for the workers, but let me just do a status there. Okay, so what's down is the machine from Google; remember, I'm in the middle of that expansion, so I'm busy growing this cluster from three units to six units, and while I'm doing that I'm busy upgrading, just to give you a sense of what it feels like when you're operating in this way.

So I've upgraded the master; I now want to go and upgrade the workers, and you'll see an interesting thing: they will say blocked. So I'm upgrading kubernetes-worker, and if I go to status it's going to change, and the ones that are up are going to say that they're blocked. Now why are they blocked? Because to go from 1.5 to 1.6 on the worker, I have to restart the process: there was a version 1.5 of the process, and I have to turn it off and turn on the 1.6 of the process. But the way these process containers work, restarting or killing that process is going to kill all of the workloads, all of the things that it started. So I don't want to shotgun that; I don't want to do them all simultaneously. I changed the config, I changed the model, I said I want 1.6 across all the workers, but I actually want to control the process. Now, I could just put it into shotgun mode.
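The upgrade itself is just a config change on the model. A sketch, assuming the charms' channel option of the time (the exact option name varies by charm revision):

```shell
# Point the control plane at the 1.6 release channel;
# Juju drives the upgrade work on the right VMs
juju config kubernetes-master channel=1.6/stable

# The same change on the workers; they will report "blocked",
# waiting for an operator-controlled restart so that running
# workloads are not killed all at once
juju config kubernetes-worker channel=1.6/stable
juju status kubernetes-worker
```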
It's just an option, and then it'll just do it. But it's much more interesting to see the process of doing it this way. So what we do is, in the Juju framework, we have the ability to have operational actions, which you can then deliver to specific units. So now if I go and say I want to do an upgrade action just on kubernetes-worker/0, that gets queued off; it's asynchronous, and if I go and look at the status in a little while, you'll see it's now preparing for the upgrade action. And I think it's now doing that, and very shortly it'll be done. So you see how I can control the rollout of the upgrade across the cluster.

Now, lots of the biggest Kubernetes deployments are on Ubuntu. The Microsoft Azure Kubernetes service, for example, is on Ubuntu; eBay and some of the other larger corporate users of Kubernetes are on Ubuntu. And for the folks who've actually gotten deep into this and sweated with the Chef and the Puppet needed to stand up Kubernetes at scale, this is amazing; this is magic when they see it, and they say: now I understand why you do it that way.

So, how are we doing on time? We're at the top of the hour. Okay, I'm going to take questions, but while I take questions my colleague James is going to come in and plug in and essentially do that deploy on top of OpenStack in the background. Sorry, James. James has a very good speaking voice and answers questions quite well too. But if anybody has questions, I'll take them while James essentially just shows you a stand-up of Kubernetes on OpenStack. So now you will have seen Kubernetes getting stood up on bare metal; you'll have seen it getting stood up on Google, and then upgraded on Google while it was being scaled out on Google; and then we'll show you the same on OpenStack. So: complete decoupling of those two concerns. Any questions? Was that useful?
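Rolling the worker upgrade one unit at a time, using the charm's upgrade action. A sketch; the action is asynchronous, so progress is polled rather than waited on:

```shell
# Upgrade the workers one at a time, so workloads on the
# other units keep running throughout
for unit in kubernetes-worker/0 kubernetes-worker/1 kubernetes-worker/2; do
    juju run-action "$unit" upgrade
    juju show-action-status    # poll until this unit's action completes
done
```

In "shotgun mode" you would skip the loop and let every worker restart at once; the per-unit action is what gives the operator control of the rollout.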
Yeah, any questions?

Is MAAS related to Ironic? MAAS is something of a wise old grandfather to Ironic; that's probably the nicest way to put it. MAAS predates OpenStack, and is now used very widely to operate whole data centers. Folks, there was an opinion fest, and there were folks who said no, they wanted bare-metal provisioning inside OpenStack, effectively. We disagreed with that, simply because we thought: look, most of OpenStack is there to give you virtual machines, virtual disks, virtual networks. Most of the knobs and dials on the dashboard are complete lies, or irrelevant, when you're dealing with physical hardware. So we continued with MAAS, and now I think most people agree that it's actually better to give your data center to MAAS and then use OpenStack for the virtual goods. If you want bare-metal performance with OpenStack, the best way to do it is with containers. Remember, I showed you that kind of container, LXD, that is like a machine. So OpenStack on LXD, nova-lxd, is great: if you give people a single container on bare metal through OpenStack, then it feels like real OpenStack and they get all the raw performance of the machine. So yes, we see a lot of folks who ask that question, though less so now, because I think people have tried both and they generally go with MAAS. Other questions?
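A hedged sketch of that nova-lxd arrangement: the lxd subordinate charm attaches to nova-compute, and instances are then launched through the normal Nova workflow but land as system containers on the metal. The image and flavor names below are illustrative:

```shell
# Make the compute nodes serve LXD system containers
# instead of KVM virtual machines
juju deploy nova-compute
juju deploy lxd
juju add-relation lxd nova-compute

# Tenants keep the ordinary OpenStack workflow; the "instance"
# is really a container with the machine's raw performance
openstack server create --image xenial-lxd --flavor m1.large demo-instance
```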
Yes? Yes, quite a few of our customers are interested in things like Kolla or OpenStack Helm. You can find official Canonical images for Kolla on Docker Hub, and we will happily support our customers if that's how they want to do it. You could use Juju to deploy Kubernetes at scale on MAAS and then run Kolla or OpenStack Helm to spin up OpenStack on top of that. It does feel to me a little bit like a hammer in search of a nail. We will support customers who go down that route, but most of our customers have said that Juju plus MAAS gives them really beautiful OpenStack operations at scale, and they'll wait for other people to polish off the rough edges on other approaches before they go there.

Okay, I've got the tricky bit of connecting to the screen done. So this is a charm-based cloud running on Ubuntu 16.04; it's the one we actually run all of our QA for Ubuntu and for the OpenStack charms project on. I can see I've got a couple of instances running, and I've got a Juju controller already running, and we're going to deploy the same bundle that Mark deployed on GCE on this cloud as well. So that's exactly the same command that I typed to deploy on the Google public cloud, effectively. What we should see once this has started going... just let it get its legs... so we're now talking to the Juju controller and saying: hey, we want to build a model which has these applications on these VMs on OpenStack. The Juju controller will then turn around to OpenStack and say: hey, we need some VMs. And hopefully those will start showing up in the nova listing in a sec. Good, last question.
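Registering a private OpenStack with Juju and repeating the deploy is the same handful of commands. The cloud name below is illustrative:

```shell
# Teach Juju about the private OpenStack, then bootstrap onto it
juju add-cloud mystack         # interactive: endpoint, regions, auth type
juju add-credential mystack
juju bootstrap mystack

# Exactly the same bundle as on the public cloud
juju deploy canonical-kubernetes

# Meanwhile, the requested VMs appear in the Nova listing
openstack server list
```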
Yes? Yeah. Yes, you could. I mean, lots of people use MAAS without Juju. They use MAAS just to own the data center and allocate machines: allocate machines to Windows, allocate machines to CentOS, allocate machines to Ubuntu. MAAS will deploy all of those things, and MAAS has no idea whether it's giving them to Juju, or to Ansible, or to Chef, Puppet, anything else. So it's very easy to just get machines out of MAAS through a REST API; if they're in your quota, you can get them. And that makes it easy. Kubernetes will give you Docker processes on demand, right? You're kind of making life difficult for the Kubernetes people if you start asking them to give you physical machines on demand. Everything has its zen; everything has its purpose. There are chisels and hammers and mallets, and they're all good. Do you know what I mean? You want to use things the way they're good. So K8s will give you Docker processes on demand; MAAS will give you physical machines on demand. Those are each different things. What you could do is use, for example, Ansible and bash or Python to deploy Ubuntu across a bunch of machines, or CentOS across a bunch of machines, and then put Kubernetes on top of that; you would just be doing for yourself, manually, the stuff that Juju does for you. So really, don't be ideological about this, would be my thoughtful advice.

Good. Thank you very much. I hope that was useful.
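Getting machines out of MAAS directly, without Juju, really is just its REST API. Through the maas CLI it looks roughly like this; the profile name, server URL, and constraints are illustrative:

```shell
# Log in with an API key generated in the MAAS UI
maas login admin http://maas.example.com:5240/MAAS/ "$APIKEY"

# Allocate a machine matching some constraints (within your quota)
maas admin machines allocate cpu_count=4 mem=8192

# Deploy an OS onto it, then hand it to Ansible, Chef, or anything else
maas admin machine deploy "$SYSTEM_ID" distro_series=xenial

# Release it back into the pool when finished
maas admin machine release "$SYSTEM_ID"
```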