Can you hear me? "Pardon?", you're supposed to say at that point. So hi, my name's Mark Baker. I am not the person who is supposed to be delivering this presentation. Tom, unfortunately, was unable to make it, so I'm going to do my best to cover for him. We do work in the same team, so aside from knowing many secrets about him, I have run through this with him, so hopefully we should be able to do that. It does mean you could probably catch me out with tricky questions if you wanted to. The subject of this talk is making the economics of OpenStack work, and it's essentially going to be a comparison between OpenStack and a well-known public cloud, you can guess which one, how OpenStack stacks up against it, and how I think we can try to address that. I work for Canonical, the company behind Ubuntu; you can probably guess that too. So this comes with a warning: most of the examples I will give are Ubuntu examples, and most of the tools I will talk about are Ubuntu tools. If any of you find that offensive, now is your opportunity to leave. Thank you for staying. So I will skip over Tom. Modern software is going through a change, right? It's the change from enterprise software that was traditionally vertically integrated, with very tight coupling between application, operating system, and hardware. That has changed. Now we're seeing software made up of very many different, often smaller, components that are spread across very many different machines. Complex application architectures contain many pieces that, instead of running on one or two or a handful of machines, are running on many more, and we see the different components that make up an application being striped across many, many servers. This is very typical of a scale-out implementation, right?
The reason we have scale-out software architectures is that we're having to deal with more, right? It doesn't matter whether it's more data, more users, or more connections: just more. At Canonical, certainly, and in the Ubuntu community, we call this big software; we're in the age of big software. Examples of big software are things like Hadoop, which, if you don't use it, you certainly know of, and platform-as-a-service technologies: Cloud Foundry, or OpenShift, or Heroku, all those kinds of things, comprised of many components spread across different platforms. And OpenStack is big software. There are 54 different OpenStack projects, right? You can install all of them on a single machine if you want, but it's very many different components related to very many other components, and in any sane architecture you would have those spread across multiple machines. That creates a management challenge, shall we say. How do you deploy that environment? How do you manage that environment? So with big software, and with this move to scale-out, we see a kind of push and pull: as we get more scale, we have to disaggregate, split things into multiple units, and as we disaggregate, we get more scale. To explain that, let's take a telco example. Anyone from telco? Hey, so, hi telco people. In a typical telco environment, and I could have picked on any NEP, any network equipment provider, we're used to providing services like, whatever, Packet Core, or IMS, phone connectivity things, shall we say. And telcos would go and buy that from a vendor like Ericsson or Alcatel-Lucent or Nokia or Huawei or one of the others, and it would come as a big, fancy box that would do those things, and do them extremely well.
The inner workings of that box were kind of a mystery to people who didn't work at the vendor that provided it, but it didn't matter, because it did exactly what was required of it, and did it really well. But it was also expensive, right? Reassuringly expensive, to steal Stella Artois' marketing line. So as telcos look for different options, they've moved to NFV, and of course you'll hear all about that tomorrow on the NFV track if you don't know about it already. You see that this means moving to standard industry hardware, for want of a better word, common off-the-shelf hardware, using, I'm taking Ubuntu as an example of course, a Linux-based operating system; it could be CentOS or Red Hat or SUSE or a different one. Sometimes with OpenStack, sometimes with LXD, our container hypervisor, sometimes with VMware. Then another operating system above the line, and then a series of VNFs, which is fancy telco speak for applications, right? And a VNF could be a firewall, or some messaging service, or a load-balancing service, or a whole bunch of different things. So that's disaggregation: disaggregating a service. And that is big software. That NFV service is big software: many different components that we have to try to manage and try to scale. The same is true in enterprise, where we see a traditional enterprise stack, good enterprise quality, HPE has it built into their name, likewise Red Hat, Oracle, of course SAP, the big industry heavyweights of enterprise software. We see that being disaggregated, there we go, in exactly the same way. Ubuntu and CentOS in this example, sometimes with OpenStack, sometimes with VMware, sometimes with containers, increasingly, there we go, finally. But then with guest operating systems, could be Ubuntu, could be Windows, could be an enterprise Linux of your choice, and then other applications on top, right?
A nice data analytics example there: Hadoop and Spark, MySQL, SAP, right? So disaggregation means many more moving parts to manage. That doesn't mean the SAP and Oracle environment over on the left is easy to manage; of course, that has its own challenges too. But typically those vendors have great management tools, so things like operations and backups and upgrades are handled pretty well, whereas on this side there are potentially very many different ways of managing things: upgrading, updating, all those problems. So enterprises have this disaggregation. They end up moving towards big software, and they do it for good reasons: they want flexibility, they want scale, they want other things. So, economics and operations. I didn't want to pull out specific examples, so take vendor A, vendor B, vendor C as typical enterprise software vendors. The slide builds are really slow on this, I don't know why. That is the traditional enterprise way of doing things, and prior to these guys arriving on the right-hand side, that was the way of delivering infrastructure, right? And businesses typically sucked up the cost of that: this is what it's going to cost to deliver my infrastructure. Then they'd tweak around the edges, you know, the great Unix-to-Linux migration of the early 2000s, which I was proud to be a part of when I was at Red Hat, was great; that cut out some costs and gave some flexibility. But the general way of doing things was accepted, until Amazon came along in roughly 2007, and then, because Amazon was so successful, Microsoft said hey, we should get in that game, and off they go. Suddenly you saw that there's actually a much more cost-efficient way of delivering infrastructure. And I think that if OpenStack is to succeed, its comparison point isn't over on the left with the traditional enterprise way, right?
You should be able to do OpenStack more cost-effectively than traditional big enterprise software; its real comparison point is the public cloud vendors. Just like gravity, it's very hard to defy the laws of economics, and workloads will naturally go where the infrastructure is most cost-effective, certainly in any sane business. So really, OpenStack needs to be cost-competitive with those public cloud vendors. And actually, because we live in a hybrid world, nobody, or very few people, are going to go all in on one model or another; they're going to have on-premise and they're going to have public cloud, so you need some kind of marketplace and commonality of operations across both. So, here are some numbers. Your mileage may vary, and these numbers are not going to stand scrutiny from the finest analysts in the world, right? Ah, everyone's taking pictures. The numbers came, by and large, from some OEM hardware vendor websites and the Amazon TCO calculator, which is very useful for putting these things together. So, in terms of capex: server hardware, based on 25 servers. This is a kind of mythical application, but for 25 servers we need a load of hardware to buy; some networking, so we're looking at a couple of racks with top-of-rack switches and power distribution units; some storage as well; and some software. That software number is for roughly 25 nodes of OpenStack from a vendor of your choice. I know different vendors have different pricing models and different ways of doing it, but it's roughly $500 per server per year. So, total capex is around that number: $236,000. Then the operational expenditure: maintenance, power, and space values. These, by and large, come from, as I say, the Amazon TCO calculator.
Given where it comes from, it's probably biased to some degree, right? So, as I say, it's not going to stand scrutiny, but as an indicator it's roughly around that. Seem fair? Anything missing? People, right. So, who's hiring OpenStack people? No one, right? I mean, this event is a big job fair, to a large extent; a lot of people shuffling between different companies. It's a good time to be an OpenStack engineer, supposedly. If we look at the numbers, and this is coming from indeed.com salary trends, the average OpenStack engineer is being paid in the region of about $170,000 a year. I don't actually know if that's fair or not; I work in the UK, it's a different market. But, supposedly. So, once we add that in: for our 25-server OpenStack cloud, we're probably looking at two engineers to build it, manage it, maintain it. Adding that cost, we're looking at over a million dollars over a three-year period for those two people, assuming they don't get pay rises in the meantime, which would be optimistic over three years. So, it's quite a chunk of money. If we look at public cloud, and this is AWS, even though the slide doesn't say so: 25 of their large instances, plus some storage, plus AWS support, so somebody we can call, comes to around $450,000. Add one person over three years, and you can argue whether that's fair or not, but the total is $820,000. So, that's the comparison point: just under $1.6 million over a three-year period versus $820,000. Quite a difference. Given the choice, where do you think you'd put your workloads? What are you going to do? So, how do we get those costs down? How do we address that? First up, look at the model people are using to deploy their OpenStack. This is from the latest OpenStack user survey: most people are using unmodified packages from the operating system, be it Red Hat, SUSE, Ubuntu, whatever, or unmodified packages from a vendor distribution.
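Before moving on, the staffing arithmetic above, which dominates the on-premise total, is easy to sanity-check. This sketch uses the talk's rough figures; the assumption that the single AWS-side engineer earns the same $170,000 is mine, since the talk doesn't specify that salary.

```python
# Back-of-the-napkin check on the staffing cost over three years.
# Figures are the talk's rough numbers, not exact quotes.

SALARY = 170_000   # average OpenStack engineer salary (indeed.com trend)
YEARS = 3

on_prem_staff = 2 * SALARY * YEARS   # two engineers for the 25-node cloud
aws_staff = 1 * SALARY * YEARS       # one engineer (salary assumed equal)

print(f"on-prem staff over {YEARS} years: ${on_prem_staff:,}")  # $1,020,000
print(f"AWS staff over {YEARS} years:     ${aws_staff:,}")      # $510,000
```

Two engineers alone cost over a million dollars over the period, which is why the rest of the talk focuses on reducing the people cost rather than the hardware cost.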
So they'll be using a vendor distribution of OpenStack: Mirantis OpenStack, or RDO, or whatever it is. That's going to be a little less labor-intensive than modifying packages or building them yourself. But that's only one piece, because it may help you get OpenStack deployed; what we really need is the operations piece, because, as the numbers show, it's people that are expensive. We need to get to a place where we have fewer people. How do we do that? Well, the open source way is to crowdsource operations: to crowdsource skills and reuse them. We need reusable operations that we can all benefit from. Reuse requires some kind of encapsulation; packaging is a good example of encapsulation. And encapsulation requires a model: we need to be able to model how our services fit together. Here's a very simple example of how we might model an OpenStack: we've got Neutron, Nova, Swift, Glance, Keystone, Horizon. It's not a 100% accurate model, but just as an example, we have these things connected together. To enable that reuse, Ubuntu has a tool called Juju, and we do exactly that: we model different services together. This is an example of what we call a bundle. Among these services you can hopefully recognize RabbitMQ and Ceph; these are applications. We define what an application service is in something called a charm. Here are a bunch of services that we have modeled in a nice little GUI environment, and we say, okay, Ceph connects to, whatever it is, Keystone, and RabbitMQ connects to MySQL, and Horizon connects to Nova, et cetera; these are the relationships between them. We collect those charms, those services, up into a bundle, and we can share these things. I can share an individual service, the definition of a service, with you. I can share the definition of a bundle, what that looks like. This one is relatively simple; it's pretty much OpenStack core with Ceph on the side.
But it gives us a model, and it gives us a means to share that model. As I said, these things are charms. A charm is essentially analogous to a Chef recipe or a Puppet manifest, or something like that. But charms declare an interface. Take this really simple example, MySQL and RabbitMQ: MySQL has a set of interfaces, things that it can connect to, essentially. db-slave, for example, if we're in a sort of master-slave environment; syslog, because we want to be able to log to syslog centrally; et cetera. And likewise RabbitMQ. But you'll see that MySQL has an interface that says "I provide MySQL", and RabbitMQ has an interface that says "this is how I consume MySQL", so that when we add a relationship between the two, it knows what to do. That's adding a relationship. We also need to be able to tell the service what to do when things happen; usually we call these event handlers. So, for example, when we add a relation, or remove a relation, what happens? Do we need to create tables? Do we need to drop tables? And likewise on the RabbitMQ side. These things have to be expressed. So you can see, reasonably quickly, that a charm can be quite a complex thing, which is why crowdsourcing them and getting people to develop them is a good thing. We, being Canonical, do a lot of the work on a lot of the base-level charms: all of the OpenStack stuff, all of the very common open source applications. We've done a lot of the heavy lifting on that. But we also get great community contributions from people defining, say, how to add relationships for MediaWiki, or whatever it is, OpenStack project X, new OpenStack projects. This model allows us to define complex topologies.
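The provides/consumes matching just described can be illustrated with a toy model. This is not Juju's actual implementation (real charms declare interfaces in their metadata); it's just a sketch of the idea that a relation is only meaningful when one side provides an interface the other side requires:

```python
# Toy sketch of charm interface matching. The charm data here is
# illustrative, loosely based on the talk's MySQL/RabbitMQ example.

charms = {
    "mysql":    {"provides": {"mysql", "db-slave", "syslog"},
                 "requires": set()},
    "rabbitmq": {"provides": {"amqp"},
                 "requires": {"mysql"}},   # "this is how I consume MySQL"
}

def can_relate(consumer: str, provider: str) -> bool:
    """A relation is valid if the provider offers an interface
    the consumer declares that it requires."""
    return bool(charms[consumer]["requires"] & charms[provider]["provides"])

print(can_relate("rabbitmq", "mysql"))  # True
```

Because both sides declare their interfaces up front, "add a relationship" becomes a checkable, automatable operation rather than hand-written configuration.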
And given that OpenStack is a complex topology, we can see a number of these different services running together. We define what the relationships are; the charms expose those relationships and run events, the hooks, when we want to do certain things. This modeling approach also allows us to model scale. Say we have the messaging server on one side and the database server on the other, one unit of each, with a relationship between them; that's fine. But when we start to add more units of a service, it knows what to do, because the relationship and the event handling are defined. Add another 10 units of MySQL: we don't need to go and add config manually. Oh, now I've got 10 units, how do I connect those to RabbitMQ? It's all defined within the charm. It means adding 10 units is just "juju add-unit, n equals 10", and off it goes. Boom, boom, boom. I forgot about the fancy slide builds. So, operations. Modeling is fine, and this, by the way, is independent of OpenStack: we model on different platforms, AWS and others. But modeling is only part of the problem. Deployment is only part of the problem; it's operations that matter. If we want fewer people managing our environment, we need to be able to operate more efficiently. That means we need to be able to do some of the hard things in OpenStack. The model gets us to define and build the environment, but there are certain things we need to be able to do as part of our operations. Typically, there are a lot of raw materials that we can put into a charm. There's a lot of expertise out there, for example: people have defined great ways of doing things in Puppet or Chef or Ansible, and we can pull those into a charm. But also, you know, tarballs and zip files and all sorts of different things. Docker images, even.
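The scaling point above is why the event handlers matter. Here's a toy sketch, not real Juju code, of the pattern: when units are added, the framework fires the relation hooks, so the peer reconfigures itself and nobody edits config files by hand.

```python
# Toy sketch of event-driven scaling. Class and method names are
# hypothetical illustrations of the hook pattern, not Juju's API.

class Service:
    def __init__(self, name: str):
        self.name, self.units, self.peers = name, 0, []

    def relate(self, other: "Service"):
        self.peers.append(other)

    def add_units(self, n: int):
        for _ in range(n):
            self.units += 1
            for peer in self.peers:
                peer.on_relation_joined(self)   # fire the event handler

class Rabbit(Service):
    def __init__(self):
        super().__init__("rabbitmq")
        self.known_db_units = 0

    def on_relation_joined(self, db: Service):
        # In a real charm this hook would rewrite connection config.
        self.known_db_units += 1

db, mq = Service("mysql"), Rabbit()
db.relate(mq)
db.add_units(10)            # the "juju add-unit -n 10" of the example
print(mq.known_db_units)    # 10
```

The operator issues one command; the ten resulting relation events do the reconfiguration work that would otherwise be manual.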
And then we can define operations. Take upgrading as an example. Install, of course, or remove, yes. But if we want to upgrade, what are the steps and things we need to do as part of that? Or backup, or benchmark. These things on the right-hand side we define as actions. As part of the charm, you can define an action, and actions are also encoded as hooks. So we can say, you know, back up a database, or clean out a database. We can benchmark it. We can flush caches. We can reset logs. We can do all of those things. It's like an automated operation: if I want to back up my environment, I just run a single Juju command to back it up. Those operations are encapsulated to do a number of different things. We focus on upgrades and updates, for example. This is the charm for Horizon, and you'll see that defined as part of it we have a number of actions. One of those actions, you can see, is openstack-upgrade. To upgrade that environment, we can either click in the fancy GUI and select the upgrade option, or, typically, run the upgrade from the command line. And it means that upgrading from one version of Horizon to a new one becomes very easy. Exactly what happens when I click upgrade is defined in here. And what we really want is to get people, the community, to input on the best way of doing that. We have a lot of people using this in production, some big telcos even, and they are feeding into this in terms of, okay, this is how we upgrade, or this is how we apply security profiles, or this is how we back these things up. That makes for more efficient operations for everybody. And whilst you can share Puppet manifests or Chef recipes and things like that, we maintain a central repository of charms that are vetted and fed into by a wide community. Let me skip through that. So let's look at how we can start to reduce some of those costs. Did I skip a slide there?
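To make the actions idea concrete, here's a toy sketch of the pattern: named operational tasks registered in a dispatch table, so "back up my environment" is one invocation of a shared, vetted procedure. The function names and bodies are hypothetical; real charms define actions declaratively alongside their hook scripts.

```python
# Toy sketch of charm-style "actions": operational tasks encapsulated
# as named, runnable units. Names and behavior are illustrative only.

actions = {}

def action(name):
    """Register a function as a named operational action."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@action("backup")
def backup():
    return "database backed up"

@action("openstack-upgrade")
def openstack_upgrade():
    return "horizon upgraded"

# The operator dispatches by name with a single command, rather than
# following a hand-written runbook.
print(actions["backup"]())
print(actions["openstack-upgrade"]())
```

The value isn't the dispatch table itself; it's that the body of each action encodes the community's agreed best way of performing that operation, so it improves for everyone at once.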
For whatever reason, it's not showing. So, how do we bring these costs down? Well, if we look at a slightly different model: yes, we've still got the server hardware, we've still got the top-of-rack switch, we've got a PDU, but through using this tooling we can actually reduce the number of people. Instead of two, we have one person managing the environment, and that cuts the cost down significantly. In fact, I've cut the storage cost as well, because one of the things we can do, in one of the charm bundles, for example, is that our OpenStack uses a hyper-converged architecture, where we're spreading disk across the Nova compute nodes: we're combining storage with compute. Piston used to do this, if you're familiar with Piston's architecture. That can lead to some efficiency: fewer servers are required, no dedicated storage, so we can cut some of that storage cost out. Of course, you still need the disks, but it reduces some of the capex and some of the opex. And that gets us down towards, well, we've still got the same public cloud costs, but we can chop some of that cost out: one fewer person. But is that going to be enough? I don't know. So, we can take a slightly different approach. There are a number of vendors that offer fully managed services: fully managed on-premise OpenStack. And I think below a certain level this makes real sense. Canonical offers this as a service, and these numbers are based on that. You're still going to need the hardware, the same network environment, the same setup, so your capex is by and large the same. But now we have a fully managed OpenStack environment. Canonical charges $15 per server per day to do this, so you can work out the math over three years. Opex now drops down a chunk more.
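The managed-service arithmetic is simple enough to check: at the quoted $15 per server per day, over the talk's 25-server cloud and three-year horizon, the managed operations cost works out as follows.

```python
# The managed-service math from the talk: $15 per server per day,
# 25 servers, 3 years (ignoring leap days for simplicity).

rate_per_server_per_day = 15
servers, years = 25, 3

managed_opex = rate_per_server_per_day * servers * 365 * years
print(f"managed ops over {years} years: ${managed_opex:,}")  # $410,625
```

Roughly $410,000 over three years for operations, compared with over a million dollars for two in-house engineers, which is what brings the on-premise total into the same ballpark as the public cloud figure.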
So whilst, in this instance, AWS is still going to be slightly cheaper, based on my, or I should say Tom's, back-of-the-napkin maths, it's much closer; we're now in a similar ballpark. And the gentlemen who put their hands up saying they work for telcos probably aren't paying retail on their servers, right? So you could probably get pretty close. And then all the value of running in-house versus in public cloud, data sovereignty and all those things, all those good reasons, starts to really come into play, especially where a lot of those costs are there already anyway. So I think it's pretty close. To try and summarize, and I don't know how I am on time, but to try and summarize: I think OpenStack's operating costs can get closer, and need to get closer, to the operating costs of a public cloud, because it's very hard to defy the laws of economics; workloads will go to the most cost-efficient places. And to do that, we need to treat OpenStack like big software, and big software requires a different model. Using the tools we had, I was going to say 10 years ago, but even five years ago, to model and manage these big, complex environments isn't going to work. Curation and tooling approaches, or even managed services, can help us get a lot closer to that kind of cost equivalence. I think that was it. Yes. So, does anyone have any questions? If you do, you can either come up to the mic and ask, or shout it out and I'll repeat it. No questions? All right, well, thank you very much.