All right, so thank you. This is the final session of the Canonical track. It's been an extremely interesting day, which has been pretty much standing room only the entire day. So thank you for coming along. This is also one of the most interesting sessions of the day. We're talking about architectures. But before we get into the Canonical reference architecture, we wanted to introduce you to one of our customers that's using a reference architecture, an Ubuntu OpenStack reference architecture. And it gives me great pleasure to introduce Kanki-san. Thank you so much for coming from NEC, who's going to share some of the details about the implementation of Ubuntu OpenStack at NEC, some of the tooling they're using, and some of the benefits they're getting. Thank you. OK, may I start? OK, hello, everyone. My name is Takashi Kanki, coming from NEC Corporation in Tokyo, Japan. Today, I would like to talk about our OpenStack-based product and service. OK, go ahead. Look at the picture located at the bottom of this slide. This part describes our technical activities for OpenStack. We call it our technology platform for OpenStack. The left side shows our community activity. Our world-class engineers join the OpenStack community and contribute. And the second one is alliance activities with OpenStack distributors. We have built very close, collaborative relations with the Canonical Ubuntu team and utilize the Ubuntu distributions for our product. NEC has been stepping up these two activities simultaneously. I think we have established an industry top-class OpenStack technology platform by now. This is our advantage. So we utilize this technology platform to leverage our service and product development for our customers' benefit. Take a look at the upper side. This part shows our service and product. We started our cloud computing service this April. We call it NEC Cloud IaaS.
Our team has learned a lot of things through this IaaS business. We had a lot of suggestions from our customers. So we accumulate the knowledge and we utilize it to roll out our on-premise IaaS business. The first one is the NEC Cloud IaaS reference model, which is kind of like a system integration service based on the reference implementation of NEC Cloud IaaS. The second one is the Cloud Platform for IaaS, which is a turnkey appliance for IaaS. That is an abstract of our accomplishments so far. OK, next please. This is the concept of our on-premise IaaS business. We will apply these two different solutions for different markets and different value propositions. We think the reference model should be applied to large enterprise customers, and the turnkey appliance should be applied for small-scale IaaS. OK, next please. I will talk about the features of the NEC Cloud IaaS reference model. It is an on-premise system integration based on the reference implementation in our data center. It's quite impossible to deploy on-premise at the same size as our massive-scale data center. So we will deploy a downsized system of our data center, with a proven configuration and with the same products. It's able to start small for the customer's POC and able to scale out in its production phase. OK, next please. Next, I will talk about the features of the Cloud Platform for IaaS. This is a turnkey appliance IaaS solution based on OpenStack. All components will be integrated in our factory before shipment, so it's able to reduce the lead time for setting up drastically. It's able to start small from a seven-node minimum set and able to scale out easily. OK, that's all of my presentation. If you want to know more about the details of our service and product, please drop in on our NEC booth. Thank you very much for joining us. Thank you so much for sharing that with us.
Good, so given a second time, it's a great pleasure to introduce Mark Shuttleworth to talk through a little of the Ubuntu Cloud, Ubuntu OpenStack reference architecture. Those of you who've been in OpenStack a little while will notice that you only have to go back about 18 months and people were crying out for a reference architecture. What we need is a reference architecture for OpenStack, and you'd see that everywhere. And now reference architectures are like standards: there are so many to choose from, for every shape and size, and the problem you have is which reference architecture is going to be right for me. So rather than present just another reference architecture today, we're presenting the architecture that we think is optimal for all kinds of deployment. Can I skip this? Are we on? Yes, yes, we're time limited anyway. Hello, everybody. There was a great question earlier on in a session on Hadoop, where somebody noticed that Rabbit and Hadoop, I think, were co-located on a common machine and asked why, in that particular case, Rabbit and Hadoop ended up on the same machine. And the interesting thing for me is that architecture is not really at all about the bits themselves. It's about the way those bits are combined. And so one of the very strong principles that drove us early on was to say, well, we want to encapsulate the bits in such a way that we can experiment very quickly and combine those bits in very different ways. And the reason for that is because there really is no single best architecture. When I look across the incredible variety of environments in which OpenStack or any other large distributed piece of software finds itself, you always find yourself in a situation where, for example, you have a test cloud or a test Hadoop or a test Cloud Foundry. You have a staging one and a production one. And so the question of what's the right architecture is a little bit tricky, right?
Because the set of values that drives you to an architecture decision is in some cases extremely dependent on the context: test, staging, production. And it may also be dependent on the workload. So even in the case of Hadoop, you can ask different kinds of questions that drive things in different kinds of ways. In the case of OpenStack, obviously, you can throw completely different workloads at it. So what I really want to do is start with a couple of values and principles that have certainly come to guide our thinking, although one can't be dogmatic about them, right? There's no dogma, there's no ideology. And we have the great privilege of learning from many people. And I have to say, so Kanki-san spoke a little bit about NEC, the engagement that we have with them, and we have learned much from that engagement. And similarly, each different environment teaches us many different things. We do believe that over time, clouds tend to the general. Let me explain what I mean when I say that. By that I mean that if you're successful with a cloud, especially a public cloud, you essentially can predict very accurately the aggregate behavior of that cloud. Because of course, if you're successful, you get every kind of workload. And just as pollsters can predict the outcome of elections, but they can't tell you what a particular person is going to think, right? We can, in the aggregate, for successful large clouds, start to make strong predictions. And a lot of what I say here really applies because our focus is scale, our focus is the largest clouds. Our clients are building, you know, aspire to build and do build very, very large clouds. A lot of what I'm saying here is super relevant if you are building a really large cloud, and you have to figure out how to scale it down if you're working at a smaller scale. It's a little bit like saying, okay, well this is how a gas behaves.
But if you're a little bit cooler than that, you've got liquid; if you're a little bit cooler than that, you're dealing with cubes of ice. And they behave a little bit differently, right? So, a cloud that hums, TM. Like, it's not a trademark, go out there and spread the word. A cloud that hums is the one thing we keep saying to each other. How do we make sure that this will be a cloud that hums, not a cloud that squeaks, not a cloud that howls, right? And what I mean by that is that a nice cloud is like cooking, you know, the sound of a happy pot, right? A cloud that is well designed and is handling its workload hums, right? Everything's doing a little bit of work. And think about it, there's an economic principle to that, right? If something's not doing some work, then why did you buy it, right? And if you haven't sold it, then that's a problem if you're in a public cloud kind of environment. A lot of this comes back to economics. There are a couple of key guidelines there. That's the overarching statement. We want to ask questions about, for example, the consequences of a failure, right? So a lot of people spend a lot of time arguing about HA and architectures for HA and such things. As you'll know, there are a million different ways to put an HA label on something. But the deeper question to ask is to say, okay, what are the kinds of failures I can have? I can lose a node, I can lose a switch, I can lose an AZ, right? Or I can lose a region. Those are the kinds of failures you can have. And the question really is, in each of those scenarios, what's the consequence? And the guiding principle that comes out of here is to really think about the consequence of failure. Now I'll give you an example of that. We often see architectures where you put all the admin stuff on one node. So you've got the Rabbit and the MySQL and all the glue, all the stuff that connects the stuff that happens.
And of course, in an HA cloud, you may have three of those. Now, even in an active/active sense, many of the components that you put in an active/active environment work in the sense that there's a leader, right? And in fact, everything's talking to the leader. Not in the super modern software, but let's be honest, there's quite a lot of not-so-super-modern software that makes up a cloud. And so there is a leader, and when there's a failure, you may well have active/active replicas of the data. All the data's here, all the data's here, it's ready to go, but everything has to be told: stop talking to the thing that's dead and talk to the thing that's live. So even in active/active scenarios, in many cases, there's a period of distress when you have a failure. And imagine that you've put all of those services on a single node and that node goes down. All the things which were leaders are now essentially going through that period of distress. What do they have to do? They have to re-sync, reconnect, re-establish confidence. If you've got an external agent, 12-factor-app style, looking at that thing, saying you're in distress or you're fixed, it's suddenly having to do that for five or six different things at a time. So, simple rule: try to minimize the consequences of a single failure. Here's another guiding principle for us: failure is non-linear. Take, you know, on this machine we had, we probably still have, OpenStack, Juju-deployed OpenStack on the bare metal, and then on top of that, Juju-deployed Hadoop. And what we did is we added Hadoop instances on top of the OpenStack until it started to hum quite loudly. And the interesting thing about it, you know, you take a machine and you add two VMs, great, four VMs, great, eight VMs, and you keep adding, everything's fine, everything grows nice and linearly until it doesn't. And failure is non-linear, right?
In other words, as you add resources, you'll get to a point where something fails. You certainly will. And the key thing to realize is that the first failure is your limit. A lot of people spend time allocating resources, and they'll say, for example, I'll have my network for storage and my network for compute, great. How useful is a cloud, how useful is the compute, when none of the storage is accessible? Not very, right? So if you design for compartmentalized things, typically the thinking that's driving that is old thinking, right? What you've done is you've essentially said, one of these things will fail first and I don't know which one it is, right? And when that's failed, the other things that I provided may have plenty of spare capacity, right? But that spare capacity is kind of useless to me. And again, economics: if you're selling that cloud, great, you paid for all of that spare capacity, and you're never gonna use it, because this part fails first, right? So a cloud that hums is a cloud where kind of everything's working. As you add load to it, you increase the hum everywhere equally, and you try to make sure that if it's gonna break, it all breaks at the same time, which sounds terrible, but it makes sense, right? Don't try to predict the future. We see a lot of architects spending an enormous amount of time trying to design a cloud for a particular set of expectations. And I wonder where they've learned what that's gonna be, because I don't know, you know what I mean? The game is changing so fast, the apps that are coming to the cloud are changing so fast, that especially when you're starting out, it's really difficult to know exactly what you should optimize for. And so this is one of those cases where it helps not to think about it too much in advance, right? You wanna create pools of capacity.
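The "first failure is your limit" point can be put in toy numbers: with partitioned resources, the whole cloud is capped by whichever pool saturates first, while a shared pool only cares about total demand. The figures below are invented purely for illustration.

```python
def partitioned_capacity(pools, demands):
    """Fraction of demand served before the first dedicated pool saturates.

    The tightest pool limits everything that depends on it, so the whole
    system is capped by the worst capacity/demand ratio.
    """
    return min(cap / need for cap, need in zip(pools, demands))

def shared_capacity(pools, demands):
    """Fraction of demand served when all capacity sits in one shared pool."""
    return sum(pools) / sum(demands)

# You guessed 40 units of storage and 60 of compute; real demand turned
# out to be 60 storage and 40 compute.
print(partitioned_capacity([40, 60], [60, 40]))  # storage caps you at ~0.67
print(shared_capacity([40, 60], [60, 40]))       # the shared pool absorbs the skew
```

Same hardware spend, but in the partitioned case a third of the demand goes unserved while paid-for compute sits idle.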
Your success is really gonna come from attracting developers, and if you think about it that way, you're more likely to be successful over time. And then there's this point that successful clouds tend to the general. So it's a slightly tricky thing. You want to build a cloud that's gonna be really successful for the application or the team that you're building that cloud for, but if you're successful, I guarantee you, all of the optimization that you did for that particular workload will become less and less relevant, because those developers will push more and more diverse things to that cloud. So that's some high-level guidance. So where does that leave us? Here's one thing that's driving a lot of our design at the moment: hierarchical storage. How many folks are actively using hierarchical storage? Good man, excellent. So, hierarchical storage. Storage comes in all sorts of shapes, sizes, prices, and performance characteristics, right? SSDs, rotary. And lots of people say, well, I guess I'll need to have a pool of SSD and a pool of rotary. And we're quite used, actually, to thinking about stuff that should be on SSD and stuff that should be on rotary. Now imagine you build a cloud and you say, okay, I'm gonna go to my supplier and I wanna have a conversation with them about supplying me some storage. And you know that the more you buy, obviously, the better the price will be. So there's a big economic story there. So you're trying to think in advance: how do I plan for what I'm gonna buy? Say you buy a bunch of SSD and you buy a bunch of rotary, and then you go out to your app developers and you say, okay, you've got these two kinds of storage, you can get them through Cinder, go ahead and choose SSD or rotary. What you will almost certainly find is that you were wrong about one of them. You guessed a ratio and you'll be wrong. And that means that one of those will be full long before the other is.
The problem there, of course, is that you paid for the stuff that's not full and it's sitting there unused. So we try to invert things. We try to make everything look like SSD. And the way we do that is using a range of Linux capabilities: dm-cache, bcache, and so on. What they allow you to do is essentially stack those block devices in such a way that to the application, to the kernel, to the system, it looks like there's just one big one, the size of your cheap rotary disk. In fact, you've got a little SSD sitting in front of it. And when you write to that big rotary disk, what happens is it first gets written to the SSD, and then it comes back and says, okay, I'm written. That means your application can get on with life. In other words, you've reduced the latency of that write. At that stage the kernel will say, okay, great, we said we were done; we actually have a copy on disk, so if we lost power, we're fine. But I'm just gonna write again to the rotary disk. And that way, hopefully, I've got time to do that, and I've now freed up space in the SSD again. And so what do we do on storage? We tend to say, well, try to estimate, because one can do this pretty accurately, the ratio of SSD to rotary. And then just buy the rotary that you need, knowing that you can keep adding up to a particular kind of ratio. And so you're able to essentially provision storage as you need it, and make it all look fast, right? When people talk about fast storage, nobody ever goes and reads the entire library all at once, right? People are reading a bit of this, a bit of that. And mostly applications slow down when they're trying to write. So: hierarchical storage everywhere. There's another clue in there, which is that everywhere piece. We see a lot of architectures where you have a storage cluster of one form or another, and a compute cluster of one form or another.
And what you're guaranteeing in that environment is that you're almost certainly never talking locally to storage. Now, local storage conversations are many, many times faster. I love Ceph dearly, right? But there's latency, there's laws of physics, right? If we're gonna write to the network, we've got to go out over the network to one of three nodes: write, write, and write again. And only when the last one of those is done, typically, am I gonna be comfortable that that has been written, right? There's latency, it's gonna take time. Local storage is a beautiful thing, right? So again, with hierarchical storage everywhere, what we're saying is, okay, just put a reasonable set of disks into every node, put an SSD in front of it, carve that up with some for Ceph, some for Swift, some for local, right? But all hierarchical, so everything looks fast, right? Your local storage looks as fast as an SSD, but it's as big as a cheap, slow, big rotary disk. And your Ceph, yes, you still have to go over the network to write to your network storage, but at least the writing doesn't take very long, right? It comes back as soon as the SSD says it's written. So this is a really big driver for us. The economics of this are fantastic. dm-cache and bcache are the two to look at; we added bcache support in 14.10, and it's been immediately adopted very, very widely. It's great, clean, and robust code. The performance on NVMe SSDs is unbelievable. You're now up to 900,000 to a million-plus IOPS. And so the acceleration that you get by putting one of those devices in front of 20 terabytes of rotary is fantastic. 20 terabytes of rotary now is cheap. And that SSD may be quite expensive, but it now suddenly looks like 20 terabytes of SSD instead of half a terabyte or one terabyte. So that's one big story. Oh, look, I had a picture. What a very pretty picture. But the interesting thing on this is that you can see, perhaps, that in fact that's a compute node.
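The writeback idea described above, where the application gets its acknowledgement as soon as the SSD copy is durable and the rotary write happens off the critical path, can be sketched in a few lines. This is a toy model, not bcache itself, and the latency numbers are assumptions for illustration, not benchmarks.

```python
# Assumed device latencies, in microseconds (illustrative only).
SSD_WRITE_US = 100
ROTARY_WRITE_US = 8_000

def ack_latency_us(writeback: bool) -> int:
    """Latency the application sees for a single write."""
    if writeback:
        # Writeback caching: acknowledge once the SSD copy is durable;
        # the flush to the rotary disk happens later, off the critical path.
        return SSD_WRITE_US
    # No cache tier: acknowledge only after the slow disk has the data.
    return ROTARY_WRITE_US

def critical_path_us(n_writes: int, writeback: bool) -> int:
    """Total time spent waiting on write acknowledgements."""
    return n_writes * ack_latency_us(writeback)

print(critical_path_us(1_000, writeback=True))   # 100_000 us on the critical path
print(critical_path_us(1_000, writeback=False))  # 8_000_000 us
```

The rotary write still happens either way; the cache tier just moves it off the application's wait path, which is why the same cheap disks suddenly feel like SSD.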
So when I say storage everywhere, I really mean storage everywhere. We are increasingly able to isolate and pin workloads to CPUs and guarantee I/O latencies and so on and so forth. So what's the difference between Ceph on a dedicated node and Ceph running on one of your compute nodes, where you've allocated, and can defend the allocation of, just enough CPU to make sure that it doesn't stutter, right? Okay. Distribute all the work widely. Again, if we're going to hum, then we want to be as distributed as possible. And the controversial part of this is that that includes, potentially, the admin work that's running the cloud, right? What's the difference between a 36-core server which has four cores running MySQL, which happen to be providing database services for a cell, which will never get bigger than a certain amount, right, and a dedicated server which has essentially the same ratio of cores and storage? Very little. The difference is if you've fragmented your device types, right, and you lose that MySQL node, you now have to mentally think, okay, I want to go out and get another one of those. And if they're special, that means you've had to provide a special set of things for that. Whereas if you've said, okay, look, we've got as flat a configuration as possible, we have lots of cores and lots of RAM and hierarchical storage in as many places as possible, then you can grab that resource to recover that service as fast as possible, anywhere. You don't have to go anywhere special to do that. And this is the bit that I think we're gonna see the most evolution on in the next 12 months, right? So we've talked a bit about storage and principles there. We've talked about compute principles. This is the bit where I think everything's gonna change in the next 12 months. People often think about networks in terms of throughput, you know, one gig, 10 gig, 40 gig and so on.
To me, the key thing to be thinking about is latency, because of what so many cloud operations are, whether they're administrative operations or not. Start a node, right? What does that mean? That means a cascading series of HTTP calls and AMQP messages, right? Every single one of those, by the laws of physics, has latency associated with it, and they add up. Launch a hundred nodes and you may be generating tens of thousands of messages, right? And all of the latency of those adds up. So the latency of the network, we think, is far more important than people have necessarily given it credit for. So the intersection of HPC and cloud, I think, is gonna be a huge focus over the next 12 months. I may be wrong about this, but where we've seen low-latency networks, the cloud feels much faster. Right? There's also really interesting generational change happening. We went through a phase of convergence where people sort of took a rack and put it in a box, but it was still essentially 10 GigE with all of the same latency-type properties. I think we're about to see a phase change, essentially, in the network, and that's something to watch really, really closely. If you're in a position to get early access to that, if you're in a position to evaluate that, then I would really urge you to, because my sense is that, you know, again, where we've seen low-latency networks with InfiniBand or other technologies, there's a material impact on the whole performance of the cloud. I mean, imagine Ceph where all of those round trips are now a third or a tenth of the time, right? It really starts to change the dynamic of network storage versus locally connected storage. So, a terribly ugly picture, right? What do you want? The flattest and fattest network with the tightest fabric, in other words, the lowest latency, right?
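The point about latency compounding across cascading control-plane messages is easy to see with back-of-the-envelope arithmetic. The message counts and round-trip times below are assumptions chosen only to show the shape of the effect.

```python
def control_plane_overhead_us(instances: int,
                              msgs_per_instance: int,
                              rtt_us: int) -> int:
    """Total time spent purely on message round trips, assuming the
    worst case where they serialize on the critical path."""
    return instances * msgs_per_instance * rtt_us

# Launching 100 instances, assuming ~100 HTTP/AMQP hops per instance:
slow_fabric = control_plane_overhead_us(100, 100, 500)  # ~500 us RTT fabric
fast_fabric = control_plane_overhead_us(100, 100, 50)   # ~50 us RTT fabric

print(slow_fabric)  # 5_000_000 us: five full seconds of pure network latency
print(fast_fabric)  # 500_000 us: the same operation sheds 4.5 seconds
```

Real deployments overlap many of those calls, so the absolute numbers are pessimistic; the tenfold gap between fabrics is the part that survives any amount of parallelism.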
Again, we see a lot of people spending a lot of time trying to look at their network and say, well, I'm on a spine with this kind of bandwidth and then edges with that kind of bandwidth. Well, how can you predict which VM is talking to which VM, right? If you can, magic, I don't know how to do it, right? And I know that if the cloud is successful, I definitely won't be able to do it, because everything's gonna be everywhere. Now, over time, I think we will see annealing technologies in the OpenStack controllers. By annealing, what I mean is that when somebody asks for an orchestration, an application to be spun up of 50 Hadoop nodes and a bunch of other analytics and blah, blah, blah, what we'll do is we'll spin that up as fast as possible, right? Then we'll look at patterns of traffic. We'll say, what's talking to what here? And we'll look at patterns of isolation, right? What's not supposed to be in the same failure zone, however we've defined that? And then we'll anneal. In other words, we'll say, well, if I just move this part to here, now two of its conversations become lower-latency or local-type conversations, and I haven't broken any rules of isolation or separation. So I think we will see that, but for the moment at least we don't have that, and it's not predictable. So everything's gonna be talking to everything, and you may as well essentially go as flat as possible in your network. In compute, the real question is the RAM-to-core ratio. And then on admin, as I said, the real question is whether you're willing to go as far as distributing admin widely, obviously, but actually interweaving that with compute. Okay, Mark. Thank you. Thank you. So, thank you for that overview, Mark. When we were looking at the architecture, and indeed drawing this out on whiteboards and in other areas, it's very easy to fall into a trap.
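The annealing idea, place first, then migrate workloads so that heavy conversations become local without violating isolation rules, can be sketched as a greedy search. This is purely illustrative; no OpenStack scheduler is claimed to implement this, and all names and numbers are made up.

```python
def cross_host_traffic(placement, traffic):
    """Sum the traffic between VM pairs that live on different hosts."""
    return sum(w for (a, b), w in traffic.items() if placement[a] != placement[b])

def anneal_step(placement, traffic, hosts, anti_affinity):
    """Try moving each VM to each host; keep the best legal improvement."""
    best = dict(placement)
    best_cost = cross_host_traffic(placement, traffic)
    for vm in placement:
        for host in hosts:
            trial = dict(placement, **{vm: host})
            # Skip moves that co-locate VMs that must stay in separate failure zones.
            if any(trial[a] == trial[b] for a, b in anti_affinity):
                continue
            cost = cross_host_traffic(trial, traffic)
            if cost < best_cost:
                best, best_cost = trial, cost
    return best

# Observed traffic: "web" talks to "db" heavily, to "cache" lightly.
traffic = {("web", "db"): 10, ("web", "cache"): 1}
placement = {"web": "h1", "db": "h2", "cache": "h1"}
placement = anneal_step(placement, traffic, ["h1", "h2"], anti_affinity=[])
# The heaviest remote conversation (web <-> db) becomes local.
```

A real annealer would iterate, weigh migration cost against the latency saved, and respect capacity limits; the one-step version just shows the shape of the decision.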
And I'm sure others have done this: drawing a logical architecture and then mapping that to a physical architecture. I have my storage resource, I have my compute resource, I have my controller nodes, and separating those, putting those on different boxes. Certainly we've fallen into that trap before, but logical doesn't need to equal physical, right? Just because you have storage on the diagram as a separate pool doesn't mean it needs to be on its own separate box. And as an example, in the reference architecture we're sharing with you, we put storage on the compute node by design, right? Not necessarily in a dedicated pool. The bits that everybody cares about, of course, in an OpenStack cloud are the key services, the cloud controller services. So that's, you know, obviously Horizon, Ceilometer, the database services, typically MySQL, messaging services, Keystone, all of the Nova services, right? The cloud controller services, plus the messaging, plus the database. When you lose these, then you can have problems, right? This is when problems start. Just to refresh on terminology: a region is, of course, essentially separate across data centers, if you like, completely separate infrastructure apart from the shared Keystone environment for your authentication. Cells allow you to scale out your OpenStack environment by having the messaging (Rabbit) and the cloud controllers located with compute, but using an API cell service, if you like, to interact with that environment, and having multiples of those. And then an availability zone, where essentially your compute resources, and apologies for the diagram, continuing the poor-diagram meme here, but the Nova compute services are essentially sharing the same block and object storage, plus the same cloud controllers. All of you should know this, or probably know this if you've been in OpenStack for a while.
This is standard architecture stuff. I'm just ensuring that we have the same terminology here. Conventional wisdom, right? I'm sure if you've been to reference architecture sessions this week or at other summits, there's a lot of conventional wisdom around how you build a reference architecture, what a typical reference architecture looks like. And often it'll be a load balancer, HAProxy for example, talking to those cloud controllers; you can have two or more multiples of those, with the services on those boxes. Likewise the messaging and database services, set up on the same server with high-availability-type technologies, whether you're using DRBD or Corosync/Pacemaker-type setups to be able to fail over your MySQL environments, even if you're using Galera and other pieces. This is kind of standard motherhood-and-apple-pie reference architecture for OpenStack. All goodness. And then of course, dedicated storage, those storage arrays that are running your block and/or object storage, to be able to manage those control points. That's conventional wisdom. And indeed, there's a lot of goodness in that that we use, that we have implemented at places like NEC. So as an example, we do use the Galera libraries for active/active, right? But we're very cognizant of the fact that those changes are synchronized across the nodes. So a write to that environment is only as fast as the slowest node, right? So, people using Galera here? A few, good, good. It's good practice, absolutely we endorse it. Likewise, in the other areas, RabbitMQ. So we use mirrored queues with RabbitMQ to be able to scale that environment out. It's not true active/active, right? So there can still be some limitations. You don't wanna just keep adding an infinite number of Rabbit nodes. But you know, it's definitely good practice. And likewise MongoDB: if you're using Ceilometer, use MongoDB with replica sets to improve high availability, right? And redundancy.
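The caveat that a Galera write is only as fast as the slowest node falls out of synchronous replication: the commit waits for every node, so its latency is the maximum, not the mean. The node latencies below are made-up numbers for illustration.

```python
def sync_commit_ms(node_latencies_ms):
    """Synchronous (Galera-style) replication: the transaction is not
    committed until the slowest node has acknowledged it."""
    return max(node_latencies_ms)

def local_commit_ms(node_latencies_ms):
    """Contrast: a purely local, unreplicated commit pays only the
    local node's cost (and gives up the durability guarantee)."""
    return node_latencies_ms[0]

# Three nodes, one sitting on a congested link:
nodes = [2.0, 3.0, 25.0]
print(sync_commit_ms(nodes))   # 25.0 ms: the laggard sets the pace
print(local_commit_ms(nodes))  # 2.0 ms
```

This is also why adding more nodes never speeds writes up: every extra node is one more chance to be the slowest.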
MongoDB giving, of course, a great scale-out data technology. Finally, Ceph monitors. If you're using Ceph, and as Mark said, we love Ceph, we would certainly recommend it as part of a reference architecture, then using three of those is good practice, right? We're not gonna tell you to do things differently. But cloud thinking requires the use of shared, pooled resources. So having those islands, having the island of storage, having an island of compute, having islands of cloud controllers, islands of databases matched with RabbitMQ, or with messaging rather, set up in HA configs: we don't think that's the right route to be going. So as Mark was saying, what we want to do is spread our environment across as many servers as possible. So on the compute nodes we're locating MySQL, messaging services, the Ceph block storage and the Swift, spreading it all across that compute environment. And you'll see the CC, because I didn't have enough space, because I'm a very poor artist, to write cloud controllers, right? Of course, we're setting these up in a way, I've got the diagram right, yes, we're setting these up in a way that each process is not going to gobble up the vast available resources of that unit. So, using containers. We love containers. If you've been in our sessions today, it's containers, containers, containers. We did a lot of work with LXC. We announced LXD this morning to be able to provide live migration of containers. Still the best demo of the day. Thank you, Tycho. So the database and messaging services run in LXC containers, where we're managing what resources they take using cgroups and namespaces, right? So we can allocate four cores and 16 GB of RAM to MySQL, and likewise to Rabbit, right? Typically we wouldn't use more than three of each, because that's kind of the optimal unit, but we can still spread the cloud controllers, because they scale extremely well with HAProxy in front, across very many machines.
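The "four cores, 16 GB for MySQL" allocation is exactly the kind of knob cgroups expose. As a sketch of the mechanics (LXC wires this up for you in practice), the helpers below format the values the cgroup v2 `cpu.max` and `memory.max` control files expect; actually writing those files requires root, so this only computes the strings.

```python
def cpu_max(cores: int, period_us: int = 100_000) -> str:
    """cgroup v2 cpu.max value: "<quota> <period>" in microseconds.

    A quota of cores * period allows the group to consume that many
    full CPUs' worth of time per scheduling period.
    """
    return f"{cores * period_us} {period_us}"

def memory_max(gib: int) -> str:
    """cgroup v2 memory.max value: a hard memory limit in bytes."""
    return str(gib * 1024 ** 3)

# The talk's MySQL container: four cores, 16 GiB of RAM.
print(cpu_max(4))      # "400000 100000", i.e. four full cores per period
print(memory_max(16))  # "17179869184", i.e. 16 GiB
```

Bounding each service this way is what makes it safe to interleave admin workloads with compute: MySQL physically cannot gobble up the whole node.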
The smart among you will have noticed: okay, well, if we're only using three, at what point is that MySQL environment gonna start getting hit too hard? Is it gonna become a bottleneck? What about Rabbit? For those of you who saw James Page's talk this morning, wherever James is, talking about some of the challenges scaling to 500 nodes, right, which he did in the performance test: great details, and you should definitely look that talk up if you're interested in that level of scale. So we'd suggest limiting this in cells. Not just suggest: we limit this in cells. Cells are still emerging, but limiting in cells means that we have great control over the amount of resources, the number of nodes, the number of, ultimately, VMs that are gonna be placing demands upon those units. So it makes it very controllable, but very scalable, and creates a cloud that hums, just hums across that cell. Hums, TM. So for networking, again, motherhood and apple pie, we recommend the standard types of network separation. So a service network, public network, I won't read them all out, you can see them. And in fact, hopefully I've got the pretty diagram that shows this, a flat diagram. So separate out your different types of network requirements. But again, don't confuse the logical with the physical, right? These can be VLANs, absolutely. But what we suggest is: stick a Neutron gateway on every host. This is a new capability, a new feature; this is a feature that landed in Juno, I believe. I'm looking for James to nod at me at the back. So this is new, and we are right at the forefront of thinking here about how we can architect an optimal architecture. So putting a gateway on every host means we're removing a bottleneck that previously existed within Neutron, where everything had to go via the one unit. That's why everyone was putting Neutron on the bare metal, on a big box, with lots of CPU, with lots of RAM, because that could be the limiting factor.
James, again, to steal some of James's material from earlier on: he was talking about some of the vast improvements that have been made to Neutron gateways in Juno, the fact that the networking layer got an order of magnitude faster jumping from Icehouse to Juno. So spreading across our pooled resources is the way we believe we can make this successful. So how do you get there? I'd like to ask Tycho and Dan at the back to come up and show you very quickly one of the ways you can get towards creating a scale-out cloud that uses these pooled resources. Yeah, so while we're handling that switchover, I want to do two things. First, I really want this to be a session that's not about Ubuntu and not tied to Juju or MAAS or anything like that. This is really about principles. You should be able to replicate this on any platform. Great to see some Red Hats in the house. Go make the most of it, right? And what we'd really like to do is learn from each other. We learn a tremendous amount from customer engagements, and sometimes these architectures don't turn out to be right, depending on hardware selections or workloads and so on. Also, the point of this exercise, where we will use Juju and MAAS, is actually to make a key point, which is that the way you learn is by being wrong. And the best way to be wrong is to make being wrong cheap, right? If it's going to be cheap and quick to be wrong, then that's okay: you learn something and it didn't cost you much. So the right approach to architecture is always to have a good idea of where you're going, but then be able to iterate really, really quickly, do experiments, bake teams off against each other. At one stage we were having pizza competitions: teams have ideas about what we can do, and as long as we can try them cheaply, whoever gets the best benchmark keeps the pizza.
So if you take that approach, very rapid and very cheap iteration, then you are in a very strong position to end up with the right architecture for your particular use case. How are we doing? So the tool we're going to use here is what we call the unmanaged installer; it's all open source, it's all in Ubuntu. And one of these Orange Boxes, the center one, is basically set up to go, so there's a bunch of little servers in there. The point of the exercise is, as a group, to try some sort of crazy architecture, just to show you what it feels like to be able to say: okay, I've got a bunch of hardware and a bunch of services that I know I need to deploy, because OpenStack won't work until I've got a bunch of Nova and a bunch of Ceph and a bunch of this and that and the next thing. In this exercise you can create whatever architecture you want, right? It could be a stroke of brilliance that teaches us all something, or it could be Darwinianly futile. But the point is to be able to iterate very quickly and try a particular architecture. You guys good to go? All right. So while I was setting up, I know Mark said a lot of things; I actually didn't hear what he said. We spent the last session talking about building big, scalable clouds. That's something we do a lot; we're very good at it, and we're very proud of that. But how did we get there? We got there by building small clouds. And like Mark was saying, to be able to iterate very quickly, we needed ways to get to the big ones, and obviously you can't iterate as quickly on very big clouds. So you all saw today that we launched the OpenStack installer and the Autopilot program, which is our embodiment of the reference architectures Mark has been talking about. So when you want to build big, serious clouds with our reference architecture, that's the way to do it.
When you want to iterate very quickly at small scale, you can do that as well. And as you see, it's as easy as sudo apt-get install openstack. Control-plus. Oh no, it's not. So we've got the openstack package. Now it's as simple as sudo openstack-install. Okay, so this is our small-scale, iterate-very-quickly installer. It's also the path that will bootstrap you to the Landscape installer. So basically it's just asking you for an OpenStack password. Then you have a couple of options. You can do a single-system install. Everyone probably uses DevStack. DevStack is great; we love DevStack on Ubuntu. But DevStack is basically running from tip. What if you want the DevStack experience, but with our supported packages and our tooling? We can do that now. The single-system install will actually create a container on your laptop, on whatever system you're running on. So it's completely safe; it's non-destructive to anything on your system. Within that container it'll install a complete OpenStack. It takes about 10 or 15 minutes, depending on the system you're running on, and you're good to go. The second option is that we actually have the ability to run a multi-install, which will talk to a MAAS. So you'd have a pre-existing MAAS and you can just point at it. If you don't have a pre-existing MAAS and it's all net new for you, it'll give you the option to install MAAS, so you can see one with an existing MAAS and one without. The Landscape path, obviously, we want everyone to give a try. So sudo apt-get install openstack, give the Landscape path a try, and it'll install Landscape for you, publish the URL for you to go to, and Landscape will take over from there. So I'm actually going to kick off the single-system install. The network here is pretty bad, so it's not going to be all that exciting, but you'll see that it's actually creating the container. That's going to take a long time because it's downloading the image.
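Collected as a script, the install path just described looks roughly like this. The PPA name and the openstack-install command are assumptions based on the installer's documentation of the time, so treat this as a sketch and verify before use; the commands are printed rather than executed by default (set DRYRUN=0 to run them for real).

```shell
# Sketch of the install path narrated above; PPA and command names are
# assumptions from the era's installer docs.
DRYRUN=${DRYRUN:-1}
run() {
  # print the command in dry-run mode, execute it otherwise
  if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}
run sudo apt-add-repository -y ppa:cloud-installer/ppa
run sudo apt-get update
run sudo apt-get install -y openstack
run sudo openstack-install    # then pick Single, Multi, or Landscape install
```

Run as-is it only prints the commands, which makes it safe to try on a laptop before committing.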
But just so you get a feel for what that's about, I have a multi-install already set up here. So if you chose the multi-install path, this is one with a pre-existing MAAS; the center Orange Box there actually has a MAAS on it, and that's the MAAS it's connected to. You can see a bunch of nodes there. Node five is the node I was just on, where I was showing you the actual OpenStack installer. Node three is a Juju bootstrap server that we already set up, just because we wanted to get you to the actual experience. Just so you know, too, if you went to our website, these are the instructions for installing what you're seeing here. It's as simple as adding a couple of repos, and you can see there is the sudo apt-get install openstack. So we try to make it as easy as possible, in that it just works. So then if I switch over: what you see here on the left are all the services needed to install a base OpenStack, and on the right are all the systems currently in our MAAS. And basically nothing's assigned right now. So, oh, apologies, hold on a second. Part of my cheat sheet. Okay, I'm not on the right network, apologies. Yeah, so while he's doing this, I can tell you a little bit. One of the things we did is we built this entire install path with our tooling. So even if you try the non-Landscape-branded version, you will still be using Juju, MAAS, all of those things. So if you're interested in, like we say, the single install, building an OpenStack on my laptop using Juju, this installer will do that for you. What else can I say? The multi-install will set up a MAAS for you and do all the mundane things; you have to sync down images from the cloud and all this stuff you have to set up, and our install path will do all that for you. So anyway. What you see here is that by default all the services are unassigned.
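Under the hood, the assignments the installer makes correspond to Juju placement directives. Here is a hypothetical sketch: the machine numbers and exact charm names are assumptions for illustration, and the commands are collected and printed rather than executed, so it is safe to run as-is.

```shell
# Build the list of juju commands the placement screen roughly maps to.
cmds=""
for svc in mysql rabbitmq-server keystone; do
  # core services can live in LXC containers on one machine
  cmds="$cmds
juju deploy $svc --to lxc:1"
done
# neutron-gateway and nova-compute need bare metal (no container support yet)
cmds="$cmds
juju deploy neutron-gateway --to 2
juju deploy nova-compute --to 3"
echo "$cmds"    # pipe these into a shell to actually deploy
```

The point of `--to lxc:1` versus `--to 2` is exactly the container-versus-bare-metal choice the placement screen exposes.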
We have the option to auto-place the services if you want, so hit auto-place. You can see what it did here, I can speak loudly: on node six it auto-assigned, in eight containers, all those core services, and then on node eight it auto-assigned Neutron to bare metal, because Neutron doesn't work in a container today. Same with compute: compute requires either KVM or bare metal. So by default, if you just want it to do it for you, it has very simple logic and it'll choose the best fit. But if you want to play around with it, like Mark said, and you want to iterate, you can clear all the default placements. Then I can go down, for instance, to the OpenStack dashboard, Horizon, and it'll give me the option to go one by one through all the nodes that are available and assign it either as a container or on bare metal. So I'll just pick, for instance, node one and place it as a container. You see it actually went from one side to the other, and it's assigned as a service on node one. So there's lots of flexibility there. I know I don't have much time, so I'm going to clear all the placements again; you'll see they all go back. We're going to do auto-place, which places them all, and, yeah, clear my filter. We'll be done in one sec, and then deploy. Guys, thank you very much. Thank you very much for your patience, for sticking with us. I hope you've had a great day, and I hope some of you will join us at the Musée d'Orsay. There are buses leaving from the Hyatt lobby starting at 6:30 and continuing on, and then buses going on from there to some of the other functions and parties. Thank you very much. I hope this was useful. There's a nice, quick, cheap way to do quick, cheap experiments and learn tons of stuff. Please come and tell us what you find. Cheers.