If you're still coming in, grab a seat and we'll get started. All right. Welcome, everyone. Last mini-conf of the day. It's good, I'm glad we actually got some people to turn up and you haven't all gone for beers yet. Today we'll be chatting about OpenStack operations for engineers, and hopefully you'll get some tips, tricks, and learn a few things from the three of us. We're going to try and make it as interactive and fun as possible. There are three of us, and we're going to have to run through it pretty quick, because we've only got 50 minutes. So that would be myself, my name's Anthony. We've also got Daniel as well. No one can pronounce his surname, so we just call him MasterChef. Actually, I shouldn't say that, people can pronounce his surname; I cannot pronounce his surname, so I call him MasterChef. And we've also got Alex as well, Alex Tesh. Feel free to reach out to us if you have any questions on Twitter or via our email addresses; we're always happy to help out wherever we can. So today we're going to take you through the deploy area, so we're going to talk about some of the installation tools for OpenStack. We'll also have a chat about the orchestration and backup side, so we'll delve into Heat; we've got a couple of live demonstrations on that. And we'll also be talking about workload backups as well. And then from there we'll move into the final section, which is the consume area, and we'll have a chat about Murano. Can't speak today. And also Sahara, so we've got a bit of a Hadoop demonstration as well. Part of that's live, part of it's pre-canned, because of the time that it takes to run. So that's the basic format: 50 minutes to get through, lots of live demonstrations. What could possibly go wrong? Who knows? By the way, this is our manager driving that car. And it is a hire car, just in case you were wondering. So, deploy. Just before we move into deploy, a quick room survey.
Who has actually deployed OpenStack? Just deployed it. How'd it go? Yeah, cool. Yeah, OK, excellent. So we've got lots of people who have had a crack at it. The first time that you did it, did it take you hours? Days, OK, days. Weeks, OK. Haven't made it yet, yeah. OK, but look, this is not uncommon. I was recently presenting at an open source conference in India, and I asked the same question, and not one person put up their hand saying that they'd successfully deployed it. One of the guys in the front row said, how come you haven't got years on that slide? And I was like, come on, OK. The other thing to remember, too: if you are presenting like I was in India, don't use pets versus cattle as the slide. Over there they don't talk about pets and cattle in terms of virtualization and shooting the cattle. Don't do that, all right. Definitely use fine china and paper plates, OK, that's what they prefer over there. So just a little tidbit. I told you you'd learn a lot out of this presentation, so hopefully if you take one thing away, take that one away. I'm going to throw over to Alex now, jumping into the installation tools. Hi, hello, everyone. All right, we're going to do this one a little bit freestyle. So let me start with the demo first, because it's going to take about 10 minutes, and we have about four live demos, so we're still trying to work out whether it's going to fit within the time or not. OK, basically what we're going to show you is PackStack. I don't know if any of you have tried PackStack before. No one? Oh, this is kind of cool. Oh, two of you, all right. So basically PackStack is the one that we use with RDO, or Red Hat OpenStack, right. It's a very simple installation tool. And what we need to do is basically make sure that the repositories are in place. In this case, I am using a Satellite server, running as one of my VMs. I actually have three VMs running at the moment on this notebook.
OK, one of them is the all-in-one VM, which is basically going to run the full OpenStack in a single machine. That single VM is going to be a nested KVM, which is not very convenient for production, but for our purposes it will do fine. And I'm running Satellite basically just to provide the repositories, OK? For you guys, most likely you are going to try it with Fedora or something like that. Basically, you have to make sure that the EPEL repo is in place, and that there is an OpenStack repo, depending on whether you want to use Icehouse or Juno or whatever OpenStack version you want to deploy. OK, so PackStack is basically one single command. What we do is just call packstack --allinone, all right? It's going to ask us for the password, and basically it's going to create SSH keys across all the servers that you want to use for the installation. PackStack is kind of cool. It uses Puppet in the background, and basically what it does is it calls the Puppet manifests and makes sure that everything is deployed across whatever servers you define, OK? Let me go back to the... this one is going to take about 10 minutes. Let me go back to the presentation, and basically we can talk about the rest of the tools that we have for today. The very first one is TripleO. Anthony is going to delve into the details later on, but basically it's the only official OpenStack project that deals with installation today, OK? We call it TripleO because it means OpenStack On OpenStack, OK? So basically you have to build an undercloud, which is a stripped-down OpenStack, and from there it builds the final production cloud, OK? Later we'll go into the details. PackStack is the one that we're running through. I would say that perhaps this is the easiest way to install OpenStack, as you see with PackStack, OK? Basically a single command, with the repos in place, and you can get it running in about 10 minutes. You can start your own testing.
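The all-in-one run Alex describes boils down to a couple of commands. As a sketch only: the repo URL and package name below are from the RDO quickstart of that era and will differ depending on your distro and OpenStack release.

```shell
# Assumed RDO-era repo location; adjust for your release of OpenStack.
sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack
# One command, everything on this host: it prompts for a password, pushes
# SSH keys to the target hosts, then drives the Puppet manifests.
packstack --allinone
```

For a multi-node layout you would instead generate an answer file and point PackStack at a list of hosts, but the all-in-one form is the quickest way to get something to poke at.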
Fuel is maybe the most popular one. It's a tool by Mirantis. Basically, what we do is we just download the ISO; it's free to download if you want to give it a try. We install it on one machine, and from that particular server we can actually manage several OpenStack clouds, OK? It can do bare-metal provisioning, so we just have to boot the servers, OK? And then Fuel will detect these machines, and then basically we just tell it how we want to deploy. OK, some of these tools are enterprise kinds of tools that we can use in production environments, because they build the OpenStack in an HA kind of way, OK? So we will have fully redundant components. TripleO is one of them. Fuel is one of them. PackStack is not, obviously. OK, there are a lot of improvements to be done there. Foreman is one of the improvements that we are making. It's basically UI-based; it's kind of an easier way for the users to see what's going on with the stack, and it will let you manage it as well. It's also based on Puppet. And Spinal Stack is quite an interesting one. It's from a company called eNovance, a French company that recently got acquired by Red Hat. And eNovance was a very particular company in terms of OpenStack, because their professional services are very strong, OK? So they have this tool. It's basically script-based; it makes use of Ansible and Puppet to carry on with the installation. It's quite flexible, but it's not as simple to use. And Crowbar is maybe one of the first tools that came about with OpenStack. It's a collaboration between Dell and SUSE. OK, I think you guys heard about Juju before in another session, so we are not going to delve into that one. OK, for the PackStack installation demo, basically, we covered this one a bit. We make sure that the repos are in place, and when we do an all-in-one, this guy is actually going to make sure that we have what we call the answer file in place. So let me see if I can show you that file. OK, here we have the shell.
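The answer file Alex shows here drives everything PackStack installs; the enable-Heat-and-rerun workflow he walks through next can be sketched like this. The file name matches PackStack's convention, but the stand-in file created below is purely for illustration.

```shell
# Stand-in for the answer file PackStack generated on the first run.
ANSWERS=packstack-answers.txt
echo 'CONFIG_HEAT_INSTALL=n' > "$ANSWERS"

# Flip the Heat switch from n to y; on the re-run, Puppet skips whatever
# is already in place and only converges the new bits.
sed -i 's/^CONFIG_HEAT_INSTALL=n$/CONFIG_HEAT_INSTALL=y/' "$ANSWERS"
grep CONFIG_HEAT_INSTALL "$ANSWERS"

# then re-run against a real host:
# packstack --answer-file="$ANSWERS"
```

The same pattern works for any component in the file: flip the CONFIG_*_INSTALL key and run PackStack again.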
OK, so basically it's creating this file, and from this file we can set all the components that we want to install. So in a typical all-in-one installation, basically almost everything is installed excluding Heat. If you want to install Heat, you just look for Heat in this file, change the value to y, and basically we can run PackStack again. It's going to go through the manifests, and whatever is already in place the manifests are not going to bother with, right? Puppet style. And then it will go through the Heat installation. So we just have to run it twice. OK, what we do now is let's check how many manifests are running. It's still going through Cinder. Right now, perhaps five more minutes. Let me switch back to the actual deck, and let's cover a little bit about Ironic and TripleO. Cool, thanks, Alex. So look, most of the installation tools that Alex has spoken about are utilizing Puppet and/or Ansible as the primary drivers in there. The interesting thing to me is that I have a lot of friends who have used Puppet in the past; I'm one of them as well. But if I can just see a show of hands, who's used Puppet in anger? Let's say, OK, so it's probably barely 50%. That's pretty common from what I'm seeing out there in the industry. There are a lot of operations groups and even developers who are not familiar with Puppet. So if you do want to move in and start playing with OpenStack, one of the easier ways to give it a try is to use one of the OpenStack distributions that comes with TripleO. And you can utilize TripleO because you don't have to go and learn Puppet. It's OpenStack on OpenStack. It makes it a lot easier for you to do the installation. So with one of the distributions that I've used, which leverages TripleO, you can have an enterprise-grade community edition of OpenStack up and running in under 40 minutes. It basically utilizes about six or seven commands to get it up and running. So it is quite easy.
You can be into the Horizon portal and off and running. The actual project itself utilizes Ironic and Nova to do the deployment, and then from there it leverages Heat. So they're the three main projects that it reuses within OpenStack. OpenStack is obviously made up of a number of projects; they're the three main ones that it leverages. It pretty much just stands up a really, really small OpenStack installation, which then goes off and installs the rest. Quite ingenious, quite tricky, but it makes it a lot easier for you to leverage that. So just to cover off Ironic, because it is one of the main projects which TripleO is leveraging: think of it as very similar to Nova, so in terms of your compute, but in this particular case it's primarily for bare metal, OK? Which is what you're going to require to do the installation using TripleO on OpenStack. Now, my understanding is that Ironic is expected to be integrated into the new Kilo version of OpenStack, which is coming soon. So it will be integrated as part of that; at the moment it's not, currently, OK? Some of the main use cases: from my colleagues out there, the two main use cases that I've been hearing people leverage Ironic for are mainly databases. So I've got some colleagues who are doing a lot of data warehousing. They're leveraging Ironic in this particular case really because of the size of the databases and the power they're requiring; because they're using a lot of cubes within a data warehousing solution, they're leveraging Ironic because it gives them that flexibility and it also gives them the compute power that they need. So a big use case for anyone who's running the larger databases. The guys in the Hadoop world are also leveraging Ironic quite a lot too.
They obviously need the power, especially if you're using things like MapReduce and stuff like that. Ironic is definitely a good use case for that. The third area, which is being leveraged by a number of customers: one of the big airline companies in Australia recently leveraged Ironic to do some high-performance computing, and they were leveraging that on an OpenStack platform as well. Just do a bit of a search on Google and you'll find it quite an interesting read and a good use case. So the deployment steps are pretty simple for Ironic. Basically, you have to register the node. From there, you have to create an image, and that image you store in Glance; it's not much different to a virtualized one. And then it network-boots the physical server. So, the three main steps of TripleO, coming back to TripleO: first of all, it creates the seed. From that seed, it then goes off and deploys the undercloud. The seed is basically the smallest installation of OpenStack that can be done. From there, it goes off, creates the undercloud, and deploys the overcloud. The seed and the undercloud still remain running as well, so you will need to keep them up. And if the undercloud, for example, does go down, the seed can bring it back up again too. So there's a little bit of HA there, but it currently doesn't work in reverse. OK, how are we going with ours? All right, let's check how that one is going. All good. With TripleO, like I was saying before, it literally is six command lines; it takes less than 40 minutes to execute and get the Horizon portal up and running. There are a number of YouTube videos out there that you can go and have a look at to get some tips and tricks on where to download a distribution which is capable of doing that, and also the installation steps and the commands that you need to know to be able to run it.
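As a quick aside, the Ironic deployment steps just described (register the node, put an image into Glance, network-boot) look roughly like this in the CLI of that era. Every value here is a made-up placeholder, so treat it as indicative only.

```shell
# Register a bare-metal node with Ironic (Juno-era client syntax);
# the IPMI address and credentials are placeholders.
ironic node-create -d pxe_ipmitool \
    -i ipmi_address=10.0.0.50 \
    -i ipmi_username=admin \
    -i ipmi_password=secret

# Store the deploy image in Glance, much as you would for a VM image.
glance image-create --name baremetal-ubuntu --disk-format qcow2 \
    --container-format bare --file ubuntu.qcow2

# Nova then schedules onto the node and Ironic network-boots the
# physical server (the baremetal flavor is an assumption).
nova boot --flavor baremetal --image baremetal-ubuntu my-physical-server
```

The point is that, API-wise, provisioning metal looks just like booting an instance; only the driver details underneath change.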
The one that I've done up here, which is running in the demo video, is just running on an Ubuntu 14.04 server. So you can just Google TripleO and my name, and you'll find a video that shows you all the links and all the information you need to be able to run that as well. All right, how are we going with PackStack? Almost done. All right, let's check. So before we jump into orchestration... oh, you want me to cover orchestration. Back to the demo. OK, let's see how PackStack is going now. We need a hotter oven, it's not ready yet, is it? We're not up yet. Do you want me to cover orchestration? Yeah, let's go with the orchestration side. Let me just get this screen back. All right. Oh, are we up with PackStack? Go ahead. OK. Before we go back to see PackStack finish up with working software, the next section we're going to go into is really around orchestration. Orchestration, to me, is really about automation. Many of the groups and the technology leads that I talk to are sometimes hesitant to get into and start working with automation. Maybe because they see videos like this, I don't know. I mean, what could possibly go wrong? But the fact is that human error is literally the main cause of deployment failure out there. So the more that you can automate, the better reliability and repeatability you're going to have within your applications, within what you're deploying, and within the environments that you're setting up. They're repeatable, which makes it very, very easy for us to push out environments in a continuous delivery way. And I think this video really resonates with everyone, because we've all been there at some particular stage. We've all been there when everything's fallen to pieces and you thought to yourself, why didn't I write a script to do that? Because it was 2 o'clock in the morning and I typed in the wrong command. So we'll move into orchestration.
Are we right to jump back now? Yep. Yep, all good. All right, so here's the rest of the PackStack demo. All right, just in time. So basically here, PackStack just finished, right? So there are a few things that hopefully we should be able to see. Let me go back to my shell. Basically, from the nova.conf file, we should be able to see that this one is actually QEMU virtualization, which is not really what you'd want in production, right? PackStack will configure for you what we call the bridge interfaces. So this br-ex that we have here is basically the one that is going to connect to our external network. So there are a few more steps that you need to do in order to have a working configuration for OpenStack. Basically, using OVS, the Open vSwitch, we need to attach whatever interface is going to our external network into our bridge, all right? And after that, basically, we're going to get a working OpenStack configuration. Let me check the virtual machines that we have at the moment, because for the next demo, which is the orchestration one, we're going to go into Heat. For Heat, I will actually need a little bit more memory in this machine. So let me check back. OK, let's destroy one of these guys; we don't need it any more. I think one is gone. All right. What we will do now is I'm going to use another VM, which is only the controller, the OpenStack controller. My compute node is going to run on the actual notebook, OK? So let's log into this one. I'm going to use the admin account. And what I want to show you from here is basically how to run a CFN template. Let's go to the network topology. Let's get this one running. All right, basically, we are kicking off one of the templates that we have. Let's try to run some checks. Just to make sure that this guy is actually running in the background, you can see how the networks are being deployed.
Initially, we only have the external network, which is the one we get out of the box after the PackStack configuration. So now our script has created a database network, and we are creating a DMZ as well. And basically, what we're doing is creating three instances, OK? Two of the instances are going to be attached to the DMZ; they are going to be load-balanced, ideally. And then our database, running Oracle Express Edition, is going to be attached to the DB segment, OK? So we're going to assign floating IPs to these two guys, and basically the database is going to be isolated from the outside, so it can only be reached through the DB tier, right? So we should be able to see the instances provisioning here. And this demo, again, is going to run for around 10 minutes. So let me put this one in the background so at least we can see when it's finished, and let's go back to the presentation. OK, this is basically what we call a CFN template, OK? It's AWS-compatible, for those of you who are familiar with Amazon Web Services. Basically, this is what they call a CloudFormation automation script, right? It's based on JSON. This is usable in OpenStack with some minor modifications; we can actually get this kind of JSON script running in an orchestrated full-cloud solution. OK, so a bit more information on Heat. By the way, do you know why we call it Heat? No? No idea? Because we need heat to build the cloud, actually, so that was the whole idea behind it. All right, so the project actually got started in Havana. In Havana, basically, we were using the CFN templates, pure JSON. It got more interesting in Icehouse. In Icehouse, basically, they created what we call Heat Orchestration Templates, HOT. And the whole idea was to basically make it more legible for the developers and easier to program, right? So in this case, we're using YAML. We have something called autoscaling that works pretty well in Heat.
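To make the HOT format and the autoscaling wiring concrete, here is a hedged sketch of a small template; the heat_template_version is the Icehouse-era one, and the image, flavor, threshold, and stack names are all made-up placeholders.

```shell
# Write a minimal HOT template with an autoscaling group, a scale-up
# policy, and the Ceilometer alarm that triggers it.
cat > scaling.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Web tier that grows when CPU runs hot
resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: fedora-20
          flavor: m1.small
  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: web_group}
      scaling_adjustment: 1
      cooldown: 60
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_up, alarm_url]}
EOF

# then, against a real cloud:
# heat stack-create -f scaling.yaml autoscale-demo
```

A matching scale-down policy plus a low-CPU alarm gives you the terminate-on-idle behaviour described next; it is the same pattern with the adjustment and comparison reversed.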
And there is quite a lot of collaboration going on between the Ceilometer developers and the Heat developers across these two different projects. Because Ceilometer, basically, will keep track of the utilization that you have in your instances, right? So whatever CPU and memory utilization you're running, we can take advantage of this information in Heat. And basically, we create autoscaling groups and we define thresholds there, OK? So based on these thresholds, once you hit a certain CPU utilization, Heat will trigger the autoscaling, and then basically we can provision additional instances for the web tier or whatever tier we have inside, OK? It works the other way around as well: once the utilization drops below the lower threshold, the instance will basically be terminated, OK? So it's quite cool in that sense. Interesting point to note: after Nova and Neutron, Heat is the hot project at the moment, so there are a lot of commits going in for Heat, all right? Let's talk a little bit about workload backups, OK? And when I cover this portion, I'm basically coming from the perspective that we are using pure OpenStack, OK? We are not using additional tools. So there are quite a few ways in which we can do this. Perhaps the first case is when the users want to back up the actual instances, OK? They say, OK, yes, I know that this is supposed to be a cattle kind of workload, but what happens if, for any reason, I really want to make sure that I back up this particular instance? I don't want to lose the configuration that we have inside. So for this first case, we can run through these steps. And basically, what we are doing is cloning the image in a way that gets it into Glance, and we can download it onto any machine and basically just move it around, copy it to another OpenStack cloud, and get it running again, OK? Interesting fact: once we do this, it's going to be a raw image.
So if the original one was a qcow2, then basically you are going to get the full-size image. You will need to use qemu-img to basically reduce it back to qcow2. Not a big deal. OK, so that's perhaps the first case. For the second one, Cinder-related, we are going to do what we call the volume-based approach. Most installations today still use LVM as a backend for Cinder, OK? So basically, we configure what we call the cinder-volumes VG, we attach the LUNs to this particular volume group, and whatever volumes we want to create on Cinder are going to live inside this VG. So what we do is we connect to the Cinder host and basically just take a snapshot of that particular volume, OK? We can then use kpartx to recognize the partitions of the snapshot, we just mount it, and we can back up the specific data inside. Interesting fact: the volume can still be attached to the instance at the time. Of course, if the workload is dynamic, you may want to quiesce whatever is running inside, just to make sure that it is consistent. OK, this one is widely used at the moment. Case number three: we're going to use Swift. This one is pretty useful because, as you can see in the blue square, this will back up only the actual data, OK? It's not the whole raw volume that is being backed up. OK, we follow these steps: we have to create a container, and inside the container we basically define the backup coming from the Cinder volume that we have, OK? Cinder snapshot is the next one. This one is quite simple, but the drawback is that it's a full raw backup. So whatever size you have in the volume, even if you're using 10% of the capacity, you are going to take the full size for the actual backup. All of these cases, of course, can get a lot more interesting if you're using Ceph, OK, Ceph.
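Command-wise, the four backup cases above map roughly to the following. All instance names, volume IDs, and sizes are placeholders, and the exact flags shifted between releases, so take this as a sketch rather than a recipe.

```shell
# Case 1: snapshot an instance into Glance, pull it down, shrink raw to qcow2.
nova image-create my-instance my-instance-backup
glance image-download --file backup.raw my-instance-backup
qemu-img convert -O qcow2 backup.raw backup.qcow2

# Case 2: LVM snapshot on the Cinder host, expose partitions with kpartx,
# then mount and copy out the data you care about.
lvcreate --snapshot --size 1G --name vol-snap /dev/cinder-volumes/volume-ID
kpartx -av /dev/cinder-volumes/vol-snap

# Case 3: back the volume's actual data up into a Swift container.
cinder backup-create --container my-backups VOLUME-ID

# Case 4: a plain Cinder snapshot (full raw size, whatever you actually use).
cinder snapshot-create --display-name vol-snap VOLUME-ID
```

The Swift-backed case is the only one that stores just the used blocks; the others carry the full raw size until you shrink or thin them yourself.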
Ceph has its own way of doing backups, and things can get pretty interesting and complicated as well, so we're not going to cover that portion for now. And a lot of customers also come to us and ask, how about enterprise backups? Whatever backups we have been running in the past, is it possible to get them integrated with the instance? There is no reason why you shouldn't, right? It's only that you have to spend a little bit more money to get the backup agent or network running with the workloads, but it's workable as well. All right, let's see how the CFN demo is going. All right, so it seems that all the instances are running at the moment, OK? So we got a floating IP for web server one. I'm pretty sure that the second one should get it as well. Let's do a refresh. Yeah, it's there as well. And let's see what's going on with the actual database server, because that's the one that usually takes longer. Let's try to open a console. All right, it seems to be running. Oracle is running. Let's do a quick check, OK? So in this case, I created a particular schema, and this is supposed to be kind of a real estate database. So let me try to remember which tables we configured in this thing. There is one particular table for the zones. I don't know if some of you are familiar with Singapore districts, because that's where we put this demo together. So let's run a quick query here on that table, and basically it will give you all the districts in Singapore. So hopefully, we should be able to pull this information from the Tomcat servers, which are the web instances that we have running here. Let's take one of the floating IPs as an example, and let's try to access it from here, on port 8080. All right, it seems to be there. So it's basically loading the zone information from the database. OK, this is a typical use case. Some customers ask us today, OK, we have this particular workload running.
It's not really meant for cloud, but how about if we wanted to port it to OpenStack? Is it possible? Yes, it's possible. Would it be an ideal workload for OpenStack? Maybe not. OK, then they come with the question: how do I back up this kind of stuff? So that's where the particular use cases that we saw before come in very useful, right? So basically, what we're doing in this case is we have an actual Cinder volume, and our database is sitting on it. OK, so whatever issue we have with the database server, basically, we just terminate the instance, spawn a new one, and then we can attach our data and the archive logs and just roll the database forward to a consistent state. This is quite a cool case. All right, let's delve now into the consumption portion, and we're going to see two more use cases. One of them is Murano, which Daniel is going to cover, and finally we're going to delve into Sahara, which is Hadoop as a service. Thanks for that, Alex. So you can switch over to the Sahara demo while we're talking about consume. So as Anthony introduced, we've covered off deploy, orchestrate, and backup, and now we're on to consume. I always like to think that I'm the pretty one of the group, so I get the pretty section. So we'll look at more of the eye-candy interfaces that are available out there, project-wise. So, Murano: this is a project started by Mirantis that provides a marketplace, an application catalog for OpenStack, Cloud Foundry, PaaS-based offerings, et cetera. The idea here is to allow app developers and admins to publish various cloud-ready apps, and then effectively allow users to deploy those apps using that push-button approach, utilizing some of the underlying orchestration mechanisms, such as the Heat that Alex has demonstrated, and some of that other OpenStack goodness. Now, Murano also has that tie-in to metering and billing around Ceilometer.
So if you were here for the talk earlier covering off Ceilometer, that's what looks at and triggers against certain types of events. Now, it's an early-stage project as well, so it might be something, and we'll go through a bit of a demo here, that you might be interested in looking into and getting involved in. So this is just a quick screenshot of Murano in action within the Horizon portal, the Horizon portal for OpenStack, and some of the sample apps that have been stocked in here. The other screenshot we have here is an example of, let's say, the HP Vertica analytics platform, one of the apps that we have out there as a community edition that's available for people to download and kick the tires on, published through Murano. Now, swapping over to what we have here: the interface that you're seeing is the HP Helion Development Platform, a PaaS offering based off of Cloud Foundry. And within this PaaS offering, what we have is the Murano marketplace driving the ability for users to deploy from sample applications that are stocked by, again, your developers, your cloud admins, et cetera, for users to consume. Now, as a PaaS offering, this is catering for the development demographic, if you want to say that: taking applications, defining them via a YAML file, having them sourced via internal or external repositories, published through, and then allowing this sort of self-service consumption approach. So Murano is driving the marketplace so that, again, you get that push-and-go user interface to get on with work. Now, another element of Murano in action here is looking at an existing app that's been deployed, drilling into the WordPress app that's available. In this case, what we have here is just an idea of a simple auto-scaling GUI entry to allow users to drop in and define a CPU threshold to trigger instance growth.
So in this case, we're starting with a singular instance, a one-instance minimum. Once we drag and drop the sliders to a two- or four-instance minimum, the PaaS platform starts to spin up additional instances so that, again, users can get on with their work. In this case it's scaling out an application that can cope with additional instances being spun up to back the services it's responsible for. So again, Murano-driven, a nice GUI, marketplace-driven experience, to give users, and give everyone here, an idea of that push-button consumption experience. So that was... was there anything else we wanted to add to that, Anthony? I think, yeah, I think we're going for a different one. So are we ready with that Sahara demo and all that goodness as well, Alex? So we'll flip back to Alex; he's been doing a great job on our demos here, our demo death march. So we'll be talking to the Sahara project. We'll just flip back. All right, for Sahara, I'm going to use Icehouse in this case, OK? Sahara is not really officially incubated yet. How do I get into this screen? OK, all right. It's not officially incubated, so if you want to give it a try, there are not so many distros where you can install it out of the box. Basically, you can try it here on this OpenStack if you want to get a feel for Sahara, or if not, Mirantis will be perhaps the easier one to get started with. OK, so there are quite a few things that we need to do here. This demo, by the way, runs in about 10 minutes as well. I think we are doing pretty well on time. The first thing is basically, we need to take note of the plugins that we have. OK, so for this demo, I'm going to use the vanilla Apache Hadoop plugin, OK? For those of you who are Hortonworks fans, that plugin is there as well, so you can give it a try, although this one is perhaps the less popular one, all right? We will need to get an actual Glance image.
We will need to load it and configure what we call an image registry. In this case, I'm using Fedora, OK? And basically what we do is we associate tags with it. So I'm going to use the vanilla Hadoop plugin, and I'm going to use the 2.3 version, which is at this moment the latest one. All right, after we have the registry configured, we go to the node group templates, OK? And basically here we have to configure the master and the workers, for those of you familiar with Hadoop, all right? We are using vanilla as well. We have to associate the processes that each of the nodes is going to run. For the master, we have the NameNode, the YARN ResourceManager, and the history server. The workers only run the DataNode and the NodeManager, all right? Once we have this going, we have to define the actual shape of the cluster, how many nodes are going to be inside the actual cluster. So in this case, I'm going to configure two nodes, for time reasons, OK? Let's hope it goes fast. Once we have this, we are ready to basically configure the cluster. Configuring the cluster is as easy as basically just saying the version that we want to use, OK, vanilla Apache Hadoop. Let's give it a name; Hadoop, this one sounds good. You can assign a keypair; I already have one pre-created. And all the Hadoop nodes are going to be attached to our private network, OK? Floating IPs will be associated, so later on you can try MapReduce or any other analytics feature. And basically what we do is we kick it off, all right? So this one will take about five to eight minutes, maybe. Let me go back to the deck. OK, Sahara was previously called Savanna, for those of you who are not familiar, quite similar to how Neutron used to be called Quantum in Grizzly. They had to change the name for trademark reasons; we don't want to get sued. OK, it's open source, native to OpenStack, right? And it's a very useful use case for those people doing data analytics today.
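For reference, the same flow Alex drives through Horizon can be done from the CLI of that era (python-saharaclient). The exact flags varied by release and the image ID, flavor, and process names below are assumptions, so treat this as indicative only.

```shell
# Register the Glance image for the vanilla plugin and tag it
# (image ID and login user are placeholders):
# sahara image-register --id IMAGE-ID --username fedora
# sahara image-add-tag --id IMAGE-ID --tag vanilla
# sahara image-add-tag --id IMAGE-ID --tag 2.3

# Node group templates: master runs the NameNode, ResourceManager and
# history server; workers run only the DataNode and NodeManager.
cat > master.json <<'EOF'
{"name": "master", "plugin_name": "vanilla", "hadoop_version": "2.3.0",
 "flavor_id": "2",
 "node_processes": ["namenode", "resourcemanager", "historyserver"]}
EOF
cat > worker.json <<'EOF'
{"name": "worker", "plugin_name": "vanilla", "hadoop_version": "2.3.0",
 "flavor_id": "2",
 "node_processes": ["datanode", "nodemanager"]}
EOF
# sahara node-group-template-create --json master.json
# sahara node-group-template-create --json worker.json
# then a cluster template referencing both, and a cluster create from it.
```

The cluster template ties the node groups together with counts (one master, N workers), which is also where the manual scale-up and scale-down shown later happens.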
Let me do something now. Actually, we have some videos here with the actual demo, okay, just in case the live one doesn't work for some reason. So basically what we're going to do here is go through all the steps that I showed you just now. Okay, we have the different flavors. We're going to choose the Vanilla flavor in this particular case. We've got the registry loaded, okay? We could just as well have been using Ubuntu or RHEL; they're also supported. Okay, it's not really a concern. Configuring the workers and the master, okay? We went through this one. For the cluster template, basically I am using exactly the same Icehouse environment that I loaded just now, okay? So what I am doing here in this demo is basically cutting the provisioning time, okay, which takes about a minute, so we can actually see the end result. All right, so once we get it running, we can check the actual configuration from the network topology. We should be able to see the two instances kicking in, and from Nova as well. If we go back to Horizon, we should be able to see the instances provisioning. All right? Okay, our topology shows that the two instances are already attached to the private segment, okay? And basically from the project side, if we check the instances, they should be provisioning at the moment. Oh, it's actually active. All right. Okay, let me go to the next one. Okay, in the next one, basically you will see that this one is already running. We can go to the cluster details. And from here, we can see the actual capacity that we have inside the Hadoop file system (HDFS), right? So at this moment, I only have one worker and the capacity is about 18 gigs, okay? And from here, we can see that we only have one instance for the workers. It's possible to scale it. It's not an auto-scaling feature yet; basically, we have to do manual scaling. From here, you can see the jobs.
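The capacity figure on the cluster details page scales with the worker count, since each worker's datanode contributes its local disk to HDFS. A back-of-the-envelope sketch (the roughly 18.5 GB per worker is just what this demo environment happens to provide, not a Sahara default):

```python
# Rough HDFS capacity estimate: each worker's datanode contributes
# its local disk to the cluster's total raw capacity.
PER_WORKER_GB = 18.5  # approximate per-worker capacity seen in this demo

def hdfs_capacity_gb(worker_count: int) -> float:
    """Estimated raw HDFS capacity for a given number of workers."""
    return worker_count * PER_WORKER_GB

print(hdfs_capacity_gb(1))  # ~18.5 GB with one worker
print(hdfs_capacity_gb(2))  # ~37 GB after scaling to two workers
```

This is why, when a second worker is added, the reported capacity roughly doubles from about 18 gigs to about 37 gigs.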
If you want to do MapReduce, from that particular screen you should be able to see what's going on. So let's give the scaling a try in the demo. From here, we just select that we want to add one additional node to the workers, okay? And it's going to add the additional instance. We should be able to see it from the network topology. There it is. We have the workers. And if we go to the next portion, assuming that everything is complete, once more we can check the cluster details. The capacity should have roughly doubled, so now we have 37 gigs. And we can see that we have two nodes running as workers, okay? So the scaling part is working just fine. We can scale down as well. Basically, we just go back to the cluster, reduce one of the workers, and kick it off. All right, so what I will do just now is get into the wrap-up. We have some links that could be of interest to you guys. All right, well, we just want to do the last part of the demo in a moment. Yeah, all right, with the links. So we're basically finished. I hope what you got out of today was a few tips and tricks on some of the more interesting projects that are out there in and around OpenStack: some of the more interesting pieces that you can leverage to help you deploy faster, orchestrate quicker, and take advantage of some of the larger Hadoop-type solutions, or at least have a play with them. You can also reach out to us. We can hook you up with free cloud accounts and show you how to stand up any of the development platforms, or even test some of the Hadoop clustering and things like that too. Or we can also give you the links so you can go off, do some reading, and then install it and play with it as well. So I hope you found it interesting. We tried to make it as fun as possible. If that is possible, I'm not sure, but anyway.
Yeah, at 5.30 on day two. So thank you all for your time, and enjoy the rest of the conference. Thank you. [Audience question] Is there a reason you didn't use cinder backup-create in any of the volume backup methods? So the question is, is there a reason we didn't use cinder backup-create to do backups? No, actually, that use case is fine as well. I mean, guys, what I was trying to do is basically run through a few of the use cases that are possible. That doesn't mean it's the only way. Actually, there are more ways to use OpenStack to do your backups, and if you want to use that particular one, it's fine as well. Thanks, guys. Sure. Sure.