Good afternoon, everyone. Presenting with me today is Alex Tesh, and my name is Anthony Rees. Today we're going to take you through some of the use cases that we built for a customer of ours based in Australia, and some of the enhancements that we did utilizing OpenStack with Ansible. Most of the use cases that we'll be going through today are Neutron-based. So hands up, anyone in the room who has had a customer or even a work colleague come up to them and say, hey, I've been reading about this Gartner bimodal IT, and we really want to do something about it. We've got all these legacy systems, and now we also want to start using code to transform our business and to leverage OpenStack in some particular way. Has anyone had a customer like that? There's a couple of hands going up around the room. Well, this exact same thing happened to us. I'll admit, in about September of last year, I looked at my colleague Alex, and Alex looked at me, and we went, what's this bimodal IT thing? And from there, a set of use cases developed, and we went on a bit of a journey of learning. Hopefully, what we'll be able to do today is pass some of that learning on to you. Not only will we talk about what worked really well, we're also going to highlight a number of the distinct challenges around working with some of the examples that we've created: some areas where the solution is quite mature, and other areas where it isn't yet. And we're going to call out, hopefully, pretty much all of the pitfalls that we came across and some of the issues that you'll come across if you're trying to do similar use cases as well. So this afternoon, we'll start off with load balancer as a service. We'll talk you through what the customer was trying to achieve there and go straight into a live demo; everything that we're doing today is a live demonstration. From there, we'll move into firewall as a service and talk you through what we were trying to achieve out of that. Next is VPN as a service with another live demonstration, where we'll actually be joining two OpenStack clouds together, and we'll also talk a little bit about backend as a service in that particular area too. The last example, with a live demonstration, will be bare metal: bare metal provisioning as a service, utilizing Ansible for that. We'll take you through the different playbooks and how it works, and we'll kick that off. We probably won't have time for that to finish because it does take a little while to run, but we're happy to take questions afterwards once we get through those. So, four live demonstrations that we'll be moving through this afternoon. What could possibly go wrong? I think we're crazy to try this, but anyway. All right, load balancer as a service. Should we get started on it? OK, load balancer as a service. As I alluded to before, this enterprise customer of ours already has a large set of legacy systems. They've got in excess of around about 500 to 600 disparate applications running in legacy environments, and they've been playing around with AWS for quite some time. But they're in the financial and banking area, so of course they've got a couple of issues around data sovereignty and also the ability to meet the regulatory requirements of the country.
So they wanted to be able to test out auto-scaling via thresholds, and they also wanted to be able to support a number of different load balancers as well. So what we did was put together a framework for them to utilize. This particular framework is centered around LBaaS v2 in this case, but we'll talk you through some of the challenges that we had around that, especially as we're going through the demo, to get that to proactively scale as well. They wanted to integrate that with auto-scaling too, and all in all, they wanted to be able to control everything by code. So they wanted to use load balancers similar to the ones they were using in their legacy environment and do exactly the same thing within their new mode 2 area, as they would call it from Gartner. So with that, I think we'll jump across to our first demo. Thank you, Anthony. Yes, what I will do first is kick off the demo, and then we'll go into the challenges that we faced. All right, so what I will do is connect to our jump host, which is currently sitting in Sydney, and I will trigger the very first use case, which is basically LBaaS. So like Anthony mentioned, this customer wanted to achieve quite a good number of SDN use cases. The funny thing is that initially we pulled Nuage in as well. So Nuage was actually working closely with us; we managed to create playbooks to integrate Nuage with our own Helion OpenStack distribution. And there were long nights, right, Anthony? We spent a couple of long nights working with the folks from Nuage in Sydney, and then, after we got that portion working, it turned out that Nuage wasn't going to be brought on board for the project. So we had to figure out how to redo it all again, basically just using pure Neutron. So all the use cases that we have here today are based on plain OpenStack Neutron, all right? There is no DCN, there is no Nuage, sadly. We're hoping that for the next customer that will be the case. All right, so we can see from here, from Horizon, that basically we have... sorry, the font size for the console? Yeah, let me give it a try. Perhaps let's make it 18. Is this better? A bit? All right, cool. Well, we'll translate as we're going through as well. Yeah, later we will go into the challenges, but basically we scripted everything. We're going to run this script, which basically starts calling a Heat template, a Heat orchestration template, that we have running in this environment. What this one does is start to stand up a database network; we have a DMZ network as well. And this customer, like Anthony mentioned, they were looking into mode 2. So it's not really a cloud-native kind of application; they actually had some back-end applications that they wanted to migrate into the cloud. And they said, look, Gartner said that mode 2 can also be ported into OpenStack, so you guys have to give it a try. So that's what we did, right? This is not exactly the same infrastructure that they were running at the time, but we tried to emulate roughly the same, just to give you a rough idea. So what we have here at the moment is two web servers; they are actually Tomcat servers. We are going to see the load balancer in front of them; in this case, we're using HAProxy, so we are using the native Neutron APIs for that.
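For reference, here is a minimal sketch of what those native Neutron LBaaS v2 calls look like when driven from Ansible, since the actual creation happened via scripts rather than Heat. All names and member addresses (web-subnet, web-lb, the 192.168.10.x hosts) are illustrative placeholders, not the customer's environment, and OpenStack credentials are assumed to be loaded in the shell environment.

```yaml
---
# Minimal sketch: create an LBaaS v2 load balancer for two Tomcat members.
# Names and addresses are illustrative; credentials come from the environment.
- hosts: localhost
  connection: local
  tasks:
    - name: Create the load balancer on the web subnet
      command: neutron lbaas-loadbalancer-create --name web-lb web-subnet

    - name: Create an HTTP listener on the Tomcat port
      command: >
        neutron lbaas-listener-create --name web-listener
        --loadbalancer web-lb --protocol HTTP --protocol-port 8080

    - name: Create a round-robin pool behind the listener
      command: >
        neutron lbaas-pool-create --name web-pool
        --lb-algorithm ROUND_ROBIN --listener web-listener --protocol HTTP

    - name: Add the two Tomcat servers as pool members
      command: >
        neutron lbaas-member-create --subnet web-subnet
        --address {{ item }} --protocol-port 8080 web-pool
      with_items:
        - 192.168.10.11
        - 192.168.10.12
```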
Of course, this was mostly for the development and QA stage; they just wanted to give it a try and see the behavior. For production, they were actually going to integrate with F5 and make use of some production load balancers. OK, well, this one is running, which is going to take about two more minutes, so let's talk a bit about the limitations. OK, how many of you have tried LBaaS with Ceilometer for the scaling groups? As you may know, in Heat, when you define the scaling groups, you can define thresholds. Once you hit a threshold for your web farm, let's say we define it at 80% CPU utilization, then the scaling group will kick in. Hopefully it will trigger the scaling, bring a new web server into the farm, add it as a member of the load balancer, and everything should be fine. However, having said that, Ceilometer has quite a few known issues. There are instances in which the utilization is actually quite high for your web farm, and Ceilometer just doesn't kick in the auto-scaling group. That happens very often. So what we are doing in Helion OpenStack at the moment is limiting the scope of Ceilometer to billing purposes: we track the utilization of whatever instances you have running in the cloud, but we only use Ceilometer to track and bill. That's all. Fortunately, we are working on a project called Monasca; I don't know if you've heard about it. Basically, it means monitoring as a service at scale, and the scope of Monasca is to monitor the instances and make sure that we can trigger this auto-scaling. No HA capabilities for LBaaS version 2 is exactly the same issue that we have with version 1. So basically, the load balancer is running as a namespace on either the compute node, if we are using DVR, or the network controllers. Yes? OK, yeah. So the question is about the previous session, where they were talking about Mitaka, the integration with Ceilometer, and the new project that they are working on: is it coming? It's definitely coming. Look, there are quite a number of improvements happening in Mitaka. Sadly, or I shouldn't say sadly, because actually it's a good thing for the enterprises, the commercial distributions, the ones that are enterprise-grade and ready for production, are behind the cycle that is happening upstream, right? It's usually between three to six months, the gap that we have. We have people like Mirantis, which are very efficient, whose releases are very close to upstream. In the case of Helion OpenStack, actually, this week we are releasing Helion OpenStack 3.0, and that one is going to be based on Liberty. So again, whatever improvement is going on at the moment in Mitaka is likely going to be ported over in 4.0. So it's getting there, but it's not as close to upstream as you may like, all right? So again, these are challenges from the operational side. They may not apply to Mitaka itself, but this is how we addressed them in this case using Helion OpenStack 2, which is based on Kilo; so we are a bit behind. Okay, so you may know that if we are using network namespaces for the load balancer, the moment that either your network node dies or the compute node dies, your load balancer is gone, right? So that's a limitation that we have. This is actually being addressed in the next release, the Liberty-based one, with something that we call Octavia.
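Before getting to Octavia below, here, for reference, is roughly what the Ceilometer-driven scaling group described a moment ago looks like in a Heat template. This is a minimal sketch under assumptions, not the customer's actual template: the image, network, and alarm parameters are illustrative, and as noted in the talk, the LBaaS v2 member wiring could not be done from Heat at the time.

```yaml
heat_template_version: 2014-10-16
# Minimal sketch of a Ceilometer-driven scaling group (illustrative only).
resources:
  web_asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: tomcat-web            # hypothetical image name
          flavor: m1.small
          networks: [{network: dmz-net}]

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: web_asg}
      scaling_adjustment: 1
      cooldown: 120

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80                    # the 80% CPU threshold mentioned above
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_up_policy, alarm_url]}
```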
Okay, so basically Octavia is going to be an actual virtual machine acting as the load balancer, and there are going to be HA capabilities. So hopefully, when the network node goes down, or the compute node goes down with DVR, we should be able to migrate Octavia to another node and your load balancer will continue to work in a seamless fashion. And the last point really is around Horizon: this particular version doesn't have Horizon integration, so you can't see the LBaaS resources being spawned within the Horizon interface. To me, that's not really a big issue, because you can still do it through the command line, which is fine from my perspective. But the particular customer that we were dealing with wanted to see everything from a single pane of glass, so there was a little bit of a challenge there from their perspective. I don't think most people in the room are going to see it as a massive negative, but that's okay. And then the last issue was around LBaaS v2 at that particular time and Heat integration. We all know it's coming, which is good, but at that stage, we had to go down the Ansible path because the Heat integration wasn't there. So this was the workaround that we came up with to be able to do it. Let's take a look at how our demo is going. Right, I think it's up and running. So like Anthony mentioned, if you go to the network tab here, you won't be able to see LBaaS, and the reason for it is that LBaaS v2 has no Horizon integration as of Kilo. In Liberty, I understand it won't be there either, at least in our commercial release; it's supposed to happen in Mitaka, hopefully. So not a big deal. Again, this particular customer was very keen on infrastructure as code, and everything is supposed to be driven via APIs or command lines, so it's not actually a big issue there. Okay, so let's take a look at the actual script that stood up the infra, and hopefully we should be able to see from here the floating address for our so-called load balancer. Let's just try to paste it into a new tab. In this particular case we are using Tomcat, and the port that we are using for the load balancer is 8080. If everything goes smoothly, we should be able to see, on your left side, the connectivity to the database. All right, so all these fields that are coming up on this side are actually coming from the backend, which is running Oracle Express Edition; that was the instance that we spun up on the DB segment. Now, how do we test the round robin, because this thing is supposed to be working in a round-robin fashion? I am using frames here, so this particular frame may not get the updates from the second web server, so what we can do is call the IP address of the load balancer directly. The load balancer is just doing round robin, so we should see it toggling between the two: web one, web two, web one, web two. Yeah, that's right. So it works in a round-robin fashion. Let's talk a bit about what we call proactive scaling. Like Anthony mentioned before, there is no integration yet between Heat and LBaaS version 2. So, by rights, Ceilometer cannot really work with the auto-scaling groups here, which is actually a good thing, because it doesn't work most of the time anyway. So what we did in this particular case: the customer was using OpenView, which is an enterprise monitoring tool, so we got OpenView to monitor the actual load on the instances and just trigger our proactive scaling scripts, right?
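Those proactive scaling scripts are conceptually very simple. Here is a hedged Ansible sketch of what such a scale-up might look like: boot one more Tomcat instance and register it as an LBaaS v2 pool member. Image, network, and pool names are assumptions for illustration, not the customer's real ones, and the os_server module assumes the shade library and cloud credentials are available.

```yaml
---
# Sketch of a "proactive scale-up": boot one more web server and add it to
# the existing LBaaS v2 pool. Intended to be triggered by an external
# monitoring alarm. All names are illustrative.
- hosts: localhost
  connection: local
  tasks:
    - name: Boot an additional Tomcat instance
      os_server:
        name: scale-web-1
        image: tomcat-web            # hypothetical image
        flavor: m1.small
        network: dmz-net
        wait: yes
      register: new_web

    - name: Register the new instance as a member of the load balancer pool
      # Assumes the returned server facts expose the fixed IP as private_v4.
      command: >
        neutron lbaas-member-create --subnet web-subnet
        --address {{ new_web.server.private_v4 }}
        --protocol-port 8080 web-pool
```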
So what we'll do now is call, let's say, a scale-up. If we go back to our network tab, we should be able to see that a new instance is starting to come up. This one is called scale-web-one. So again, if we want to use a pure OpenStack solution, what would be the way to do it? Ideally, we wouldn't be relying on enterprise monitoring tools in order to achieve this. The good point is that with Monasca, and there were quite a few Monasca sessions during the summit, I don't know if you had the chance to attend them, basically they mentioned that we can enable alarms, we can create our own alarms, and we can actually monitor the load on the instances running inside the cloud, all right? So we should be able to create the triggers as well and, based on events, call the scripts and do what we call proactive scaling. So again, all of this is trying to bypass the limitations that we're facing with the current release. And this particular customer was using third-party monitoring tools, which they wanted to continue to use no matter what resource pool they were consuming or monitoring. So what we did was tie into those, and we were able to do proactive scaling and scale-down based on the monitoring tools that they were already utilizing, basically just hitting the API from there to get the responses. That's right. Let's see if our new Tomcat server is up. It seems to be there. Sorry about the resolution once more. Let's just clear this and try to test our LBaaS again. Hopefully we should see a third web server, which is scale-web-one. All right, that seems to be working fine. Anthony, do you want to talk more about firewall as a service? That will be the next one. Yep, okay. So the next live demo that we'll go through is firewall as a service. What we were asked to achieve here was to create a dynamic firewall for their PoC environment to allow them to block ports and to create security profiles. And what they were looking to do was to control that via code. They use a lot of continuous integration; in this particular environment, they had a Jenkins rig running. At runtime, what they wanted to do was inject into the IaaS layer the policies to be applied for the tiered architecture that they were standing up. And what we were able to do was give them that via the API in this particular case, and this is the demonstration that we'll do, although there is integration at the Horizon level and we'll show you that before we start; don't let me forget to do that. Once the rig was initiated, the code is then pushed to the platform as the infrastructure is being stood up, and the security policies are injected at runtime. So we'll give you a demonstration of those security policies being literally injected, and we'll show you how they run and what they look like from the Horizon portal and from the command line as well. All right, let's go back to our network tabs. Let's check firewall as a service, and there is nothing in these lists, right? So we don't have anything at the moment. Okay, so this is by no means a replacement for an enterprise firewall. What we are using is basically the firewall as a service which is provided by Neutron, and what I will do is a very simple test case.
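Before the test, here is roughly what the firewall script called in a moment ("case three") does, sketched as Ansible tasks around the Neutron FWaaS CLI: deny ICMP, allow the Tomcat port, group the rules into a policy, and create the firewall from it. Rule, policy, and firewall names here are illustrative assumptions, not the names used in the actual PoC scripts.

```yaml
---
# Sketch of the firewall-as-a-service rules used in the demo:
# drop ICMP towards the load balancer, keep the Tomcat port open.
- hosts: localhost
  connection: local
  tasks:
    - name: Rule to deny ICMP
      command: >
        neutron firewall-rule-create --name deny-icmp
        --protocol icmp --action deny

    - name: Rule to allow HTTP traffic to Tomcat on 8080
      command: >
        neutron firewall-rule-create --name allow-web
        --protocol tcp --destination-port 8080 --action allow

    - name: Group the rules into a policy (deny evaluated first)
      command: >
        neutron firewall-policy-create
        --firewall-rules "deny-icmp allow-web" web-fw-policy

    - name: Create the firewall from the policy
      command: neutron firewall-create --name poc-firewall web-fw-policy
```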
I will take a look at the IP address that we are using for the load balancer; in this case it's 202.33. And let's just try to ping it from this console. Okay, I can actually ping it. So the customer was a bit concerned about this. They said, look, you know, have you heard about denial of service and this kind of stuff? You can actually ping it from anywhere, and so maybe we'll go down. So we are a bit uneasy with this implementation. The sad thing is that we cannot really control ICMP by using security groups on the load balancer, right? Because this floating IP is actually sitting on the load balancer, and the load balancer is not an actual instance, at least not until we get to Octavia. So the only way that we really have to block this ICMP from within OpenStack is to enable firewall as a service. So let me go to case three. What I will do is just call this particular script, and we will see that I am creating some rules to deny ICMP and still allow the traffic going to 8080, which is our Tomcat server, right? Let's go back to Horizon and take a look at the firewalls, and hopefully we should have a firewall which has its policy in place, with two rules: one to deny ICMP, one to allow HTTP, and these are the rules. All right, let's try to ping it again. Where's my ping? Here. All right. Two out of four. It's working fine so far. Hey, don't jinx it. Okay, let's get Anthony to talk a bit about VPN as a service. Or let me see, I think we have some limitations that we want to highlight about this particular case. Yeah, there's a couple of limitations we need to talk about. Yeah, well, like we mentioned, this is by no means a replacement for an enterprise firewall. There are APIs in place from major vendors like Check Point which integrate with Neutron, so that's the ideal case for production environments. Again, this is only QA and development, so it's just for testing purposes. The second issue is that if DVR is enabled, which in our case it is because it's the default in Helion OpenStack, then it's not going to filter the east-west traffic; it's only going to filter north-south, right? So only traffic coming from outside. So again, in this particular case, a combination of security groups for the particular Tomcat instance, let's say, and firewall as a service to block the ICMP on the load balancer would be the ideal situation, or at least the way that we presented it to this customer. All right. Yeah, that's pretty much what we mentioned before. Let's jump into what I believe is the most interesting use case, which is VPN as a service, and Anthony will talk a little bit more about this requirement from the customer. Okay. So VPN as a service was interesting to the customer that we were working with for a number of reasons.
Probably the biggest reason they were interested in it, and wanted to see how to connect two OpenStack clouds together in this particular instance, was because, like I alluded to before, they were utilizing quite a bit of public cloud; but because of the regulatory issues that they had and a number of the monolithic databases they have sitting at the back end, they wanted their private cloud environment to be able to hook up to either alternate private cloud environments, or even to public clouds in the future, while still having their databases located where they are at the moment, securely left in the locations and the data centers where they currently reside. They have a number of Oracle and Teradata databases, for example, so massive systems of record. A lot of these need to be made available to the digital groups that need to consume them, and this is one way that they were looking at to do that. So literally to link two clouds together, to do proof of concepts on whether they can ship information from one legacy environment across to what they were calling their bimodal new world, so to speak, and allow the legacy groups to continue working on the solutions and the platforms that they were currently utilizing in their legacy environments, but also allow their digital teams to move faster and still get to that information at the back end. So to describe it at a very, very high level, we've essentially got two OpenStack clouds running. One of the clouds, we will show, has the web servers sitting in it; in the second site, site A in this particular case, we've got the database located. So it's a pretty simple two-tier architecture: a small Oracle database on site A, and on site B, Tomcat with a set of web pages sitting on it. And what we're going to do is show you that they can't talk to each other; then, when we enable VPN as a service via code, which we can also disable via code, you'll be able to get to the database and retrieve the information. All right, there are a few discrepancies between our actual demo environment and what the customer had at that time. This PoC actually happened in January, and at that time we still had our now-defunct Helion Public Cloud, which is based on OpenStack. So at that time, some of the workloads that they planned to run we actually tested on the public cloud, right? Based on OpenStack. And the web farm was basically sitting there. So ideally, the database will be kept on premise; they don't want the data sitting God knows where. That was the whole idea that they had in mind, right? So let me go back to the environment, which is here. And what we'll do now is, let me see, I think I may need to log in again to this particular second cloud. Both of these clouds in this case are sitting in the same data center in Sydney, but they are isolated clouds, so we have two separate Helion OpenStack implementations, right? I'm going to log in as the VPN tenant for this particular cloud. And if we go to the network topology, we should be able to see that this one is only running the web services, the Tomcat, right? So Tomcat is here. Let's take a look at the other cloud. Log out of this tenant, quickly go into my VPN tenant, and this guy should be running our database, okay?
Two isolated clouds, one running the backend, the other one running the web services. So this is a slightly simpler use case in terms of the infrastructure: I don't have a load balancer sitting in front. So what I will do is hit the floating IP of the web server itself, which is the one that we have over here. So this tab, let's open a new one. Port 8084, Tomcat. And there you go: we are getting some JSP errors. The reason for this is that, like I mentioned before, on this side we actually need the database connectivity, which is being pulled from our backend, and we have no backend connectivity at the moment. We can double-check this from the Neutron tab; we should be able to see the VPN tab here, and there are no policies and no IPsec site connection, right? So let me try to enable it for this particular tenant. Once it's enabled, we should be able to go back to that web page and the JSP errors should disappear, because it should now have a database that it can talk to, if it works. All right, let's just give it a few seconds. Cool, let's try to refresh our Horizon first. Let me just make sure that we have an IPsec link; it seems to be active, the IKE policies are in place. And let's try to refresh this guy. We have a database now. Okay, so basically we just created the tunnel between the on-premise cloud and the public one. Okay, not a big challenge there. Still, we have quite a number of limitations which I want to cover now. Basically, this doesn't work with DVR. Okay, if we want to run this implementation and DVR is enabled, which basically puts the floating IP on the compute side, so we would have an FIP namespace running on the compute node itself, the implementation is not going to work. The reason is that there is still no integration between VPNaaS and DVR. This is going to happen hopefully in our next release; I think that a lot of work was done in Liberty, so hopefully that won't be a challenge. So what we have to do for this particular tenant is, the moment that we create the router that is going to carry the IPsec link between the two sides, we have to create it as a centralized virtual router. Okay, that was one of the limitations. Again, no big deal. The only problem is, as you know, the FIP will be sitting on the network controller, so all the traffic will travel through the compute to the network node and then outside, so it's just a single point of congestion. The second challenge, which is perhaps more worrying, is that the current implementation that we have for VPN as a service is based on what we call pre-shared keys. So basically we just define a word that we're going to use, and that word is going to serve as the seed for the encryption, and it has to be the same on both clouds. So it's not exactly the safest of implementations, but it's what works, right, which is an important thing. And the last limitation: Helion OpenStack is still based on Openswan for the VPN implementation, and Openswan happens to run as root. Not to worry you too much; at the moment there are no holes, no bugs that we are aware of, but as you know, if there were an exploit of Openswan while it's running as root, it's not actually that difficult to get root on your machine, right? So yeah, that should be a concern. We reckon that, again, there is work taking place at the moment in the community, so hopefully there will be a stronger implementation in the future.
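For reference, here is roughly what enabling one side of that IPsec connection looks like, again as a hedged Ansible sketch around the Neutron VPNaaS CLI. The router, subnet, peer address, and CIDR are placeholders, and the same pre-shared key has to be configured on both clouds, which is exactly the weakness just described.

```yaml
---
# Sketch of one side of the site-to-site VPN. The other cloud needs the
# mirror-image connection with the same pre-shared key. All names,
# addresses, and CIDRs are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Create the IKE policy
      command: neutron vpn-ikepolicy-create site-ike-policy

    - name: Create the IPsec policy
      command: neutron vpn-ipsecpolicy-create site-ipsec-policy

    - name: Create the VPN service on the tenant router and subnet
      command: >
        neutron vpn-service-create --name site-a-vpn
        vpn-router db-subnet

    - name: Create the IPsec site connection towards the other cloud
      command: >
        neutron ipsec-site-connection-create --name to-site-b
        --vpnservice-id site-a-vpn
        --ikepolicy-id site-ike-policy
        --ipsecpolicy-id site-ipsec-policy
        --peer-address 203.0.113.10 --peer-id 203.0.113.10
        --peer-cidr 192.168.20.0/24
        --psk sharedsecret
```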
Yeah, just to be clear on the Openswan challenge, let's call it that: the financial institution that we were working with, even though it was a PoC environment, appreciated being able to control VPN as a service by code, but that was seen as quite a large security hole. So please keep that one in mind, all right? It's good for PoCs, good for demos, but yeah. Backend as a service? Bare metal as a service. Bare metal, yes. All right, do you want to talk about this particular customer? Okay. So this is our final live demo. Bare metal as a service was interesting to the customer because they had a lot of challenges; number one was actually time to market. Even deploying into their particular environment would take quite a substantial amount of time, let alone if they needed to go to bare metal. And to be brutally honest, taking something from a set of virtualized cloud instances and then deploying it onto bare metal, after having already pushed into a virtualized environment, took them four times longer, and even working out how to cut across from that was a policy nightmare within that particular organization. So they were really interested in bare metal in two ways. Number one: can we actually increase the number of compute nodes in an automated way to increase the size of our environments? And number two: can we come up with a methodology, a single framework in terms of scripts, to deploy into an environment which is running on OpenStack, but also do the same thing, provision bare metal and deploy onto bare metal as well, using a very similar set of code? This is what they asked us to prove. It was pretty challenging, I must admit. But... Well, the good thing is that in our distro, we already have all the playbooks in place for this kind of situation. So ideally, it's just as simple as dropping a new server in your rack, taking note of the MAC address, defining the model for this particular compute, and just running the playbook, and it should work fine, right? One of the main things that we are trying to address here with Ansible is basically what we call HLM, the Helion Lifecycle Manager. If we had had this same conversation with this customer one year back, it would have been a totally different one. Let's go back in time to our Helion OpenStack 1 release, which was basically based on TripleO. The conversation basically would go: okay, guys, I want to upgrade today, or I want to patch my cloud. Let's say I am sitting on Juno, which was the case with our previous distribution, and I want to migrate to, let's say, Kilo or Liberty. How do I go about it? Basically, the conversation was: well, you stand up a new cloud, a new cloud based on Liberty, and then we start moving workloads, which is perhaps not the ideal situation, right? So with these Ansible playbooks, what we're trying to do is make it a seamless experience for the customer when it comes to migrating across release cycles, all right, and make it seamless for the workloads, so the end user shouldn't be affected. Let's take a look at one of the playbooks that we have here, for example. On this particular server, what I will do is check how many systems I currently have inside Cobbler.
So for HLM, we basically have a few components. Our repositories are held in Cobbler, which is, of course, open source, and in Cobbler we also hold the definitions of how many computes we have in this particular cloud at the moment. I only happen to have one, which is compute two. I have already created the definition in the model, which is sitting in this particular directory, and what I will do is just call my playbook to update Cobbler and inform it that there is a new compute node coming into the cloud. Okay, this playbook is going to run for about a minute, and hopefully after it is finished, when we check Cobbler again, we should be able to see an additional compute node. In the meantime, let me connect to the iLO on this particular HP machine; I have to use Internet Explorer for that. Yes, this is the most boring use case; that's why there are people walking out. All right, the machine is currently down, so I only have the Momentary Press option, which is your favorite button, right, to power on the machine. I'm not going to do that; hopefully my playbook is going to take care of everything. Okay, so this one ran without any issues. Let's check Cobbler again, and we can now see two compute nodes. So let's try to power on this baby, and for that, I have created a script which is basically calling a playbook again. That one is sitting in the bare metal case; we call it reimage. So basically what this one will do is power on my machine. In Ansible, we are making calls to the IPMI tools, and using IPMI we control the booting and rebooting of the machines. We can also set and configure the PXE boot, and which interface is going to PXE boot to grab all the binaries from our repos sitting on the Cobbler machine. Okay, so hopefully this guy is going to power cycle. Let me just get rid of this sleep condition; Control-C to continue. It's powering down now; it's already down. Let's continue. If we're successful, we should see that the other server starts to PXE boot. Yeah, we should. This sleep condition is going to be there until the machine actually powers on. Which is happening; we can see more options here, so that means it's actually coming up. Okay. Again, the installation is going to take about 20 to 30 minutes, which I don't think we have time for. Actually, we do, we are the last session, but I don't think you want to watch it anyway; I don't want to watch it. So what I will do instead is walk you through some of the slides, and we will explain how the model works. This is how the Ansible model works, okay? We basically create definitions in this particular file for your whole environment. All the computes are basically here. All your VSAs, in case you are using VSAs as the storage back end, are here, or if you have a 3PAR definition, it's also here. Your controller definitions are sitting here as well, so everything is basically YAML-defined in the Ansible model, okay? So the very first time that I run the Cobbler playbook, it is going to create the repositories holding our hLinux distribution, which is the one that we use for Helion OpenStack, and it's also going to configure the DHCP configuration, right? So basically we provide the MAC address of that particular machine and the IP address that we want to use, and everything is basically seamless to the user.
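As a rough illustration only — the field names here are approximate, not the exact Helion input model schema — a new compute entry in that model, and the kind of IPMI calls a reimage-style playbook makes, look something like this. The addresses, credentials, and role name are placeholders.

```yaml
# Illustrative only: approximate server entry for a new compute node in the
# input model (field names simplified, not the exact Helion schema).
servers:
  - id: compute3
    role: COMPUTE-ROLE
    ip-addr: 192.168.10.23          # address Cobbler/DHCP will hand out
    mac-addr: "8c:dc:d4:aa:bb:cc"   # MAC noted down when racking the server
    ilo-ip: 192.168.9.23
    ilo-user: admin
    ilo-password: secret
---
# Reimage-style playbook: force PXE boot on the next start, then power on,
# so the node installs its OS from the Cobbler repositories.
- hosts: localhost
  connection: local
  tasks:
    - name: Set the node to PXE boot on its next startup
      command: >
        ipmitool -I lanplus -H 192.168.9.23 -U admin -P secret
        chassis bootdev pxe

    - name: Power the node on
      command: >
        ipmitool -I lanplus -H 192.168.9.23 -U admin -P secret
        chassis power on
```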
So the only thing that we need to worry about is racking the server, taking note of the MAC address, and updating this particular model, okay? Okay, so what we did just now, the last playbook that I ran, which is called reimage, basically makes a few calls through the IPMI tools to get the machine to boot. We are not using Ironic at the moment; perhaps that will come in the future, but at the moment we think that the IPMI tools are good enough for what we want to achieve. So the moment this machine actually starts booting, and it's going to take a while, it will go into PXE boot, all right? It will grab the PXE boot image, and we will start to get the binaries for our hLinux distribution loaded onto the machine. Step number two, after we have the machine running with the operating system, is to run another set of playbooks, which is going to bring the machine in as a Nova compute inside our cloud. And that's pretty much what we do for bare metal as a service at the moment. All right, guys. All right, I think that's all we have. That's it. Do you have any additional questions? Four demos, and yeah, we'll be... No, I will leave this guy running. We'll be here for any questions afterwards, but I think we're out of time. So thank you very much for attending. We hope you enjoyed it. Thank you for coming. Cheers.