Hello, everyone. It's time for our next presentation. I'm pleased to welcome Steven Hardy, principal software engineer from Red Hat in the United Kingdom. This presentation will be about the main components of TripleO, and we will have a deep dive into Heat templates, so it's a follow-up on Jay's previous presentation. Please welcome Steven Hardy.

So as you heard in the introduction, my name's Steve Hardy, software engineer at Red Hat, and I've been working full-time on OpenStack for nearly four years. Quite a lot has happened during that time; OpenStack is a very fast-moving project. I'm going to try to give you a bit of an overview of OpenStack and its various components, and then go into detail on Heat and TripleO, which are the two projects I'm currently most heavily involved with.

So what are we going to talk about here? The majority of it is going to be about OpenStack, so it might be worth considering for a moment what OpenStack is. Obviously it's cloud software, which a lot of people have heard about, but really it provides you with an abstraction layer: if you have a data center and you want to provide on-demand access to all kinds of different resources, that means not only compute resources running VMs, which is the use case most people tend to think of when it comes to OpenStack, but also storage, virtual networking, and various other things we're going to talk about in a moment. So we're going to give you a bit of an overview of OpenStack, and then do a deep dive into the more advanced capabilities of Heat. This follows on somewhat from Jay's introductory talk earlier on, so it would be good if you've seen that; if not, you might want to check out the recording at a later date.
And then we're also going to talk a bit about Ironic, which is the bare metal provisioning piece of OpenStack, and how that can help us with the TripleO vision: using OpenStack components to deploy OpenStack in a production environment.

So a few years ago OpenStack was very much smaller than it is now. It was primarily focused around abstractions for compute, which mostly means running VMs across multiple hypervisors, plus block and object storage. Now, a few short years later, there are a large number of different projects; this slide may not even be accurate, because I made it a few weeks ago. "Everything as a service" is the way I like to think of it: any conceivable abstraction in your data center is likely to have someone working on a REST API that allows people to more easily interact with those resources. We're going to focus on Ironic, which allows you to provision actual bare metal hardware, so dedicated bare metal machines for your workload rather than VMs. And then we're going to talk about the Heat orchestration component, which gives you a more declarative interface to all of these different tools. With this proliferation of different HTTP APIs, they all follow some common patterns, but they're not necessarily 100% consistent, and there are different command-line tools, although there is a common OpenStack client effort going on as well. Really, you need some way that is less imperative to define the resources in your cloud. As Jay's talk introduced earlier on, there's a template model, which Heat accepts, that allows you to define relationships between your different resources and instantiate them in the right order, without you necessarily having to worry explicitly about dependencies between components yourself.

Before we get into the details of the orchestration features themselves, it's worth differentiating a little bit between orchestration and config management. This is a bit of a blurry line, particularly in the cloud world, because there are a number of config management tools that provide some orchestration capabilities, and some more orchestration-focused tools that do have built-in config management capabilities. Heat tries to draw a fairly well-defined line between those two: it has no built-in software configuration capability, but it has an implementation-agnostic way to drive existing config management tools. We'll get into a bit more detail about that, but I just wanted to clarify that it's not a replacement for Puppet or Ansible or anything like that; it's really about organizing and managing your interactions between different services and OpenStack.

To follow on from Jay's introduction earlier on, the instantiated environment in Heat is a stack: that's the name for the resources which have been deployed for you by Heat, based on the YAML template you fed in. If you're familiar with AWS CloudFormation, it's a similar kind of concept; we have a native template language in addition to the capability to drive CloudFormation templates. Another really nice feature of Heat is that there's a very easy way to compose multiple fragments of your environment. You can define a Heat template that contains, say, some particular piece of software, the server resource that is going to host it, perhaps some networking to support it, and some particular type of storage, and that can be a unit that is then easily reused.
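As a rough illustration, a minimal child template for such a reusable unit might look like the following; this is a hedged sketch, with hypothetical parameter and resource names, and is far simpler than what the real TripleO templates do:

    # server_with_controller_config.yaml -- a hypothetical reusable unit:
    # one server, parameterized so different environments can vary it.
    heat_template_version: 2015-04-30

    description: A single controller server, intended to be composed and reused.

    parameters:
      image:
        type: string
        description: Image to boot the server from
      flavor:
        type: string
        description: Nova flavor (could map to a bare metal flavor)

    resources:
      controller_server:
        type: OS::Nova::Server
        properties:
          image: {get_param: image}
          flavor: {get_param: flavor}

    outputs:
      server_ip:
        description: First address of the server, for wiring up other units
        value: {get_attr: [controller_server, first_address]}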
The templates are all parameterized, and as Jay introduced earlier on, there's a concept of an environment. So if you have a staging workflow and you need different parameters, or even different nested stack implementations, between say pre-production, development, and production environments, it's very easy to do that with a maximum amount of reuse.

This is just a very quick example; I'll try not to overlap too much with the earlier talk, but it will help with the concepts we discuss later. When we talk about composability, we're really talking about referencing one Heat template from another template, so it's a parent-child relationship. In this case, we've got a parent template which references an OpenStack controller type alias, and that is just a way of referencing this server-with-controller-config.yaml template. The way you create that alias is through the resource registry, which is just a mapping between an alias and an implementation. So you do your stack create, pass in the template and an environment file, and those two combine to fully define what's going to be deployed in your cloud.

Once you've got your unit of deployment sorted out and you've got something that works really well, pretty much the next thing you're going to need to do is build lots of them. You're going to need to scale out horizontally when your application becomes successful, because you'll need to handle increased load. There are a couple of different ways of doing that within Heat. The one I'm going to talk about primarily today is the resource group abstraction, which just provides a really easy way of saying "make me however many of a particular resource type". You can combine that with the composability we just talked about and scale a Heat template out to any number, depending on the capabilities of the cloud you're deploying onto. There's also an autoscaling group resource, which has integration with Ceilometer alarms and allows you to do much more event-driven scale-out, but we're going to talk about the more static grouping method today.

So you've got your server stood up; perhaps you've got some storage and some networking set up. The next thing you're going to need to do, as we discussed earlier in terms of config management, is deploy some application onto that hardware. The way you do this in the Heat model is to define a software config resource in your YAML template. This doesn't care what tool you use: it just accepts, for example, a Puppet manifest, an Ansible playbook, a shell script, a Python script, or whatever it is you want to run on your server. You then reference that from a software deployment resource, and this is the thing that actually runs your config; it knows how to associate that piece of configuration with a particular server. There's a signaling mechanism, using some agents inside the instance, which knows how to collect that configuration, run it on the node in question, and then send a signal back when it's done or if it fails; we collect the standard out, the standard error, and the return code of whatever it is that you run. The nice thing is that it's well integrated with the templates: you don't have to do a handoff to another tool, although you could if you wanted to. You could deploy with Heat and then configure with Ansible or a Puppet master or whatever.
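A minimal sketch of that config/deployment pairing, assuming the controller_server resource from the earlier fragment; the script, input names, and values are invented for illustration:

    resources:
      config_script:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script            # tool-agnostic; could also be puppet, ansible, ...
          inputs:
            - name: peer_list      # hypothetical input, fed in at deploy time
          outputs:
            - name: result
          config: |
            #!/bin/sh
            # Runs on the node via the in-instance agents; stdout, stderr
            # and the exit code are signaled back to Heat when it finishes.
            echo "wiring this node to peers: $peer_list"

      run_config:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: config_script}
          server: {get_resource: controller_server}
          input_values:
            peer_list: "192.0.2.10,192.0.2.11"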
But particularly in the case where you want to scale out, it's more convenient if you can define everything in one place and then just multiply up the environment as it grows.

Another common requirement when you're deploying a more complex application, and in particular OpenStack, which is what we're going to talk about deploying in a moment or two, is that configuring each individual node is not enough. Each individual node can be set up, but then the nodes need to be configured to know about all of the other nodes. I'm calling this cluster configuration, because that's effectively what we're talking about: you deploy a cluster of near-identical (or completely identical) nodes, and then they need to be wired up so they can talk to each other. The example we're going to talk about today is OpenStack controller nodes, where you install a bunch of API services and a bunch of RPC and database components, and all of them need to be wired together; otherwise they're not going to cooperate and increase your capacity as a whole.
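As a hedged sketch of how that grouping plus cluster wiring might look in template form, reusing the hypothetical alias and config_script from the earlier fragments; the refs_map attribute usage is an assumption about how the group's servers would be fed to the deployment group:

    # environment.yaml -- map the alias onto the reusable child template
    resource_registry:
      OS::MyOrg::Controller: server_with_controller_config.yaml

    # parent template fragment (other parameters elided for brevity)
    resources:
      controller_group:
        type: OS::Heat::ResourceGroup
        properties:
          count: 3                        # e.g. three controllers for HA
          resource_def:
            type: OS::MyOrg::Controller   # resolved via the registry above
            properties:
              image: centos-overcloud     # hypothetical image name
              flavor: baremetal           # hypothetical flavor name

      cluster_wiring:
        type: OS::Heat::SoftwareDeploymentGroup
        properties:
          config: {get_resource: config_script}
          # refs_map gives a name -> server-id map covering the whole group
          servers: {get_attr: [controller_group, refs_map]}

Scaling out is then just a matter of changing count, or overriding it per environment.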
So you've got this nice configuration method, you've got your templates sorted out, you deploy your workload on a virtual environment, and you're happy that it works great. But a lot of people have requirements for better performance, and this is where Ironic bare metal provisioning comes in. This is an OpenStack API that basically makes bare metal provisioning possible in a way that's very similar to deploying virtual machines via the Nova compute service. In fact it has a Nova driver, such that you deploy in the exact same way that you deploy a VM, and what you end up with is a bare metal machine. The key difference between this and more traditional provisioning methods is that you don't run an installer: you prepare an image ahead of time, you deploy that image onto the bare metal node, and then you're basically ready to go. The advantages there are, in some cases, performance, but in terms of making sure all of the nodes are exactly the same, and managing drift between different nodes, this can be a nice method as well. There's quite a lot of support from the hardware vendor community, and Ironic is proving to be quite a successful project; that's quite a good motivator if you're building a deployment tool, because you already have built-in support for a bunch of different hardware. That's one reason why TripleO uses Ironic for bare metal provisioning: it has very good pluggable support for different hardware types.

So, hopefully you can see this okay. This is just a diagram which tries to give you a bit more of a granular view of how things work inside Ironic. As I mentioned, the Nova API in OpenStack is the compute interface; traditionally you would use that to launch VMs on, say, a KVM or Xen hypervisor compute node, but in this case it's configured so that the Nova scheduler knows how to talk to the Ironic API instead. The user comes along and says, okay, I want a server running this image, and the flavor is a configuration that points to a bare metal resource type. So it works in a very similar way to starting a VM, only we have an extra step: the conductor needs to know how to power on the physical hardware. We do that via a plug-in of some sort, and in this case I'm illustrating an IPMI interface, which is obviously quite a common standard that knows how to talk to your hardware, turn it on, and perhaps do some other actions as well.

So you power on your node, and then the Ironic service is able to PXE boot a RAM disk onto that node, which pulls down the image that you want to put onto the node. There's some code in the RAM disk that knows how to deploy that image onto the local disk; then it reboots the node, and it comes up with the image that you want to be running. So that, in a nutshell, is what Ironic does for you, and the key advantages, as I mentioned, are the pluggability and the driver support that you get for free if you choose to use it. TripleO has chosen to use Ironic, and several other deployment tools inside the OpenStack space and elsewhere, such as Bifrost, are using it as well, so it's proving to be quite a nice solution for these sorts of bare metal provisioning cases.

This brings me on to the integration of these two pieces. When you have a Heat environment and a bare metal provisioning capability, you can then deploy a complex workload, and OpenStack is one of the more complex workloads you can consider. As I mentioned at the start of the talk, things are moving very fast, and there are a lot of relatively complex distributed applications that all need to be configured in subtly different ways, so you need a flexible and repeatable way to deploy that workload. The exact same problem exists for many other kinds of workloads, but TripleO is focused solely on deploying OpenStack on physical hardware.

It's a bit of a weird concept, kind of a chicken-and-egg thing: you start off with an OpenStack environment and you end up with another OpenStack environment. You can reasonably ask the question: how do you get the first OpenStack environment? The answer is that at the moment we have some scripts that do a single-node install, configuring OpenStack on one node using Puppet, and then the exact same Puppet implementation is used to configure the OpenStack services on the production cloud. We've adopted some terminology here which is worth remembering, because it turns up in quite a lot of documentation and blog posts related to TripleO: the deployment cloud is called the undercloud, and this is the small OpenStack that is used to bootstrap your production OpenStack; the overcloud is the production cloud that you then deploy. At the moment the tooling expects you to deploy only one production cloud; in the future there's no technical reason, other than probably a few hard-coded assumptions, why you couldn't deploy multiple overclouds, and that's definitely something we would look to support more completely. You can imagine that being particularly nice in a developer environment, where developers might want their own test environment separate from other users doing testing, or in a pre-production test environment; I think that could be very useful in the staging workflow you end up needing before rolling things out to production. So you have your deployment and management tooling in your small OpenStack, and then you have your OpenStack production cloud, which is the thing actually deployed by your undercloud.

If we go back to the Heat concepts we looked at briefly a few minutes ago, that is grouping resources, composability (nested stacks: a stack which references another stack), and software configuration, we can see a bit more concretely how those features combine to make this kind of deployment possible.
So the undercloud deploys groups of nodes, and unsurprisingly, seeing as I've already talked about the resource group abstraction inside Heat, we make use of that to deploy however many controllers you want (you would probably deploy three if you want an HA deployment) and however many computes you want, which are basically the hypervisor nodes that support the deployment of VMs. There are also three different types of storage node: Ceph storage nodes, which run the Ceph OSD component; Swift storage nodes, which allow you to scale out your object storage; and block storage nodes, for when you have a requirement for a basic Cinder storage implementation and you choose not to back Cinder with Ceph.

So we've now got multiple resource groups, and you can imagine these are all in one template; that's actually how we do it. We've got an overcloud YAML template which defines a number of groups of nodes, you say how many of each you want, and then we create some nested stack templates which define each node. That goes away and creates an OS::Nova::Server, and as we've discovered, if you have Nova configured to create bare metal nodes via Ironic, it will just go away and deploy on bare metal, so we're making use of that integration. Then we're making use of the software configuration interface of Heat, in some cases to run scripts to prepare the network, and in a lot of cases to run Puppet in standalone, masterless mode. That's kind of a weird concept until you get your head around it: there is no Puppet master, and all of the data used for Puppet comes from Heat. The way we handle that is that the first thing we do, before running any Puppet on the nodes, is deploy some Hiera data; if you know Puppet, you probably know more about the details of Hiera than I do, but we deploy a big map of Hiera key-value pairs, and then we run Puppet in a series of passes on the nodes in order to get the services fully configured.
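As a loose sketch of that Hiera step, the keys and values here are invented for illustration, not the real TripleO data:

    # /etc/puppet/hieradata/common.yaml -- hypothetical key/value pairs,
    # written onto each node by Heat before any Puppet pass runs
    rabbit_hosts:
      - 192.0.2.10
      - 192.0.2.11
    mysql_vip: 192.0.2.20
    enable_ceph: false

Each configuration pass is then effectively a masterless "puppet apply" of a manifest that looks its values up from data like this, run in sequence until the services are fully configured.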
So this is an illustration of that process. This is not every step that we run, because I wouldn't be able to fit that on one slide, but hopefully it gives you an idea of the conceptual process. The first step, as on the previous slide, is to deploy the servers and do the initial configuration of each unit that is being scaled out. The next step is a number of configuration passes doing the cluster-wide configuration, which is the interface I described earlier, using OS::Heat::SoftwareDeploymentGroup. Each one of those accepts a configuration, which in most cases is a Puppet manifest. Everything is wired in in a rather Puppet-centric way in the default implementation, but we've been careful to keep the abstractions in place such that other deployment solutions could easily be plugged in. In fact, there's an effort going on at the moment to deploy via Docker containers using Kolla containers, Kolla being one of the container communities within OpenStack, and they've very easily been able to wire in deploying via Docker instead of having to use Puppet to configure the services. The point being that if people are sufficiently motivated, they could wire in any config tool they're invested in, but we've chosen Puppet as a first step because of prior experience with that tool.

So this is a slightly more granular model, which combines the workflow we described for Ironic and the TripleO deployment workflow. You have an interface to deploy your cloud that passes some templates and some Puppet manifests into Heat, and Heat then basically builds a big dependency graph. This is kind of the main thing that Heat does: it really cares about dependencies between different components. It parses the YAML model, and inside the Heat engine you end up with a dependency graph which we then know how to walk in a certain order, such that you create things in the right sequence. As a user or an operator, you don't have to care about that explicitly, unless there is nothing in the template which creates a reference between two resources. Jay mentioned the depends_on directive earlier on: if, for example, you were creating two completely independent servers and for some reason you knew they had to be created in a certain order, you can control that explicitly, but in most cases you don't have to care.
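A tiny hedged example of that directive, with invented resource and property names:

    resources:
      server_one:
        type: OS::Nova::Server
        properties:
          image: my_image          # hypothetical image and flavor names
          flavor: m1.small

      server_two:
        type: OS::Nova::Server
        depends_on: server_one     # no data reference, but forces creation order
        properties:
          image: my_image
          flavor: m1.small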
The other thing we're making quite heavy use of in TripleO at this point in time is Neutron. We're primarily using it as an IP address management solution, and there's been some really good work done on network isolation. This is quite a common requirement for production OpenStack workloads, where you want to keep, say, the storage traffic separate from the compute traffic, or the management traffic separate from some other category of traffic; there are a number of predefined overlay networks that can be defined, and we use Neutron to handle that.

So I've got a few minutes left, and I'm going to attempt to run a demo. This is going to be live, and I'm running bleeding-edge upstream code, so there's every chance it could go wrong. It also runs quite slowly on my laptop, so what I might do is talk through a few things, get it up and running, and then we can break for questions; then, providing it doesn't fail horribly, we can go back and look at the result at the end of the talk.

As you may have noticed, I didn't come in with a rack of bare metal hardware, so my workaround is to use virtual machines which are configured to pretend to be bare metal, and Ironic has been configured to drive these via PXE, with a driver that basically uses SSH between the virtual machine host and the Ironic service to control things. So this is not representative of how you would do a real hardware deployment, but it's a reasonable approximation for these purposes, and it's also the same environment most folks would use for development, unless they happen to have access to a bare metal test environment. You'll notice that there's one VM already up and running. We've talked about the undercloud, the management node, a small OpenStack, however you want to think about it; it just so happens that, for reasons related to the tooling we use to create it, we call that instack. You can create a default environment using upstream TripleO, or the RDO community also have the RDO Manager tool, which is based on TripleO; they're both nice ways to get up and running with TripleO, depending on how bleeding-edge you feel like being. In this case the instack node is the undercloud.

So I've got a shell window here; let's have a quick look at some of the services running. Ironic represents the bare metal nodes: we've registered three nodes here, and these are the three VMs which are pretending to be bare metal. At the moment they're all in the power off state, with provisioning state available, which basically means they can be accessed by the Nova scheduler and are available to be provisioned to. We can see there's nothing running in Nova yet, and as we talked about earlier on, there's this flow between Nova, which is the user-facing abstraction and the API that launches the nodes even though they're bare metal, and Ironic, which is the back end that manages the actual hardware deployment itself. Heat is the orchestration tool used to drive the whole process, and we haven't got any stacks up and running at the moment.

To make life easier for operators, rather than driving this directly via a heat command, which is a bit inconvenient unless you're a developer, there's an OpenStack client plugin, and it's as simple as this: if you want to deploy an overcloud, you do openstack overcloud deploy. In my case I've made a copy of the Heat templates used to do the deployment in my local directory; if you don't specify a location, it will just use the default, which is /usr/share/openstack-tripleo-heat-templates. So this is where I cross my fingers and hope it all works. You can see there's a warning because I haven't tagged any of these nodes in Ironic to say whether they're going to be a controller or a compute node; that's just a warning, so it's going to pick two of these nodes at random. The default is one controller and one compute; I would run more, but I'd run out of RAM. Then we get a lot of fairly verbose event listings: these are events coming straight out of Heat, and they give you a nice view of the progress of the deployment. Unfortunately they're getting a bit chopped by the resolution of the screen, and I don't think there's much I can do about that without making it too small. Basically it's going to go through and create a group of nested stacks, each one containing one server and some software configuration, then create a bunch of resources in Neutron, then do a bunch of configuration passes with Puppet, and then, if nothing goes wrong, it will all be completed and we'll have a running OpenStack environment.

I've got another window here so we can watch things as they progress. We can see now that the openstack overcloud deploy command has gone through and created a Heat stack, and this will stay in progress until all of those nested stacks and all the configuration steps have completed; when it goes to CREATE_COMPLETE, that means your OpenStack deployment is done. Similar to Jay's demonstration earlier on, where he did a resource listing of the Heat stack, you can see there's quite a lot more in this case, because it's a much bigger template: there are a bunch of random string resources, a bunch of virtual IP resources, and a series of config resources that are used to configure the nodes. If we now look in Nova, we can see it has launched two nodes, which ordinarily we would expect to be VMs, but because of the way we've configured things it's talking to Ironic, and it's going to spawn two bare metal nodes, which are, again, these two nodes configured to be dummy bare metal, for want of a better description. So we've started these two nodes; they're currently booting. They have already booted the RAM disk and have deployed a CentOS image which contains all of the OpenStack packages. Here we can see Ironic has powered on the two nodes picked by the Nova scheduler, and now we have to wait a few minutes whilst this goes through and the Puppet configuration and other resources within the Heat stack get created.
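For reference, a hedged reconstruction of the demo's command-line flow, using the CLI tools of that era; the template directory is illustrative and the output is paraphrased in comments:

    # Inspect the registered "bare metal" nodes and confirm nothing is deployed
    ironic node-list            # three nodes: power off, provisioning state available
    nova list                   # empty: no servers yet
    heat stack-list             # empty: no stacks yet

    # Kick off the overcloud deployment via the OpenStack client plugin;
    # --templates defaults to /usr/share/openstack-tripleo-heat-templates
    openstack overcloud deploy --templates ~/my-tripleo-heat-templates

    # Watch progress from another shell
    heat stack-list             # overcloud: CREATE_IN_PROGRESS
    heat resource-list overcloud
    nova list                   # one controller + one compute being provisioned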
So thank you very much. We've got ten minutes to go, and I know, having tested this earlier on, that it's likely to take most of the remaining ten minutes, so I would suggest we break for a few minutes of questions now, and then if we have time I'll come back to this.

Yeah, that's a very good question. In this case there's a tool, a script called instack-virt-setup; it's in the TripleO documentation and the RDO Manager documentation. It goes through and basically uses virsh to create the VMs, and then it creates a configuration file which contains the details required: an SSH key and an IP address, basically. For real bare metal you would create that JSON file manually; at a minimum you need the IPMI credentials, but you may well choose to put other details, such as MAC addresses, in there. In terms of discovery, you need to be a bit careful about the definition: we can't just go out and automatically discover random nodes on the network, so you need to at least provide the IPMI credentials. But having done that, there's another part of the process that I haven't talked about today, which is node introspection. This uses the ironic-inspector service, which has been developed closely with the Ironic community, and it uses a similar process to the deployment: it boots a special RAM disk, runs a bunch of introspection tests, and then pushes the data back, which is used to populate things like the amount of RAM recorded in Ironic. So the answer is yes in terms of introspection, but you do need to manually input the inventory of nodes. Dmitry here is one of the main developers on the ironic-inspector project, and thanks for the clarification: it sounds like discovery will be a future feature, so that's something to keep an eye out for. Any more questions?

Do you do any kind of fan-out to keep the Glance images and the PXE boots from saturating nodes? Is that part of the orchestration you do?

I'm not sure whether Ironic will put in a random factor; it's not something we do within the TripleO code itself, but Dmitry may be able to answer.

It doesn't cascade? When you're booting 100 or 200 nodes, from a certain number of images and a certain number of PXE nodes, you can overwhelm them unless you do some kind of cascading; is there any clever way to not overwhelm the source nodes?
Yes, thanks. You mean overloading the network with PXE requests, or overloading the Glance service? So first of all, we're using iPXE by default, which is HTTP-based, so it's not as bad as TFTP-based PXE; that part of the process isn't very interesting beyond that. As for the deploy process itself, there are two options in Ironic. You can deploy every node directly from Glance, which is essentially not from Glance but from a Swift temp URL, and you can scale Swift pretty well. Or there's the second option, which is the default for TripleO, where each node is exposed as an iSCSI target and is deployed from the undercloud. The former option, which is not the default, probably scales much better; I know some people are using it. We support both; we just don't pre-configure it.

I mean, in my experience, the way people tend to want to deploy is to start off with a relatively small deployment and then scale out. But you're right: if you wanted to deploy hundreds or thousands of compute nodes at once, there are several interfaces to the TripleO Heat templates that allow you to override custom configuration, and you might do something like having a script that runs on all the nodes and waits for a random amount of time. So, in addition to the scalability configuration Dmitry mentioned, there are ways you could handle it if you had a reason to deploy a very large environment in one go.

Okay, so only a few minutes left; any more questions? Yeah, so the question was: if the undercloud node fails, will the overcloud be impacted? The answer is no, it will be fine, although if you were in the middle of doing a deployment you would probably end up in an incomplete state. We've got documented procedures for backing up the undercloud node, and if you had some kind of disaster you would restore everything from backups, including the database contents, and then you would be able to continue managing the cloud which is currently deployed. There's no requirement for the undercloud to be continually running whilst the overcloud is deployed, so providing you haven't got an in-progress action, you can take the undercloud down for maintenance, perhaps if you need to upgrade the undercloud to a new version; that's perfectly fine. You just need to schedule an outage window, make sure no one is doing anything in terms of configuring the overcloud, and then you can take it down and everything will keep running absolutely fine.

So, by default they are, but they don't have to be. In case anyone didn't hear, the question was: is the image deployed on the nodes the same for all the different node types? You can specify a different image per role, so for example the OpenStack controllers versus the OpenStack computes. You are expected to use the same image within a given group, so if you had completely different architectures of hardware within, say, your compute group, that could be a problem; we don't currently support mixing totally different architectures. But if you had a requirement for a different image between, say, the controller nodes and the computes, that would be perfectly fine. By default we just build one image which contains everything.

Any more questions at all? Probably the last one, and then we'll quickly see whether the demo has finished. Is it possible, before an overcloud deploy, to associate an Ironic node with a hostname? So as to say this MAC address, or this IPMI node, is overcloud controller 0 or 1. Is this possible?
Yeah, it is. I've actually got a documentation patch up at the moment for the TripleO docs which hasn't yet landed. There are two different ways of achieving that, depending on how much control you require. You can tag a subset of the nodes in Ironic to say "this is a controller", and then you can guarantee it's always going to be a controller, perhaps because you know your controller nodes have more or less memory, or your Ceph nodes have a particular disk configuration; that's data that can be derived from the node introspection process, and we've got some tools that give you matching rules, so you can have some automatically applied tags. But I think your question is more granular: if you want to say this node will always be controller 0, the way we do that is to assign a capability to each node in Ironic, which might be node:controller-0, and then use Nova scheduler hints to basically force Nova to always pick that node. We've got a documentation patch up for that at the moment; in the future there might be a more direct way of doing it, but Nova scheduler hints provide a workable solution at this point.

So we've only got a couple of minutes left. I'm going to have a quick look and see whether the demo has progressed enough to be any more interesting... unfortunately, I think it may be running too slowly. Okay, well, I'm sorry, but it looks like the demo is going to take a bit too long to run due to overloading my poor laptop, and I think we're actually out of time, so we'll leave it at that. Thank you for listening, and I hope that was useful.

[After the talk] It's going to finish now, isn't it? On my test machine at home it takes about eight minutes to deploy. We've got the same issue with laptops: it's just not easy to install the undercloud and the overcloud if you have 12 gigs of memory; you need a few gigs for the undercloud at a minimum, and then more for the overcloud. Yeah, this one is 16 gigs and it's only just enough; at home I've got a 32-gig box and it works a bit more easily, but we're looking at ways of trying to reduce the RAM requirement.
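For reference, the tagging approach described in that last Q&A answer might look roughly like this; the exact capability string and the Heat parameter shown are assumptions based on the mechanism described, not quotes from the talk:

    # Give a specific Ironic node a capability that identifies it
    ironic node-update <node-uuid> add properties/capabilities='node:controller-0'

    # Then feed matching scheduler hints into the deployment, for example via
    # a Heat environment file, so Nova will only place controller 0 there:
    #
    #   parameter_defaults:
    #     ControllerSchedulerHints:
    #       'capabilities:node': 'controller-%index%'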