OK. Good morning, everybody. Welcome to OpenStack infrastructure management with ManageIQ. Hopefully you're all in the right session. I won't keep you too long before lunch. Quickly about myself: I work for Red Hat as the product manager for the downstream product called CloudForms. Hopefully that's the last time I mention the word CloudForms. I was going to put a jar up here to put money in, but I haven't got any money. So if I do say CloudForms again, do hold me to that. I run a blog called CloudFormsNow.com, which... there you go, it's going to be a very difficult presentation. So I do run this blog. What I've been doing is putting up automation and methods and, as you'll learn in this presentation, what ManageIQ is all about. ManageIQ is a cloud management platform. I'm going to get into what that really means, but if you go to that blog you'll find lots of interesting ways you can use ManageIQ to integrate with IP address management systems, ticketing service desks and stuff like that, and that will all make sense very soon. I like to fly radio-controlled helicopters. Anybody else do that here? Cool. Let's have a chat. I'm a rugby coach and I've got quite a few kids for some reason — they're all mine, they're all my family — so they keep me busy as well. Agenda. We're going to talk about the history of ManageIQ. ManageIQ has got a great history to it, and it really cements why ManageIQ is so good and why you're all here. I'm going to talk a bit about community and open source, what it means to ManageIQ, and what ManageIQ actually is. Does anybody not know what ManageIQ is here? Good, so you're in the right session; everybody else, you're going to hear what it's all about again. When it comes to the demonstrations, that's the really cool piece, because I'm going to show you some really good, interesting things that you can do with ManageIQ.
The underlined one is the real keynote thing here. There's lots of cool stuff going on in OpenStack today, but I think what's happening with ManageIQ in its upcoming release in the next few weeks is that you're going to be able to do something with OpenStack that nobody else is doing. Let me position that for you. A lot of people out there are saying that the first kid on the block to get an installer working for OpenStack successfully is going to be crowned king. I think that's old school. What we can do with ManageIQ today — and you're going to see a demonstration of this — is not taking something and installing it, but actually scaling it out. So how do you scale out an OpenStack infrastructure on demand, through alerting, through automation, manually and so on? We're going to show you that as well. There is no summary; that's pretty much where it ends. I'll take some Q&A at the end, hopefully, if we've got some time. So let's try and whisk through this history. We started in 2006 like most things start, proprietary and closed source: ManageIQ Inc was formed. 2008 was the first product release. It was called Enterprise Virtualisation Manager. It was pretty much going after the big virtualisation vendor out there, whose name I'm trying very hard not to mention — again, hold me to that if I do. But obviously, in those very early years, 2008, that's who you went after: supporting and managing their environments. And in fact, some of the biggest vSphere environments in the world are actually managed using the downstream product, better than that vendor can manage them themselves. The proof of the pudding is that even at their own show, as part of the awards ceremony for the virtualisation side of things, we got the finalists award in 2012. In 2013 we added the overcloud tenant-space provider.
I'm going to go into the difference between what an overcloud and an undercloud actually are in the next slide. But we added the overcloud provider to be able to manage your tenants, your instances, your availability zones and so on. We were also working very hard in that year to open source the product and bring out what we had engineered over many years as closed-source proprietary software. After the acquisition of ManageIQ by Red Hat, we open sourced it in 2014 and held our first summit. It was nearly as big as this and nearly as exotic a location: it was in Mahwah, New Jersey. And we had our summits; we're going to be having more summits. There's some interaction going on with the ManageIQ community right now, after this session, so do please try and hunt that down. There's an open day on ManageIQ where you can do some labs and stuff like that this afternoon. And then in 2015, which is actually now, we added the undercloud provider. Does anybody know the difference between undercloud and overcloud? Yeah, good. Also in 2015, only a few weeks ago, we got a CODiE award — the downstream product actually got the CODiE award. Where the previous awards at the aforementioned vendor's show were very much about private cloud, the CODiE award is really interesting and very important for us because it's actually for cloud management platform. CMP is what all of the other vendors are trying to go after, and we won it. So the downstream, obviously, is where I am. The upstream has a huge amount of heritage to it, a lot of history, and a lot of reasons why you should really get involved in ManageIQ. There is very little difference between the two, I can tell you that right now. The upstream is branded ManageIQ, the downstream is branded CloudForms. That's the difference. So let's get back on track with OpenStack. Version support: how have we been playing? You know about the overcloud provider that we introduced in 2013.
We've been doing — there's my little bear saying hello — we've been doing Grizzly since then, and we've been going all the way through. We're going to be supporting Kilo as well, and we're starting to introduce the undercloud capabilities; I'll tell you what the undercloud actually is. So the community itself is made up of a number of people, with different contributors; I'll go on to that in the next slide. We've got seven technical leaders and 35 developers, and we've got huge documentation sets. I don't know what experiences you've had with other open source projects, but ManageIQ has bundles and bundles of documentation covering every aspect of its feature capability. There's lots of blogs out there covering it now. And it's a really vibrant, growing community. We've got a good forum where you can ask for help and so on, and I've got a slide at the end listing those off for you. Even internationalisation is very important for us. So if you speak Japanese — anybody Japanese in the room? Excellent. You guys get the first language after English: in the next release of ManageIQ you get Japanese support for the UI. We also do QE, and each sprint, which lasts about three weeks, averages around about 200 pull requests. In fact, if you look at the feature set that is making the next releases of ManageIQ, you'd wonder how we're pulling it off, there's so much capability. We're only speaking about OpenStack here today, really, but some of the other features coming downstream over the next few months include support for containers and so on. Right, so, contributors. Red Hat's clearly the main contributor at the moment, due to the acquisition of the aforementioned company ManageIQ. But we also have other contributors in the community, like Booz Allen Hamilton with their project Jellyfish, which is a cloud-brokering UI that lets you wrap on top of CloudForms the ability to do projects and costings and so on.
So they've jumped on board and they're producing lots of good stuff for us in the community. Right. For those who don't know what ManageIQ is: that's some of it — a lot of it. Interestingly, containers jumps out here. It's a big word; I didn't tell it to be. It just seems that even the internet knows containers is the topic of the moment. But we support lots and lots of things. We support VMware, Red Hat, Microsoft, Amazon. So we support all the big vendors when it comes to technology areas like vSphere, Red Hat virtualisation, OpenStack, EC2. We're onboarding things like Azure soon. We do SCVMM. So we're managing VMs, instances and containers across multiple vendors and multiple technology levels. We also have lots of interesting capabilities inside the actual ManageIQ community release, such as tagging. Tagging in most communities or products is just the ability to put a tag on something. But how do you actually use it? Is it pervasive or not? In ManageIQ, we can do really, really clever things with a simple capability like tagging. Take Facebook: everybody uses Facebook, everybody has smartphones. If you take a photo with your smartphone, behind the scenes lots of metadata is being attached to that photo. You don't know it, but it is: the GPS coordinates, the date and time stamp, the phone that was used, and so on. When you take that photo and you put it into something like Facebook, it then does signature recognition on it and tries to work out if you've got any friends in it, and if you do, it will try and recognise them for you. So how does that relate to ManageIQ? Well, ManageIQ has a very similar capability — in fact, virtually identical. We're able to go across your environment, signature-recognise everything, and then automatically tag it through policy state management. Why is that important?
Because how many people out there name their servers using obscure names like LDN for London and then T1 for Tier 1 and so on? You do that because those vendors force you down this route of having to name everything in a particular way so you can manage it. With ManageIQ, we say: hey, why don't you name it whatever you like and create a taxonomy that makes sense to you? In other words: ManageIQ, show me all of the Vancouver machines. That's it. That's all I want to know. I want to know everything that's Vancouver, and ManageIQ can go through all of the hosts, the virtual machines, the instances, the resource pools, the vApps — everything — and bring all that data back to you, because you used the tag Vancouver. We have a service catalogue inside ManageIQ that allows you to request new services. Those services are heterogeneous in nature, because that's exactly what ManageIQ brings to the party: because we manage VMware, Red Hat, OpenStack, Amazon and Microsoft infrastructures, you're able to create heterogeneous bundles of applications inside ManageIQ. So it's very common for us to create a service that has a couple of web servers sitting inside EC2 and a bunch of database back ends sitting inside Red Hat virtualisation, and then tie them together with load balancer configs — and we can do all of that inside our service catalogue. We have some really, really cool features around inventory, and in fact that's one of the big differentiators between ManageIQ and all of the other solutions you see out there. I'm sure many of you know some of the solutions from other vendors, and I can tell you right now that all of those solutions started with provisioning. Every single one of them. ManageIQ didn't start with provisioning; it started with inventory, because the most important thing to provisioning is actually inventory.
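The tag-driven, cross-type search just described — one tag spanning hosts, VMs, instances, resource pools and vApps — can be sketched in a few lines of Ruby. To be clear, this is only an illustration of the taxonomy idea; the `Resource` model below is made up and is not ManageIQ's actual object model or API:

```ruby
# Illustrative only: a made-up resource model, not ManageIQ's API.
Resource = Struct.new(:name, :type, :tags)

inventory = [
  Resource.new("web01",  :vm,            ["vancouver", "web"]),
  Resource.new("db01",   :vm,            ["london", "database"]),
  Resource.new("host-3", :host,          ["vancouver"]),
  Resource.new("pool-a", :resource_pool, ["vancouver"]),
]

# "Show me all of the Vancouver machines" -- one tag, every object type.
def tagged(inventory, tag)
  inventory.select { |r| r.tags.include?(tag) }
end

tagged(inventory, "vancouver").each { |r| puts "#{r.type}: #{r.name}" }
```

In the real product the same question is a single tag filter applied across every managed object type, rather than a per-type naming convention you have to police yourself.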
If you have capacity, performance and utilisation data, you can use that data to drive your provisioning. So if you want to provision instances, containers, virtual machines — whatever you want to call them — in a particular fashion, based on things like anti-affinity rules or cap-and-U (capacity and utilisation) data, then you need to target inventory as your primary source first. If you just go straight into provisioning something, you're not going to know where to put it; you're just going to say, go and put it in this cluster and you deal with it. ManageIQ is capable of doing really clever things like saying: I'm provisioning a web server, can you go and put it somewhere where there are no database servers? Again, using tagging. So you can use your tagging to tell ManageIQ: do not put these two items together. How does it actually achieve that? It achieves it through the inventory. It's got really deep capabilities. You may have seen the word fleecing earlier on a slide, and there are some t-shirts we're giving away on the Red Hat booth which also say fleecing. We're able to effectively fleece the workloads that you have in your environment and grab all of that inventory data; we're going to see that a little bit later on. So, with that word cloud spouting lots and lots of things, I urge you to look at it later and see if you can pick out anything that takes your fancy. What is ManageIQ really? Well, commercially it's really a cloud management platform. It brings a number of capabilities to the party, like self-service provisioning. Chargeback is a core discipline of ManageIQ. Because we can do that inventory, and because we concentrate on inventory first and have capacity and utilisation data, our chargeback is far more powerful than what you see in a lot of other solutions out there. A good example of this: if you're going to charge somebody for the allocated amount, that's not very fair.
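As a sketch of that tag-based anti-affinity idea — again with a hypothetical data model, not ManageIQ's placement engine — the decision reduces to filtering candidate hosts by what's already running on them:

```ruby
# Hypothetical model: each host maps to the tags of workloads it runs.
hosts = {
  "host-1" => ["web", "web"],
  "host-2" => ["database"],
  "host-3" => ["database", "web"],
}

# Anti-affinity: place the new VM on a host carrying no workload with
# the avoided tag; return nil when no host qualifies.
def place_with_anti_affinity(hosts, avoid_tag)
  name, _tags = hosts.find { |_n, workload_tags| !workload_tags.include?(avoid_tag) }
  name
end

place_with_anti_affinity(hosts, "database")  # only host-1 has no database workload
```

The point of the talk's argument is that this filter is only answerable at all because the inventory (which workloads, with which tags, on which hosts) was collected first.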
I allocated you 40 gig of disk space; well, I don't want to pay for the 40 gig because I only use 2 gig of it. But you can't do the 2-gig charging unless you've actually got capacity and utilisation inventory, which we have. We've got change management capabilities, so we can do configuration management inventory and then hook into something like Puppet or Chef to do the remediation and the application. We've got a massive amount of orchestration capability: our orchestration supports Ruby, PowerShell, Perl, whatever takes your fancy. You can wrap it up, put it into our orchestration layer, and we can automate it across your environment. So say you've got IT processes that state you must protect domain controllers — Windows domain controllers are not allowed to be cloned at all. Has anybody ever tried to clone a domain controller? You should try it. Try it on your production system and see what happens. I can tell you right now that ManageIQ can put a policy on all your domain controllers and stop them being cloned — in the event that a super admin, who in their wisdom understands everything about virtualisation and nothing about domain controllers, decides to do that task and bring your entire environment down. We can do that through the mixture of inventory, policy state management, and the orchestration layer, which lets us take actions against things: cancel the task, clone a virtual machine, start another one, deploy a container and so on. You'll see here the bottom three stacks. We talked a little bit about virtual infrastructure, which we look after — like Red Hat virtualisation — and the next slide starts to talk about how there's actually a fairly blurry line between virtual infrastructures and clouds. But we also do physical. We've been asked for years and years and years.
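The chargeback arithmetic behind that 40-gig example is trivial once you have the usage data — which is the whole point. A hedged sketch, with an invented rate purely for illustration:

```ruby
RATE_PER_GB = 0.05  # hypothetical currency units per GB per month

# Allocation-based billing charges what was handed out; usage-based
# billing charges only what was actually consumed.
def chargeback(allocated_gb, used_gb, model)
  billable_gb = (model == :allocation) ? allocated_gb : used_gb
  (billable_gb * RATE_PER_GB).round(2)
end

chargeback(40, 2, :allocation)  # => 2.0  (pay for all 40 GB allocated)
chargeback(40, 2, :usage)       # => 0.1  (pay only for the 2 GB used)
```

The usage-based model is only possible when the platform has per-VM utilisation inventory to feed `used_gb`; with allocation data alone you're stuck charging the top line.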
ManageIQ is an amazing product and community of capability, but can we not have that feature set on physical? And we are now bringing it to physical as well, because we understand that consumers out there are not just running virtual workloads; they're running cloud and virtual, and now they're mixing that with physical. Let's move on to this blurry line. Being at OpenStack, I'll use the OpenStack analogy here. If you're a consumer coming into an OpenStack infrastructure, you're seeing pretty much what you see in Amazon, for obvious reasons. You're requesting services, you're tenanted, and you're doing it on demand. You don't see the hosts. You don't see the underlying infrastructure that is actually running that OpenStack cloud. You may have a PaaS layer on there as well — you may have something like OpenShift running all your containers for you on top of OpenStack — but what's underneath that OpenStack? Because at the end of the day there are some computers in a data centre somewhere presenting all that to you, and that is the virtual infrastructure. From an administration point of view, from an operations point of view, OpenStack is a virtual infrastructure: it's a bunch of compute, storage and networking that needs to be managed, allocated and made available so that the veneer of the cloud can be presented to the users. With that understanding, let's look at an example. The common example would be: you have an application. The application runs; you see it on your iPhone, maybe on your laptop. But somewhere in the cloud — deep in that cloud, underneath that virtual infrastructure, through those hosts, into the storage layer, into the aggregates — there is a physical hard disk, usually mechanical unless you've got lots of money and you're buying SSDs. Is there some way to tie the application back to the individual disk spindle, and why would you want to do that?
Well, because disks go down, and you may want to create a report that says: if I pull this disk out now and put a new one in, is there any impact on any applications? What sort of applications are running on this disk spindle? Now, a lot of vendors won't give you any visibility of that at all, and this is really where ManageIQ goes deep into your environment: the more providers you plug in to ManageIQ, the bigger the picture you get of your environment. ManageIQ is very much provider-based, as I've started to describe: you plug in the OpenStack overcloud provider and you now see the tenant space; you plug in the undercloud provider and you now start to see the hosts that are running that tenant space; add the NetApp provider and you start to see the disk storage subsystems that are presented to the virtual infrastructures. Everything's a provider. Using that example of tracing an application to a disk, this is how complicated it gets if we look at containers, because everybody's jumping up and down about containers at the moment. An HR application is sitting there; it's a service, probably running inside a PaaS platform like OpenShift, of course. Now, that service is constructed from a pod or pods, which are container groups, effectively: pods are made of containers, and containers are made from images. But that's going down a route that doesn't actually get me to my answer. I want to know what disk this application is running on, and knowing what Docker image is running that container does not help me understand whereabouts it is in the infrastructure from an operations point of view. So let's go back a little bit. The pod belongs to a Kubernetes node. Well, Kubernetes nodes belong to clusters — again, a bit of a dead end — but a Kubernetes node actually runs on something. It's not a magical piece of software (it's pretty cool, though). It will run on a physical machine, but it could run on a
virtual machine. So let's stay on OpenStack: the Kubernetes node is running on a virtual machine that is running inside OpenStack, which is running on a node, and the nodes belong to regions — or they may belong to host aggregates and cells and so on — but that's still a dead end for us. The node is actually running on a piece of physical tin: there's a physical server actually running the KVM hypervisor presented to OpenStack. Now we're down to two routes. We could go down a networking route and start looking that way, but we're really after storage. That physical server has some storage connected to it, and that storage could be Red Hat Storage, it could be NetApp, it could be whatever — and that storage has arrays and disks and spindles. ManageIQ will give you this complete path from the HR application all the way down to that disk spindle, through a breadcrumb trail, letting you see the capacity and utilisation of each and every one of those elements as you step through. You can automate them, orchestrate them, do actions against them, manage them. So, for instance, with nodes: you can scale out more OpenStack nodes if Kubernetes is running hot. If Kubernetes starts to say, hey, I'm running out of capacity here, I need some more, you can now fire at OpenStack and say: hey, OpenStack, start growing yourself, make some more nodes available to Kubernetes. Right, demo time. I've got a number of demos to show you. I've recorded them just for safety, so they won't go wrong — I can guarantee you that. The first one is about capacity and utilisation, and in this particular case the question is: how much capacity does my environment really have? That's where ManageIQ goes across your environment: it will look across all your nodes, your hosts, your clusters, your resource pools, your vApps, your pods — it sees everything — and give you your capacity. And because we have that capacity and utilisation
story, we can use capacity to do planning. We actually have a capacity planner inside ManageIQ where you can pick up artifacts — I've been careful to call them artifacts because they're VMs, they're instances, they're containers, they're lots of different things — and you can say: hey, if I pick up these things and put them over here, how many can I actually fit in? The question is, who's doing that today? Well, some of the biggest engagements we're working on at the moment are migrations from the aforementioned vendor we don't speak about to OpenStack. We're moving hundreds of thousands of virtual machines from a virtual environment into OpenStack, using ManageIQ in a batch-processing mode. We analyse the source environment, we look at the utilisation of those virtual machines inside their resource pools (there's a tip and a hint as to who it is), and we take them and put them into OpenStack. Now again, earlier I talked about allocation: if it's been allocated a 40-gig disk, do we really want to take 40 gig from that source environment and put it into an OpenStack environment? Absolutely not. That's where ManageIQ is really clever, because it has that utilisation level: we can look at the usage of that virtual machine in the source environment and say, it was allocated a 40-gig disk but it's actually only using 2 gig, so why don't we just double that up and make it 4 gig, to be safe? When we throw it into OpenStack, we know we're using double the amount we need, but not as much as what the vendor wanted in the source environment. So that's capacity; utilisation is the second piece, obviously — how much is it really utilising? And we can see utilisation not just at the overcloud level, like instances, but at the undercloud level, like the hosts — the real hosts running OpenStack underneath. We can give you the CPU, the memory and the disk I/O of those hosts. We know when
those hosts are actually running out of memory, and when they run out of memory we can do something about it. We can raise a ticket in your help desk, we can go and update the CMDB to say there's no space anymore, or we can do something really clever and say, why don't we scale it out? We've got spare compute here; why don't we rob Peter to pay Paul? There's a vSphere environment here, underutilised — let's re-provision some of that into OpenStack. Let's break out of this a second, if it works. So this one's capacity and utilisation. We log in to ManageIQ — this is the login you're presented with — and the landing page will be the initial dashboard. Has everybody downloaded ManageIQ already while you've been in this session? Really cool. OK, so what we're seeing here is a cloud; we're seeing a bunch of instances, so we're in the overcloud area. Here's an instance — it's going to run pretty quick, I'm afraid — and here's the inventory detail. You can see some of the really cool data on the right-hand side, which I'm going to go into later, called SmartState data, and on the left-hand side we've got that breadcrumb trail, the relationships: we see what provider it's in, what disks it's on, what hosts it's on and so on. But I can quite simply click on monitoring utilisation for this instance and, as it's pretty new, I'll just set it to hourly, and it brings back all of the CPU, the disk I/O and the network I/O. So if this particular instance starts running hot, I can see it in ManageIQ, and then with the orchestration pieces of ManageIQ I can do something about it. Disk I/O is running hot? Well, why don't we migrate it to a host where the disk I/O is a little more compatible? And that's where we can do this: we can actually compare a running instance against its undercloud counterpart. Never could you have done this before. You could always look at the CPU, memory, disk and so on of an instance, but now you can actually compare it to the actual running
cluster — the deployment role it's running on — or the actual individual host. By looking at the CPU we can see here that there's a spike in the instance, and that's obviously manifested itself on the actual host as well. So here are our hosts, or nodes, in OpenStack, and we can see that we've got three currently turned on. We're going to select one of the Nova compute nodes. You see the inventory is very, very similar: whether you're looking at instances, VMs or containers, we constantly give you the same look and feel — in this case, for the node. You can see that the node is presenting CPU, memory and network I/O, and under virtual machines — if I just move that out of the way so you can see it — you can see the provisioning rate of virtual machines inside this individual hypervisor running inside OpenStack. So the OpenStack cloud is being supported by an individual hypervisor, and that hypervisor can tell us the number of provisions it's done over a period of time. So we've done instance cap and U, we've done host — or node — cap and U, and now we're looking at the cluster view. We're now looking at the deployment role, or deployment roles, inside OpenStack, and we're looking at the Nova compute one. We select the Nova compute role and do exactly the same thing; now we see an aggregation of all the cap-and-U data for all of my nodes inside OpenStack. You may have separate deployment roles created for different clusters of use — front office, back office, HR apps versus marketing and so on — and you can now choose the right place to provision instances based upon the performance of each cluster. And you can see down in virtual machines that with the deployment rate we've had some spikes where one host was provisioning instances, but at the same time there's another host in this deployment role that's taking some instances away and dropping off. So that's pretty
much cap and U; let's go back to the presentation. So, inventory. We start by speaking about the basic inventory that we get back, and you saw some of that if you were quick to look: the IP address, the flavour — all the basic characteristics of the object we're looking at, whether that's a host or not. If you're looking at hosts, you're going to see the IP address, IPMI data, serial numbers, the vendor — is it a ProLiant or a UCS system or whatever. Then there are the relationships; we've pretty much done the relationships now, haven't we — we can see where all the relationships come in. But the really, really cool stuff is this SmartState. We're able to — and we can do this at an infrastructure level as well, which is why it's important today; it's not all about instances, it's also about the actual hosts — take, in the example of an instance, crack that instance open and fleece all of the information from inside it. What does that actually mean? We can get your users, your groups, your applications, your files. Being predominantly an open source community, you would have thought it's all Linux; we actually do Windows as well, so we can get registry keys. We can mount the SAM account databases inside a Windows virtual machine and fleece all the information from that. We can bring all of that data back into our VMDB — it's like a CMDB, but better — and we can get the contents of files. OK, so there's a couple of examples I want to throw at you to show how this works. Anybody suffer from Heartbleed or Shellshock? You guys get that. Well, within a few hours of all that being announced, ManageIQ put out some policies so you could go and scan your entire environment, whether it was a vSphere environment or a RHEV environment and so on. We could go across all those environments, continuously and holistically, and tell you whether you were actually exposed to Heartbleed or Shellshock. And the really
cool thing here is that it didn't matter whether the virtual machines were on or not. The virtual machines could actually be in a powered-off state; they could even be suspended; they could even be broken — not too broken, but broken enough that they won't power on. To us, we can still read them: as long as we've got that disk file, we can crack it open, traversing all the structures inside it — the partition table, the file system, the registries and so on — and we can tell you whether you've got Heartbleed. Why? Because we can read the RPM database. We can read the RPM database and tell you exactly what versions of Bash you've got installed, and then check that against an assessment. As far as contents are concerned, you may have an IT process that says something like: we're not allowing root logon. OK, that's quite a common thing — no root logon allowed. Well, how do you enforce that, especially in a virtual environment where people are bringing virtual workloads in and out daily? You can't police that all the time — unless you've got ManageIQ, of course. With ManageIQ you can create a policy that says: go across all of my virtual environment, collect the sshd configuration, grab a particular value in there — root logon — and tell me the value of it. If the value is yes, then I've got a problem: I'm non-compliant. If it's no, leave it alone. Now, it's up to you what you do when you find yourself non-compliant. Do you want to just report on it? Do you want to send an email to your boss in PDF format saying here's all the machines that were non-compliant for sshd root logon? Or do you want to migrate them somewhere else, to an environment that's more secure — quarantine them, shut them all down, open tickets in your help desk? You can do all of that, because it's able to fleece at this level, this forensic level; it's able to do pretty much whatever you want in that area.
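The root-logon check described there boils down to parsing a collected sshd_config and flagging the offending value. A minimal sketch, assuming the file contents have already been fleeced from the guest (the policy wiring around it is omitted):

```ruby
# Returns true when the config permits root logon (i.e. non-compliant).
# Note: older OpenSSH releases defaulted to permitting root login when
# the PermitRootLogin directive was absent, which the fallback assumes.
def permits_root_login?(sshd_config)
  sshd_config.each_line do |line|
    key, value = line.strip.split(/\s+/, 2)
    next if key.nil? || key.start_with?("#")   # skip blanks and comments
    return value.to_s.strip.downcase == "yes" if key.casecmp("PermitRootLogin").zero?
  end
  true
end

permits_root_login?("Port 22\nPermitRootLogin no\n")     # compliant
permits_root_login?("# hardened\nPermitRootLogin yes\n") # non-compliant
```

What you then do with a non-compliant result — report, email, quarantine, raise a ticket — is the policy-action side that the talk describes; the check itself is this simple once the file contents are in the VMDB.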
But like I said, this is working at the host level as well, and you saw that earlier if you looked at the cap-and-U video: you saw the services running inside OpenStack — we could see that Neutron was running, and on how many hosts. That's all SmartState data. So let's go and have a look at SmartState working. We log on, and there's our dashboard. The dashboard is made up of widgets; you can show pretty much any data we have in our database in pie charts, graphs, tables and so on. Here's a list of instances in OpenStack — these are real OpenStack instances. We've selected one, and on the right-hand side we can see this SmartState data: power management, a list of users, a list of groups. If I select users, what am I expecting to see here? Well, I've got a count of 20, and there they are, all 20 users. Now, if you work in a secure environment, would you not want to know people were adding user accounts to your builds? Well, ManageIQ would know, because it will see it. So you can now create a policy that says: if there's more than X amount of users, or different applications — and you can see here there's cloud-init and cloud config — because we know every single application inside every single instance that is spun up. Now, instances are meant to be the same as their images, but users are bad: they will add applications to your instances, they will change them. And ManageIQ can see that. ManageIQ will be able to see if a user — a consumer — has gone into an instance and started adding software to it, because we have that data. With five minutes to go, I'm going to cut this short. What we're going to do now is pick up this data and do a drift comparison: we can compare the data between one instance and its image. Imagine in this particular example that we're adding Apache to this instance right now, but the image doesn't have it. By cross-comparing the image with the instance, we can
now create a report that says, hey, all of these virtual machines have got Apache installed when the image does not, for instance. With only five minutes to go, I'm going to skip through; let's get to the scale one, because I think that's what you all want to see.

OK, so we can scale OpenStack infrastructures, and there's a bit of positioning on this. You've got your provider, which is your undercloud provider, and the undercloud provider is going to give you visibility of the physical infrastructure that's making up the OpenStack cloud. Here it is. We can see here on the right-hand side you've got nodes: five. There's five nodes available to the undercloud server. That doesn't necessarily mean you've got five nodes running OpenStack; it just means there's five in the bucket. Now, this is deployment roles. We've got two types of deployment roles in function here: the controller deployment role and the compute deployment role. And as far as compute is concerned, we can see all of the relationship data here: what data centres it's in, how many instances are actually running inside of our deployment role that is called compute, in other words the cluster that is actually OpenStack. And if I click on that, I then get taken to the instances. And whilst it's a very simple example, it's very difficult to do, because you're now tying the live running instances in the overcloud to the underlying hardware infrastructure in the undercloud.

As well as that, down on the right-hand side you've got OpenStack status. So we're now starting to do best-of-breed management, where we're not just normalising all of the data across all of our vendors; we're actually saying, we know this is OpenStack, so we're going to show you the number of services that are there, we're going to show you what's going on inside of Neutron, what's going on inside of Nova. We selected the provider here, and I've turned around and said I want to
scale this provider. And what you should see on screen here is the number of hosts in the pool of five, with the controller and compute nodes currently running one each. So we've got one compute and we've got one controller, and what I've done is I've just changed the figure of the compute nodes to two. So I want to scale manually from one compute node to two compute nodes. And that's where I bring you back to: whoever gets the installer working is going to be crowned king. This is way better. This is taking an existing OpenStack infrastructure and just scaling it out manually with, what, two clicks of a button? Tell me how many you want, then click scale, and ManageIQ is going to speak to the undercloud server, which in the open source world is the RDO director; the downstream is the Enterprise Linux OpenStack one.

So you saw some status messages going there. Here's our list of nodes. Remember, there were five; there was always that magic figure of five, but there were only two running: one was compute, one was controller. Now you can see there's a new one, so there's now a third node. So the time it takes to re-provision a bit of tin physically and install an image is how long it would take to actually scale. It's not this quick, clearly, but you can go into the system and say, right, scale from one to two, and it will speak to the RDO director and scale it.

The other one I wanted to show you was quite simply the automatic scale, because manual scale is pretty cool, but the real cleverness here is actually auto scale. Now, you've seen evidence of ManageIQ all the way through this presentation so far, with its capacity and utilisation capabilities. When you put all of the features and functions in ManageIQ together, you can do this, and you can do it really easily. And this is why I included it: I want to show you how easy it is to take OpenStack manual scaling into an automated fashion. So, in other words, it's got to the end
of the month, we've got credit card processing, let's scale our environment, and it happens automatically. You're not going to be sitting there looking at a clock going, it's the 27th, let's scale; why don't you let the system do it for you? So with automatic scaling, basically you've got your Nova compute roles here, and you can look at your utilisation; we've already seen this before. You get your CPU, your memory, your disk I/O. What happens if that memory is running hot? What happens if the cluster is now starting to run out of memory? Well, we know that, because we can see it. So can we, inside of ManageIQ, create a policy that says if we go above 50% of memory, we want to scale that cluster? And this is how easily you do it. You go and create an alert, and you basically turn around and say, for this cluster, if it goes above 50% of memory utilisation, I want you to run something, and that run something is the scale. Now, in this particular case we're sending an email out of the box; we're also putting it on the timeline, so when you look at the timeline you can see all of these different tasks going on; and then you can actually do the scale itself, and that's the last one, which is highlighted.

With that in mind, I just want to show you the timeline, and then I'm going to take some questions. So there is basically the timeline, and we can see there, there's the scale. And if I click on the actual item in the timeline: on May the 12th, a few days ago, my OpenStack environment automatically scaled out another node. Why? Because I told it to. On the alert, I said if it gets to more than 50% memory utilisation, start scaling. So I'll just go back here. So that was the scale-out: we went from a number of controller and compute nodes, with the RDO director integration as the undercloud provider, and we then added more compute nodes.

So, the packaging of ManageIQ: how easy is it to work with? Quite simply, it gets delivered as a Red Hat Virtualization image and an OpenStack image, so you can run ManageIQ in OpenStack, inside
of Red Hat Virtualization, plus the one we don't mention, and then we've got Hyper-V coming on board, and Amazon. So Amazon will allow you to actually run ManageIQ inside of it as a native appliance, and then obviously Hyper-V. The release cadence for it? Quite simply, we have an obsession with chess: our releases are named after grand chess masters. Anand was first; we are now into Botvinnik, which is literally on RC3 right now, and that's where you can go and get it.

Does anybody have any questions? Because we are now out of time, I'm afraid. A quick one: the policies, can we use them to do other stuff besides auto scaling? Absolutely. Policies can be used to identify vulnerabilities in your environment, to do compliance, marking things as compliant and non-compliant, policies to do auto tagging. You can use policies to stop tasks, so like the domain controller for example, or anti-affinity rules. There's loads of built-in, out-of-the-box actions that you can do with policies. Like, suppose a host goes down, move all the VMs to another host? Absolutely, yes, evacuation. And the key to understanding ManageIQ is that it's about holistic management. So if you're going to go and evacuate a bunch of OpenStack nodes, then what about the clusters that are running Kubernetes containers at the same time? ManageIQ will take you through all of these steps and migrate them off to other hosts for you.

OK, I think we're out of time. There is the ManageIQ open day; I urge you, if you've got more questions, come and see us down there. You can download it, you can talk about it, you can read the documentation, and you can even contribute via GitHub: www.github.com
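The automatic-scaling demo above reduces to a simple loop: watch a utilisation metric, and when it crosses the alert threshold, fire the configured actions (email, timeline event, scale-out). As a rough sketch of that decision logic only (the function names, the one-node-at-a-time step and the pool cap are illustrative assumptions; ManageIQ expresses this as an alert plus actions, not as user code):

```python
# Illustrative sketch of the auto-scale alert from the demo: if cluster
# memory utilisation exceeds 50%, scale out one compute node, capped at
# the undercloud's pool size (five nodes in the demo).

MEMORY_THRESHOLD = 0.50  # 50% memory utilisation, as in the demo

def desired_compute_nodes(current_nodes: int, memory_utilisation: float,
                          max_nodes: int = 5) -> int:
    """Return the compute node count we should scale to."""
    if memory_utilisation > MEMORY_THRESHOLD and current_nodes < max_nodes:
        return current_nodes + 1  # scale out by one node
    return current_nodes

def on_alert(current_nodes: int, memory_utilisation: float) -> list:
    """Actions the alert fires, mirroring the demo: email, timeline, scale."""
    actions = ["send_email", "add_timeline_event"]
    if desired_compute_nodes(current_nodes, memory_utilisation) > current_nodes:
        # The actual scaling is delegated to the undercloud (RDO director).
        actions.append("scale_compute")
    return actions
```

In the demo the same logic ran with one compute node at over 50% memory, which is why the timeline showed the email, the timeline event, and the scale-out to a second node.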