Hello everyone, thank you for joining me today, and good morning. Welcome to Repeatable Benchmarking of OpenStack Architectures. I'm sure a lot of you look at those words and think, "I know most of those words": you know what OpenStack is, you know what repeatable means, and benchmarking and architectures in general. But it is kind of a weird sentence to throw together, and there are a lot of vague meanings behind it. What architectures? What are we benchmarking? How are we making it repeatable? I plan on diving into all of that. I've subtitled this "getting to know your cloud," or getting to know the cloud you're going to stand up, but what we're really talking about today is performance benchmarks, which is itself an equally redundant title, because benchmarking already implies performance. But I'll dig down into each of those words and what I mean when I say repeatable benchmarking of OpenStack architectures.

First, a little bit about me. I'm Marco Ceppi. I've been working in the industry for quite a while. I'm not actually an OpenStack developer; I'm not even an OpenStack admin, or at least I wasn't until very recently. What I am is an OpenStack user: I consume OpenStack on a daily basis. But mostly what I like doing is benchmarking. I love performance. I love the whole concept of finding out whether what I have is better than what I had previously, or better than what someone else has. And we do a lot of benchmarking, in tech and outside of tech and everywhere.

So I guess the first thing to cover is exactly what benchmarking is. I know it seems silly to go over benchmarking, but in reality it covers a lot of different facets, and at the end of the day it's actually very hard to do, despite how simple it may seem. You run something, you get a number, you have a benchmark. But that number is worthless unless you have something to compare it to, and in order to have something to compare it to, you have to be able to reliably say that this benchmark ran the way I expected it to, and that I've run it against this other architecture, which is maybe a mutation of the same architecture, or an evolution of it.

If you look at the different types of benchmarking, probably the most common one everyone in this room is familiar with is benchmarking hardware: running something like Google's PerfKit or the Phoronix Test Suite against hardware. That's a standard suite of benchmarks that does not really change, run against underlying hardware that does change, whether it's a virtual machine, a physical machine, or something else entirely. That facet of benchmarking is essentially a profile of a machine: how this machine performs under this specific task. The task itself doesn't mutate much between runs; it's the constant you measure against.

But there's benchmarking across all different facets of life. Sticking with tech for a second, take video games: whenever a hardware vendor produces a new video card, the first thing they do is show a benchmark.
"This is my video card running this game at this many frames per second." The game is the constant, and frames per second are what every gamer wants more of. That's a benchmark: a constant (the game) run against a hardware platform and compared to other hardware platforms. Even out in the real world you have this. The automotive industry will always try to sell you on a benchmark: here is my zero-to-sixty time, which is essentially showing how fast the car accelerates. The constant is the zero-to-sixty measurement, and the variable is how quickly the car covers it. Those are repeatable benchmarks, because one constant flows through all of them, and as long as you maintain that constant, you can do comparisons. One car may have far more horsepower but a different torque output, another may have a different gear ratio, but the constant is how fast it goes from zero to sixty. It's the same with hardware, and the same with anything else.

But then we start digging into a new realm of benchmarking. I can benchmark machines, I can benchmark hardware, because I have a constant to measure: this is exactly what I'm running, and I can measure it against any piece of hardware. What happens when the thing under test isn't a piece of hardware but a living deployment? How do I benchmark a series of services working together? How do I benchmark an entire workload? A workload could be something as small and minuscule as a single web app serving static HTML, or it could be a very complex living organism like OpenStack, which is comprised of several large interconnected components, with varying underlying scale, plugins, architectures, CPU types, machine types, and instance sizes. That's actually quite hard to model. How do I know, when I run a benchmark, that I can compare reliably and repeatedly across these different facets? How do I know how the OpenStack cloud I just benchmarked compares to this other person's cloud? Where do I even start defining a constant? Is it the actual benchmark I ran? The parameters I supplied for it? The cloud types? That's why benchmarking starts to become really hard: as the complexity, size, and scale of the workloads we're modeling grow, it's very hard to pick out how to do this reliably and repeatedly and still be able to make comparisons. How do I know I'm actually better than I was yesterday, or better than my competitor will be tomorrow? How do I measure that?

From a user standpoint, from a software developer standpoint, I love benchmarking because it gives you the ability to tune, tweak, check performance, and tune again. I have this workload I'm benchmarking; I can make modifications and changes, scale components, swap components out, modify the underlying hardware, change how things connect, modify the networking, and then rerun the test and verify that performance has actually improved over what I had before. That's partly having good benchmarks, but it's also partly repeatability.
So ultimately, what I've been interested in is how to model benchmarks: how to model this whole concept of running and repeating benchmarks and then validating things as I progress through time, whether I'm benchmarking against myself from an earlier point, against someone else's cloud, or against any other permutation, like a production cloud versus a proof-of-concept cloud I stood up. The modeling of benchmarks becomes an intriguing story.

With that, I want to dive into a small story. I submitted a few talks to ODS. I've never been to ODS before; it's my first time, and it's been fantastic so far. The talks have been engaging and the discussions going around the ecosystem are awesome. I know OpenStack; I'm aware of what it does, we use it inside the company, I'm very familiar with the components, I've spun up instances, I have workloads running on OpenStack on our private hardware, I've used HP Cloud and other public OpenStack clouds that are coming out, and for the most part I love it as an ecosystem. But I submitted talks because I wanted to talk about benchmarking and performance. There are a lot of great talks about where OpenStack is going and what these individual components do, and when I go to those talks I see people saying, "well, this is the setup that we use," and I'm interested, because I just saw someone speak about something similar with a different Cinder plugin, or some other change or modification. Or people talk about doing code reviews, and I want to know: do we get performance regressions through code reviews? So benchmarking is a very interesting thing for me, from a user and consumer standpoint, for making sure things stay high quality going forward.

Not gonna lie, I also definitely wanted to come hang out here at ODS and talk about performance and benchmarking. When I got accepted I was very excited. I had really only submitted two talks: this one, which I'm giving now, and another one, benchmarking workloads on top of OpenStack. That one is similar in that it's also benchmarking, but at the end of the day it's a completely different topic, because I would have had just one OpenStack, and I would have shown how you can tweak and tune the underlying OpenStack components to get more performance out of the applications and services running on top of your cloud.

Yesterday afternoon I happened to be sitting around, figured I would check the schedule, and realized a critical mistake: it was this talk that had been accepted, not the other one I had spent the last several months planning around. This was slightly problematic, because I had exactly one cloud set up. I became an OpenStack admin; it was quite fun. And suddenly I had to show benchmarking of OpenStack components. Well, I know the components, I know them pretty well, and I know the tools built around the stack, so I figured I could probably tackle this. The first thing I did was grab a co-worker of mine, who happens to be here, and say, "please help me out a little bit." We got in contact with our internal services team that manages our infrastructure and I said, "hey, I need a couple of physical machines, I need to set up a bunch of clouds, and I need to do it before tomorrow." So, yeah, exactly: no sweat. I stayed through most of the evening at the bar working through this process, and at the end of the day I was able to spin up about seven clouds.
I spun up, spun down, made some changes, and spun up some more clouds again. And it's interesting, because while this is not the key point of my talk, I want to talk about how I was able to do this and why it matters for things like benchmarking. The keynote speaker on Monday said that OpenStack is generally not that easy to set up, and I didn't want to just do a DevStack; I wanted real OpenStack, the kind you'd find on bare metal, on hardware, with real underlying clouds. In the process of getting these clouds set up, I realized that modeling benchmarks is actually a solved problem, speaking of things we've seen before. If you've been to an ODS prior to this one and you've seen Mark Shuttleworth's keynote, you've seen him talk about this: it's Juju. We've talked about it as a service orchestration tool, but really what it does is help you model things. It's a modeling tool: you can model things, and you can execute those models. It gives you reliable and repeatable patterns. Using Juju, I was able to set up a bunch of clouds for this talk today, and I was then able to do benchmarking on top of them in a repeatable and reliable way.

I want to talk about how that benchmarking looks, but as I start using this verbiage about the language of benchmarking and how we've been doing modeling, I want to make sure I show, just real briefly, exactly what Juju is, so I don't get too far down into the weeds. Juju is a simple modeling tool. It lets you model a service, which is essentially a set of scripts that run; model units, the scale of that service, meaning how many machines need to run to accomplish that one task (Juju will even do things like leader election for you); and model relations between services, how those services communicate. Using all of these tools, I was able to do repeatable benchmarking, from about three p.m. yesterday until about ten p.m. at the bar.
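To make that verbiage concrete before the demos, here is roughly what modeling looks like from the Juju command line. This is just a sketch: the mysql and wordpress charms are stand-in examples, and the backup action is hypothetical.

    # Model a service: deploy a charm, then scale it to three units
    juju deploy mysql
    juju add-unit -n 2 mysql

    # Model a relation: declare that two services communicate, and
    # Juju exchanges the connection details between them
    juju deploy wordpress
    juju add-relation wordpress mysql

    # Run a strongly typed task (an action) against one unit, then
    # collect its output later by UUID
    juju action do mysql/0 backup    # "backup" is a hypothetical action
    juju action fetch <action-uuid>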
In that time I was able to stand up a bunch of clouds, benchmark them, tear them down, and re-benchmark them, without really having to do much physical groundwork: I just modeled what I wanted, executed it, and came back to it. At a higher level, this is essentially what you get: units that build up a service, a group of peer units that together accomplish one task, and the ability to relate things to each other. Services expose configuration, and they expose actions, which are the ability to run strongly typed tasks. If you've ever had to SSH into a machine and run a rake task, or SSH into a machine running Rally and invoke Rally by hand, that's the kind of thing you can model within Juju. Then there's storage and networking, so you can take advantage of the underlying cloud, whether that's another OpenStack cloud you're standing OpenStack on top of, or bare metal, or something else. Using this model, I was able to do all of these things around benchmarking, and that's where we're going to dive for the rest of this talk: demos. Live demos, because I love live demos as much as I love benchmarking.

Before I jump in, though, I want to briefly describe the architectures we've set up, so that when I start running these benchmarks you can see what we're doing. We've set up five clouds for this demonstration. One is a huge cloud, the kind you'd expect to see in production: 23 machines running an entire suite of OpenStack services, from all the key components out to Heat and Ceilometer, everything you'd need for a robust catalog of OpenStack services. That one is actually running on top of another OpenStack cloud, so we're doing OpenStack on OpenStack, because I don't have 23 physical machines sitting around and our infrastructure team was not going to lend me that many machines on this short notice.

I also had access to six physical machines, and I stood up two clouds on those. One has a single nova-compute instance, running on a POWER8 ppc64el (little-endian) machine. The other is an exact duplicate cloud, except it has three nova-computes running on Intel x86 machines. I did this because the POWER8 machine has quite a lot of power behind it, something like 40 or 48 cores and 100-something gigs of RAM, and the combined x86 resources across the three nova-computes come out about the same, just slightly higher in the total pool. So I figured it would be interesting to see the differences from an architecture standpoint, in both senses: as an OpenStack architecture, does one nova-compute on a beefier machine outweigh three nova-computes on slightly less beefy machines? And is there any real difference between x86 and POWER8 in the performance you get? When we talk about architectures, architecture really goes down the chain: it's not only the underlying physical CPU, but also the architecture you've designed and modeled for OpenStack, and testing all those permutations becomes quite intense.

The last two are what I call partial clouds, mini clouds. These are the bare necessities to get an OpenStack instance running, but with a couple of key underlying components switched out.
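As an aside, that compute permutation I just described is basically a one-line difference in the model. A sketch, assuming the standard nova-compute charm and leaving out the rest of the cloud:

    # POWER8 cloud: a single nova-compute unit on the one big machine
    juju deploy nova-compute

    # x86 cloud: the same service, scaled to three units across three
    # smaller machines
    juju deploy -n 3 nova-compute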
In this case, we're benchmarking, or comparing, how MySQL and Percona perform against each other as the data store. When we talk about benchmarking, there are again many different facets, which is what makes it really, really hard: you can benchmark an entire OpenStack workload with something like Rally, and you can benchmark single components, with Rally or otherwise. There are a lot of different ways to model these permutations, and by using Juju as a model I was able to, in the span of about five or six hours (I don't see my co-worker, he must still be sleeping), stand up and start benchmarking clouds, quite successfully, across these permutations.

So, this slide's been up for a little while; let's run through some demos. The first one I want to show you is this one here, the partial OpenStack. Actually, that's not the one I want to show you; this is the one. This is a more or less partial OpenStack: you have your bare necessities. You have your networking with neutron-gateway, excuse me, along with Glance and Keystone, neutron-api, the cloud controller, compute, RabbitMQ for messaging, and percona-cluster backing it all as the SQL store. In addition to this model that Juju has laid out, and this part is not quite OpenStacky, I've deployed this mysql-bench service. What I'm able to do now, using Juju, is execute tasks against it: an action saying "benchmark MySQL," with the parameters I want it benchmarked with, and it will benchmark that service directly and give me back results in a repeatable, reliable, consumable fashion.

When we talk about the different suites of benchmarking tools out there, no matter how robust or great they are, they all have different output formats, and it's quite hard to compare one format to another. The Phoronix Test Suite is a huge suite of tools, and it does produce the same output format for every test you run inside Phoronix, but comparing Phoronix results to CPU benchmarking results from something like PerfKit is, well, not impossible, but not easy, because you have two different languages coming out in the results. By using Juju as a model, I'm able to say: I know how these tools report their results, and I know how to parse those results, so I can create a very simple definition, a description of the result of that benchmark, and use that to compare things. I can run mysql-bench, I can run sysbench, and there are a few other MySQL performance tools, all of which report results differently; I can run all of them and get back not the same data, but the same definition of data, and start doing comparisons on it. That adds to the repeatability of the benchmark.
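How does that normalization work in practice? Inside each benchmark action, I parse the tool's native output and hand it back through Juju under a common set of keys. A rough sketch of the idea follows; the results.* key names are my own convention rather than anything official, the sysbench invocation is abbreviated, and the DB_* variables stand in for the connection details Juju delivers over the relation:

    #!/bin/bash
    # Fragment of a hypothetical sysbench action: run the tool, parse
    # its native output, and report it back in a common shape.
    raw=$(sysbench --test=oltp --mysql-host="$DB_HOST" \
          --mysql-user="$DB_USER" --mysql-password="$DB_PASS" run)

    # sysbench prints e.g. "transactions: 10000 (166.83 per sec.)";
    # pull out the per-second figure
    tps=$(echo "$raw" | awk '/transactions:/ {print $3}' | tr -d '(')

    # action-set is a standard Juju hook tool; these keys are just the
    # convention this charm uses so every tool's results look alike
    action-set results.transactions-per-second="$tps"
    action-set results.units="tps"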
So, in this one I have Percona, and in this one, hello, I have the same mysql-bench service connected to MySQL. I can go ahead and start kicking these off and watch what happens. It will take a little while, probably about ten minutes, so I'm going to kick them off, come back, and talk about other things you can do for benchmarking in the meantime.

From the command line, I've logged in. Juju has a command line and a GUI; we saw the GUI, and here's the command line. Oh, hey, yeah, you probably want to see what I'm doing. Which cloud is this? This is our big cloud; let's go over to this one instead. In my haste to create these, I didn't name them very well, so we're just going to use marco-3 as our environment to check against. Let me show you what we're seeing here. This is again the model from Juju: much like the GUI view, these are all services, and services encapsulate different sizes of scale. From a machine perspective, I can see that I have ten machines; these are virtual machines running on top of an OpenStack cloud already, so OpenStack on OpenStack again, and I can see which components are attached to which machines. In the same way, I can see the services I have, including the mysql-bench service I've added, and the size of the scale: each of them has just one unit underlying it, so nothing is scaled out, there's no HA. And these are the machines behind it: they're Nova instances running in an undercloud that has been set up and that I've been given access to run on top of.

I'm going to go ahead and run an action against the mysql-bench charm: juju action do, and, actually, I don't even know what the name of the action is. Let me check. Ah, sysbench is the action, okay. So I'm going to run the sysbench action. In the model, I haven't done anything special here: I've simply deployed the services and connected them, and Juju takes care of the transportation of information between them. Sysbench is given everything it needs to know to connect to that MySQL instance, so from an admin perspective I simply describe my model, connect the things I wish, and then execute actions against it. I'm going to run this with the default parameters. That queues an event, and if we go back to our status output, I'll use a slightly more condensed view, I can see that mysql-bench, attached to MySQL, is currently executing sysbench. It'll take a little while to run, so we'll come back and check the results later. In the meantime, I'm also going to kick off the same thing for the Percona cloud, which is marco-2. Same thing here, just switching environments: we switch to marco-2, we see the same things running, but this time against percona-cluster. Both of these are now executing benchmarks on separate clouds.

While they're running, I'm going to move on to more interesting things. That was single services, and benchmarking single services is great; it's fun to do, at least I think it is, though I'm probably a little weird in how much I love benchmarking. But it's interesting from the perspective of how a single component fits into the larger scale of my workload. Instead of running something like PerfKit, where it sets up a default install of MySQL and benchmarks it on hardware, this is a MySQL I've deployed, that I've tuned, that I could potentially have scaled, and I'm using that same benchmark concept to generate load against it and get results from my deployment. In a way, this is becoming what I like to call workload benchmarking: it's not benchmarking of the hardware, it's benchmarking of the workload I've defined and deployed, and how that workload reacts to the load I generate against it with the parameters I've selected.
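For reference, that whole exchange on the two clouds boiled down to a handful of commands. A sketch using my environment and unit names; yours would differ, and the tabular format flag is just one way to get the condensed status view:

    # MySQL-backed cloud
    juju switch marco-3
    juju action do mysql-bench/0 sysbench

    # percona-cluster-backed cloud: same model, same action
    juju switch marco-2
    juju action do mysql-bench/0 sysbench

    # A condensed view of what's executing right now
    juju status --format=tabular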
So, moving on: I have this cloud here, which is quite a large cloud, with a lot of components you can see deployed. This is my full-blown OpenStack install, with components ranging from the core things you'd expect to find, the Novas, the Swifts, the Keystones, the Neutrons, down to things like Heat; we have Glance with Ceph backing both object and block storage, Cinder is in there, Ceilometer, plus RabbitMQ, MySQL, and MongoDB for Ceilometer. All of these things are deployed right now. What I can do, from a Juju perspective and from this repeatable benchmarking standpoint, is use a Rally charm.

Last night, as we got into this, I said, wouldn't it be great to benchmark at the whole-OpenStack level? Not just great: we have to do this, because that's what the talk is about. So I sat down. I knew what Rally was; I've seen the Rally pages, and there are great blog posts about using Rally to find the different bottlenecks inside OpenStack, then tuning API messaging so it's a little more robust and a little less noisy, finding the delays and the places where things fall down, so that when you start doing OpenStack at scale, OpenStack responds properly and doesn't get bottlenecked. Rally, from a benchmarking standpoint, is really awesome as a service. But we didn't have any way to deploy Rally; we currently don't have a charm for it, which is that definition of the service model. So I sat down and said, okay, let me just install Rally, and I created this charm, which is everything I needed to model how Rally works in an OpenStack ecosystem. At the end of the day, I simply defined that Rally needs to connect to Keystone, so Keystone can give it credentials to access the cloud, and then I defined a couple of actions, those strongly typed tasks you can repeatedly run against the service. I simply said: do a boot, and a boot-delete. The latest article I'd found on Rally was by James Page, and he uses boot and boot-delete, so I figured that would be a great way to start modeling this. Obviously this isn't every scenario; this is a really early cut of the service, I just wrote it last night. But in doing so, I not only learned more about the architecture of OpenStack, I also found a few things I can already do to improve scenarios that exist in Rally. As soon as this talk finishes, I'm going to start filing a few bugs and work my way into becoming an OpenStack developer, adding more scenarios to Rally that model the things I'm interested in from a user and performance standpoint. For the time being, we'll use this boot-delete: it runs against Nova, spins up a bunch of instances, tears them down, and gives you the time it took to do so.

This is everything I needed to get this running. From an action standpoint, once I've connected this service to Keystone, I get the credentials, and then all I really do, and I'll show you what this looks like (ugh, these colors will not look good), is this: I build a place to put my results, and since this is repeatable running, I put them under the unique action UUID that Juju supplies. Then I create the scenario file, which is just a very straightforward template into which I plug a bunch of values the user can supply when they execute the task.
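A sketch of what that rendering step might look like inside the action, assuming Rally's stock NovaServers.boot_and_delete_server scenario; action-get is the standard Juju hook tool for reading the user-supplied parameters, and the parameter names mirror the ones the charm exposes:

    #!/bin/bash
    # Render the Rally scenario from the action's parameters
    cat > /tmp/boot-and-delete.yaml <<EOF
    NovaServers.boot_and_delete_server:
      - args:
          flavor:
            name: $(action-get flavor)
          image:
            name: $(action-get image)
        runner:
          type: constant
          times: $(action-get times)
          concurrency: $(action-get concurrency)
        context:
          users:
            tenants: $(action-get tenants)
            users_per_tenant: $(action-get users-per-tenant)
    EOF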
Then I also use something from that blog post which I thought was amazing: the ability to disable quotas for a Rally run. I source my authentication information, essentially the novarc file, I create a deployment, I run the Rally task with the scenario file I've just generated in YAML, and then I parse things: I build the report HTML file and do some additional post-processing so that the results Rally provides come back in a way Juju can model, giving me that repeatability across the model. I can see things from Rally, and I could eventually do something similar with Tempest, so everything has the same kind of output format across the stack, across the model.

I don't have this deployed yet, so I'm going to go ahead and deploy it real quick, against the big cloud, which is marco (the shell is just trying to help me; I'm doing silly things). I'm going to juju deploy rally from my local repository here, since I haven't quite submitted it to the charm store yet. This is going to deploy and set up Rally. It takes a few minutes to run, but what's great is that it's a hands-off experience. If you've ever run Rally before, you know you have to have your novarc credentials somewhere; this is where the model really starts to come into play for repeatability and reliability. I simply tell Juju that Rally needs to be related to Keystone, and even though the service isn't running yet, because Juju is a model, I just describe what I want and it executes that model for me on the back end.

So I'm going to run juju status, and we're going to watch Rally quickly set up. Juju is doing the underlying provisioning for me: it's getting me a machine somewhere, setting it up, and running the installation methods for this charm. Again, this is a real rough cut, but this is essentially what it looks like to install Rally; this is exactly what it runs. I copy down the install_rally.sh file that's in the GitHub repo, and wow, that looks terrible, let's add syntax highlighting, I'll use this one. I install Apache2 to serve the results pages. Because of restrictions with our enterprise networking I have to go through a pip mirror, but that will be fixed when I get around to finishing this. Essentially: I pip-install dependencies, I run the install_rally.sh script with a couple of modifications that make it more robust, I run the database recreate step from the wiki page, and I pip-install the tools I need to do benchmarking from a Juju perspective. And that's it; this is everything I do. So instead of having to spin up a machine, SSH into it, run these commands, and walk away every time I want to benchmark, Juju lets me model that once and repeat it everywhere. And this could be a bash script, a Python script, it could be Ansible playbooks or any sort of configuration management tool, which is why I like Juju from a model perspective: it doesn't enforce any real constraints on how I write my stuff. I can continue to operate as willy-nilly as I do, because as a general developer and user I just write a bunch of bash scripts, stick them in places, and rerun them; but from an ops perspective you can use and leverage configuration management, which makes this really nice. And that's essentially it. Then there's this event, which executes whenever I connect to Keystone: Keystone sends me credentials, I write them to a file, and I source that file whenever I want to run a benchmark.
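Stitched together, the run side of the action amounts to something like the following sketch. The paths and the results key are my charm's own layout, the deployment name reuses the action UUID Juju provides, and the quota-disabling step is omitted for brevity:

    #!/bin/bash
    set -e

    # A unique home for this run's artifacts, served by Apache
    RESULTS="/var/www/html/results/$JUJU_ACTION_UUID"
    mkdir -p "$RESULTS"

    # Credentials were written here by the keystone relation hook
    source /home/ubuntu/novarc

    # Register the cloud from the environment and run the scenario
    rally deployment create --fromenv --name "$JUJU_ACTION_UUID"
    rally task start /tmp/boot-and-delete.yaml

    # Human-friendly report plus the raw data
    rally task report --out "$RESULTS/report.html"
    rally task results > "$RESULTS/results.json"

    # Hand a pointer back through the model so runs stay comparable
    action-set results.report="http://$(unit-get public-address)/results/$JUJU_ACTION_UUID/report.html"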
So we go back here, and we can see that it's now running the install over SSH. This takes a little bit of time, so while it's running, I'm going to move over to another cloud. So many clouds. Oh, here it is; let's move this back over here. So it's connected to Keystone now; the GUI has caught up with what I did on the command line. That's that cloud. These two here are pretty interesting: these are my other two clouds, my ppc64el little-endian POWER8 cloud and my x86 cloud. While Rally is installing over there, I'm going to go over here and run some benchmarks, as soon as I fix these screens; I should have known to increase my font size. There we go.

I've got two Rallys already deployed. This one here, up in this tab, is my POWER8 install; this one here is my x86 install. Both are ready to benchmark, at least that's what Juju is telling me. What's great, and what I've done with the Rally charm, is that I don't just want to run Rally with the same defaults, because that's what we can already do today and it's not very interesting; I want to be able to tweak my benchmark so I can simulate different variances of load. In the model I've created, where is it? there you are, I actually define the boot-delete and boot cycles and the parameters you can set for them. I kept it pretty simple: you can set the flavor, the image, the number of tenants, the users per tenant, the number of times, the concurrency, and the networks per tenant. I picked defaults that I thought were sane; I think I may need to tweak them a little more, they're not quite as sane as I imagined, so I'm going to do a few overrides from the command line here and execute a smaller batch, so it completes in less than ten minutes, which is the goal.

Let's see, I'm currently pointed at the ppc64el cloud. Because there's no CirrOS image for POWER8, I'm going to use the trusty ppc64el image, and I'm going to run a bit less than the default: ten times at a concurrency of three. That gives us a quick, simple, repeatable benchmark result to look at. I'll go ahead and run this, and the benchmark has started up there. Now I'll come over to this one and do the same thing, except we'll just use the plain trusty image, since I don't have POWER8 on this cloud, and run it against here. Now we have both of these running; it should take about a moment. That finished a lot faster than I thought; as soon as these run, we'll get results out of them. So: we created the scenario, and we're running the benchmark on both of these.

Let's check what we're doing over here. Our Rally against the big cloud is set up, so I'm going to go ahead and run juju action do and do something real big on this one, just to get some comparative numbers: on rally/0 we'll use the defaults, which is a hundred runs at a concurrency of ten instances, doing boot-and-delete. While this is running, I'm going to pull up nova list for all tenants on the ppc64el machine. You can't really see it very well, but you can kind of watch Rally spin up and dump out instances here, and in a few moments we should be done and able to look at results for this. There we go; we're booting a few more, and these are coming from the Rally benchmark now.
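The overrides themselves are just key=value parameters on the action. Roughly what I ran, with the image names standing in for whatever images your clouds actually carry:

    # POWER8 cloud: no CirrOS build for ppc64el, so boot the Ubuntu
    # trusty ppc64el image, and shrink the run so it finishes quickly
    juju action do rally/0 boot-and-delete image=trusty-ppc64el \
        times=10 concurrency=3

    # x86 cloud: same small run with the plain trusty image
    juju action do rally/0 boot-and-delete image=trusty \
        times=10 concurrency=3

    # Big cloud: the charm defaults, 100 boots at a concurrency of 10
    juju action do rally/0 boot-and-delete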
The charm is essentially exercising everything, and I didn't have to do anything but describe my model and execute it. Once these are done we can look at the results, but let's see how we're doing with our MySQL clouds. Still running; they're going to run a little longer. I know we're coming close to the end of the session, so I want to make sure that if anyone has any questions I can start answering them, and right as the session ends we should have results, and I'll show you how those look. So while these finish up, does anyone have any questions for me so far? I was that thorough? Cool.

So I can go ahead and show you previous results, actually, if that makes sense, because Juju gives you a mechanism to go back and look at everything. Let's see, which environment am I in? There are like twelve. juju switch, then juju action status: I've been running a bunch of benchmarks, as you can see. I'm just going to grab the most recent one; no, that's not the most recent. It does summarize the results very briefly (I'm still working on making the parsing as in-depth as you'd expect), but it gives you things like the overall average number of seconds for the entire combined run, and it also gives you the URL where all the data Rally generated at that time lives. This is something the charm does for you. I can see the report.html; if you've ever run Rally, you're familiar with these, they're amazing little reports that give you all the details of what ran during that duration. The raw JSON results are dumped there as well, so if you want to grab them you can, and the scenario that was built for the run is in there too.

I'm looking forward, over the course of the next couple of months leading up to Tokyo, to hardening this, adding scenarios that I, from a user perspective, see missing in Rally, modeling those in the Rally charm, and then working on getting the Rally charm itself incubated, so that people who are using Juju, or wish to, can use the Rally charm to execute runs against their clouds. There are much more interesting things I could have done with this demo: we could start comparing things like what KVM does versus LXD under nova-compute, or how Hyper-V compares to other nova-compute instances. I don't have the hardware available for that in this demo, but the permutations you can start testing and modeling inside this example become quite immense. And the ability to say, "here is my hardware, stand up a fully functioning cloud in a matter of 20 or 30 minutes, benchmark how it performs, tear it down, and run again" is, from the perspective of a benchmarking user who's just becoming an admin, a very compelling story, which is why this stuff really caught my eye. And the testament of what we did yesterday afternoon shows, I think, that with very limited time and resources you can actually start modeling and solving the problem of OpenStack being too difficult to set up, and then take it a step further, bulletproofing it for when you actually need to go to production, making sure it's as performant as you'd expect.

Let's just real quickly check on these. Did they finish? They finished! So I'll grab these real quick and refresh this page.
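Digging up any previous run, by the way, is just two commands; the UUID is whichever one juju action status shows you:

    # Every action that has been queued or run in this environment
    juju action status

    # Pull the recorded output for one run: the summary values the
    # charm set, plus the URL of the full Rally report
    juju action fetch <action-uuid>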
Since Juju is a model, we're building tools on top of Juju to make that modeling easier. One of the things we're building is lightweight UIs like this one, where you can go and show the results of actions and start drilling down into other components. So this is the POWER8 run, and this will be the ppc64... I mean, this is the Intel one; I kicked this off at probably 18:36. If you look, just briefly, between the two of them, the runs come out within about two seconds of each other. So it's interesting to see that when you have slightly more nova-computes of lesser power, they roughly equal one nova-compute running on a beefier machine, and that the difference between distributing across multiple Novas versus one Nova is actually pretty negligible. So if you have one beefy machine running Nova versus three, well, you don't necessarily get the high availability you want from nova-compute, but you can start modeling whether, instead of scaling out, you should scale up the hardware for your nova-computes. Those are the kinds of interesting overviews you can get just from running these in different permutations.

With that, we have a minute left for any questions. Otherwise, thank you all for your time, and enjoy the rest of the conference.