Welcome, and thanks for joining us today. I'm Paul Feigli, a director of technology at AT&T. Joining me today are two members of my team: Mike O'Connor, a senior systems engineer, and Tony Morello, a principal systems engineer. Today we're going to be talking about our experience with hardware certification; both Tony and Mike have extensive experience certifying solutions for public and private clouds.

We've been in a period of remarkable transformation within the telecommunications space. Network demand is skyrocketing: we're up to 137 petabytes of data a day crossing our network. Video traffic has grown substantially; it grew 75% last year, driven by smartphones. And when you think about it, the smartphone we all carry was really invented only about 10 years ago. Apple introduced the first iPhone in June, 10 years ago, and since that point AT&T mobile data traffic is up 250,000%. And the trend continues.

So network demand is skyrocketing. Fantastic times for carriers? Oh, wait a second. Prices are dropping. They're dropping like a rock. These are very transformational times for carriers, because to meet the falling industry prices, we can't continue to do things the way we used to. With all the demand that's occurring, we cannot afford to keep buying the exact same hardware and doing things the same way. So AT&T is making a very major effort to pivot from proprietary solutions to open source solutions. Using OpenStack in the AT&T Integrated Cloud (AIC) is one of our key initiatives. Network function virtualization is another key initiative, because we really have to go from purpose-built equipment to open source, white-box, commodity products. As we work through this process, AT&T is still making remarkable investments: we're investing over $22 billion in capital a year. So it's not that we're not investing, but to meet the soaring demand and the dropping prices, we have to do things radically differently.

I'd like to give you two simple examples of how much prices have dropped. I remember when calling long distance across the country could easily cost a dollar a minute. Now, with everyone's phone plans, it's virtually free to place a long-distance call anywhere in the US. Just a year ago, many carriers charged $15 a gigabyte if you exceeded your data limits. Now with AT&T, on a family plan with four phones at only $40 a phone, you get 22 GB of data; if you go over that, you just get throttled rather than charged. That's an indication of how dramatically prices have been dropping and why a radical transformation is required. So it's exciting times for consumers, with all the new services and all the demand for data that's emerging. It's also a transformational time for carriers, because we can't continue to do what we used to do.

I'd like to describe a little of what we're moving to with OpenStack in the virtualized environment. In central offices, where we had dedicated, specialized network equipment, we're moving to virtualized network functions on AIC. In the data center space, where we had large complexes of VMware or other proprietary virtualization solutions, we're also moving those to AIC. It's a very strategic focus within AT&T, and from what I described on the first slide, you can understand why: our whole business is transforming.
We really need to get these enterprise and central office workloads to the cloud and leverage software-defined networking to be successful and continue to thrive as a company. AT&T has had very aggressive network virtualization goals. In 2015, we had a goal of 5%, and we actually achieved 5.7%. Last year, we had a goal of 30%, and we virtualized 34%. So you can see we're reaching a real tipping point in the traffic. This year our goal is 55%, and we're on a path to 75%. The reason we're going for 75% is that we're not expecting 100% of the traffic to be virtualized; we expect some of the legacy traffic will be retired in place. So we're not trying to move 100% of our traffic to the cloud, but we have a very strategic focus driven by very compelling business needs: we can't keep up with the demand, keep buying that equipment, and stay in business.

Based upon our planned workloads, AT&T has a very unique cloud configuration. We're one of the largest OpenStack deployments in the world. Domestically, we have more than 74 zones, and internationally we have more than eight. This year, we're more than doubling the number of compute nodes in those locations. We're looking for very dynamic, flexible workload placement, because we're not bright enough to know what the next hot thing is going to be. So one of the key things we need is a virtualized infrastructure that can be quickly reconfigured, so we can leverage what we have to meet rapidly evolving consumer demands. We believe very strongly that you cannot reconfigure things fast enough manually, and that's the reason we contributed our ECOMP platform to ONAP. ONAP provides a command-and-control structure for rapid deployments and rapid reconfigurations of the underlying network infrastructure.

Our workloads vary dramatically. We have many small locations in central offices for workloads that require very low latency. When you think about placing a phone call, you don't like long delays back and forth in the middle of your communication. But if you're watching a video, you don't really notice whether there's a five-second buffer as it's downloading. So streaming, where you're not sensitive to the latency of two people talking interactively, can run very well in our larger locations, and we can scale up and scale down very effectively.

One of our key goals is to be able to run any workload in any AIC location. For many people in the audience, that might sound like a relatively simple goal. But we're a telecommunications company focused on security, so we have a lot of VLANs, a lot of firewalls, and a lot of other configurations, and when we move workloads around, there are a lot of network changes that need to be made. We do not run AIC on a flat infrastructure where any node can talk to any node; we follow best-in-class security, which means workloads are very isolated. So moving workloads around and reconfiguring the networks and firewalls is a non-trivial problem.

As we went through this hardware experience, we found that OpenStack hardware requires integration. We had some leaders who said, you know, open source and OpenStack are very widely used in the industry; you can just put these things together and use them. We tried that, and it didn't quite work to our expectations.
I guess we shouldn't have been surprised, because even when we were buying proprietary solutions, we always found we needed to certify, test, and validate that we really met our business needs. So this is not something new; it just reinforced that, despite all the changes, we still needed to do integration on our hardware. We have some unique workloads: we're putting the core of our business on OpenStack. From that standpoint, we may have higher expectations for reliability and failover than other people in the industry. So what we're describing is what we found necessary for our workloads. If you're doing a standard IT workload and there's no real impact if you're down for a few hours, you may find you can skip some of our steps; but for providing high-reliability services to AT&T customers, we needed to certify and validate the hardware configurations.

We also found very wide variation in performance and response times between different pieces of hardware. An extreme example was volume creation: for one vendor using spinning disk, it took about 100 to 120 seconds. If you're dealing with IT workloads where you only create volumes once a month or once a year, who cares? But in our planned usage, we expected to routinely reconfigure workloads and redeploy them between locations, and for some of our scenarios it wasn't one volume that needed to be created, it might be 20 or 30. Under those circumstances, that time was totally unacceptable, and a different vendor was chosen, using flash arrays instead of spinning disks. So we were also changing technologies, and we saw radical differences in performance. What I'm really trying to emphasize is: know what business need you're trying to solve, because your business need will determine whether the hardware configurations and response times really meet your requirements. All the hardware we were using really did work with OpenStack; it just did not meet our needs. It may have met IT workload needs, it may have met student computing needs, but it did not meet the expectations for very rapid configuration in AIC.

Okay, next up is Tony Morello, and he's going to be talking about how our learnings were applied and some of our certification experiences.

Okay, thank you, Paul. So to address some of the areas of concern that Paul referred to earlier, we created a team that we kind of refer to as the AIC hardware integration team. The goal of this team is to allow us to keep up with the constantly changing hardware that we introduce into AIC. This team is responsible for validating the new hardware identified for AIC and ensuring that the hardware interoperates seamlessly with the target version of OpenStack. Each OpenStack environment is unique and has its own characteristics, so ensuring compatibility between the hardware and software is critical. And if you think you don't need certification testing because you're dealing with top-shelf vendors, think again: what we found, as Paul mentioned earlier, is that even with top-shelf vendors who say they've certified with OpenStack, you still need to go in there and test your own use cases. Testing software and hardware is not a new concept, and historically we have tested hardware and software in many different scenarios.
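As a concrete illustration of the volume-creation timing Paul called out, a check like that can be scripted in a few lines. Here is a minimal sketch, assuming the openstacksdk client and an illustrative clouds.yaml entry named `mycloud` (not our actual tooling):

```python
# Minimal sketch (not AT&T tooling): time how long Cinder takes to bring
# a volume to "available", the metric that varied from seconds to minutes
# between vendors. Assumes openstacksdk and an illustrative clouds.yaml
# entry named "mycloud".
import time
import openstack

conn = openstack.connect(cloud="mycloud")

start = time.monotonic()
vol = conn.block_storage.create_volume(size=100, name="cert-timing-test")
# Poll until the volume is usable (or lands in "error"); 300 s ceiling.
conn.block_storage.wait_for_status(
    vol, status="available", failures=["error"], interval=2, wait=300
)
print(f"volume available in {time.monotonic() - start:.1f}s")

conn.block_storage.delete_volume(vol)  # clean up after the measurement
```

Run against two backends, a loop like this makes the spinning-disk versus flash gap obvious without any vendor benchmark in the middle.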
And we've leveraged the learnings from that testing for this new team. We've tested hardware and software with VMware. We've tested base operating systems such as Windows and Linux on new hardware. And we've tested many, many software and hardware configurations as we dealt with hosted customers, both internal and external. New hardware means new drivers, and that's true across vendors, and even across different versions from the same vendor. So the takeaway is that any piece of hardware you put into your environment, you need to test.

In our case, AT&T is a very large company, as you know, and there are many policies and procedures that have to be adhered to before a piece of hardware can go from the identification stage to production. That translates to a very long cycle time, and even just getting the required approvals takes significant time. So the idea behind this team was simple: we wanted to shorten the cycle time from when the hardware is identified to when we can get that hardware into production. And you want to get the hardware into production as soon as possible to take advantage of all the improvements that come with each iteration of the hardware. Each iteration offers enhancements in performance, functionality, and scalability, and these translate to increased workload capacity, savings in power consumption and operating costs, and increased manageability. So the key output from our team is to trigger placement of the order early in the cycle; once the order is placed, in parallel we can hand the hardware off to the subsequent development teams, who do their testing and build the automation to help us deploy the new hardware into production. Note that there are additional outputs from this team as well. We provide configuration, installation, and component information that can be referred to by developers and management, and we also provide some high-level performance testing so we can see how vendor A performs compared to vendor B. So, useful stuff.

When a new piece of hardware is identified, we have a team of engineers at the enterprise level who do an excellent job of testing it with a base operating system, for example Linux. We take the handoff from that enterprise team and add the OpenStack layer on top. Our main focus is to introduce a new piece of server hardware as a compute node and then validate the storage arrays that will ultimately connect to those compute nodes. These compute nodes are 1U and 2U servers, again from top-shelf vendors. We also concentrate on network adapters as we move from 10 Gb to 25 Gb to 40 Gb and beyond, which allows us to better take advantage of the high-performance virtual network functions in our environment. When it comes to storage arrays, we look at arrays that use both legacy hard disk technology and newer flash technology.

One thing we hear quite often is: when can we swap in generic white-box hardware in place of top-shelf vendor hardware? We found that compute is close, but the hardware from different vendors will inevitably be different and unique. Certainly there are BIOS settings and firmware that need to be accounted for, but even beyond that, there are plenty of deltas that need to be taken into account.
For example, on vendor A's hardware, our PXE boot process might need to boot off of embedded NIC 1, whereas on vendor B it might be embedded NIC 3. Certainly these are things that can be handled in software, but the point is that unless you identify these differences, your deployment scripts can fail. So again, the key point is that all hardware really needs to be tested. It's impossible for vendors to test every scenario their hardware will be used in, so it's really up to you to test your specific use cases on that hardware.

I'll give you an example of what we ran into. We ordered some storage arrays, and the storage arrays were supposed to come with hard drives with a 6 Gb/s transfer rate. Well, sometime between placing the order and the order shipping, those drives were no longer available, so the vendor swapped in drives with a 12 Gb/s transfer rate. Everybody said, great, we're going to get faster, better-performing drives. But guess what happened? When we got these storage units into production, everything failed. It turned out that our Linux kernel was a little older and didn't support the newer technology in the drives. So we ended up with a bunch of storage arrays that we had to find new homes for.

The complexity of your environment is also tied to performance, which can differ from vendor benchmarks; those are most likely sunny-day scenarios. Not only can your performance numbers be different, they can be significantly lower. That doesn't necessarily mean there's a problem, but by incorporating the overhead of your application, you get a better idea of the performance you'll actually see in production. Another thing we noticed is that the installation and configuration steps provided by the vendor may not be for your specific implementation. We have seen scenarios where, for example, MPIO and iSCSI instructions were written for a generic Linux implementation and not our specific one, and these deviations all take time to troubleshoot. So again, each environment is unique, and you have to treat it that way.

We still find that many folks think you can just grab a piece of hardware, plug it in, and it's going to work. So then the question is, well, what is there to test? In the past, as I said before, we focused on the base OS on bare metal, and we tested that all the core functionality existed and performed as expected. This included creating NIC bonds, testing NIC failover, et cetera. But by putting OpenStack on there, you've added another layer of complexity. So now we still do the same tests, but with OpenStack installed: we generate Heat stacks and we pull cables to test high availability. We try to leverage whatever open source tools are available first, but if we can't find a tool that fits our needs, we'll develop it in-house so it meets our specific use case. That gives us greater flexibility to target specific areas and to enhance the scripts as we find new and better ways to test. This includes scripts we've created to test the common OpenStack API endpoints, as well as scripts that spin up Heat stacks so we can do a high-level performance test. Just note that our testing is not really to determine maximums, but to see what kind of performance we can expect in a certain period of time.
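To give a flavor of those endpoint scripts, here is a minimal smoke-test sketch, assuming openstacksdk; the service list and the `mycloud` cloud name are illustrative, not our actual scripts:

```python
# Minimal sketch (illustrative, not our actual scripts): touch each core
# OpenStack service endpoint and report pass/fail. Assumes openstacksdk
# and a clouds.yaml entry named "mycloud".
import openstack

conn = openstack.connect(cloud="mycloud")

checks = {
    "nova (compute)":    lambda: list(conn.compute.flavors()),
    "cinder (volume)":   lambda: list(conn.block_storage.volumes()),
    "neutron (network)": lambda: list(conn.network.networks()),
    "glance (image)":    lambda: list(conn.image.images()),
}

for name, call in checks.items():
    try:
        call()  # any auth or API failure raises and counts as a FAIL
        print(f"PASS {name}")
    except Exception as exc:
        print(f"FAIL {name}: {exc}")
```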
And to talk about these scripts in more detail, I'm going to introduce Mike O'Connor. Thanks, Tony. So you may have your IT manager or your CTO come to you and say, hey, what's this OpenStack all about? We need to start certifying this new application on our OpenStack instance. For a conventional hardware team, that might be a little intimidating at first: trying to understand what Nova is, where your Cinder volume service is, and all these new services that are going to be introduced into your OpenStack environment. You might not have a lot of experience with an OpenStack deployment. So for us, we started by setting up some DevStack environments. We followed the OpenStack instructions on the website to really get an understanding of how OpenStack works, how the APIs talk to each other, and where the log files are. And make sure you turn on debug mode to see the tracebacks and see what's going on, so you really understand what's happening behind the scenes and not just what's visible from the Horizon UI or the CLI. It's very important to understand that.

You might not even be able to rely on your vendors for support all the time. In the past, you might have had a scenario where you just call up a vendor, like VMware, and say, hey, I've got this issue, can we get a patch or a new executable to fix it? You'd get the patch, apply it in a couple of weeks, and you're up and running. From the hardware point of view, I think certification teams now need to get involved: go out to GitHub, look at the driver files, see what's actually changing under the covers and which feature sets are being implemented in a particular Cinder driver or in the Nova libvirt code. That really requires a mindset change for a hardware certification team.

On the screen now, you can see some of the high-level testing items that we complete. We were able to leverage a lot of the knowledge we've used in past certifications, but it helped a lot to introduce Heat, as Tony mentioned, and use Heat stacks for stressing our environment to make sure everything is working properly. We also validate failover: we spin up a Heat stack, make sure it's running, see volumes and instances being created, and then we go and pull cables on a storage array and check whether that Heat stack keeps going. Are REST calls still being made to the storage API? Are volumes still being created and attached, and is the iSCSI traffic traversing the appropriate network? When we pull those cables, we want to make sure everything fails over properly; if it doesn't, we likely have an issue with the Cinder driver, and we usually work with our vendors to resolve those issues.

We also needed a way to test a lot of our hardware with some new tools, because not everybody on our team was a developer or had in-depth scripting knowledge. So we wrote some Python scripts, driven by YAML configuration files, that run our test cases against our hardware. The output goes into a CSV file, and then we can make comparisons. We're just trying to measure times: how long it takes to create an instance, how long it takes to attach a volume, or things like live migrations, host aggregates, and anything from Cinder retypes to manage/unmanage operations.
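A minimal sketch of what such a YAML-driven timing harness can look like, assuming openstacksdk and PyYAML; the YAML schema, file names, and cloud name here are illustrative, not our actual ones:

```python
# Minimal sketch of a YAML-driven timing harness: read test cases from a
# YAML file, time instance creation, and append the results to a CSV.
# Assumes openstacksdk and PyYAML; the YAML schema is illustrative, e.g.:
#   cases:
#     - {name: small, flavor: m1.small, image: ubuntu-16.04, network: test-net}
import csv
import time
import openstack
import yaml

conn = openstack.connect(cloud="mycloud")

with open("testcases.yaml") as fh:
    cases = yaml.safe_load(fh)["cases"]

with open("results.csv", "a", newline="") as fh:
    out = csv.writer(fh)
    for case in cases:
        flavor = conn.compute.find_flavor(case["flavor"])
        image = conn.compute.find_image(case["image"])
        net = conn.network.find_network(case["network"])

        start = time.monotonic()
        server = conn.compute.create_server(
            name=f"cert-{case['name']}",
            flavor_id=flavor.id,
            image_id=image.id,
            networks=[{"uuid": net.id}],
        )
        conn.compute.wait_for_server(server, wait=600)  # block until ACTIVE
        out.writerow([case["name"], f"{time.monotonic() - start:.1f}"])
```

The appeal of this shape is exactly what's described above: the YAML is the part anyone on the team edits, and the CSV output makes vendor-to-vendor comparisons a spreadsheet exercise.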
The point is that we're able to do this effectively and efficiently because we're using these YAML files, which anybody can go into; they're easy to read and quick to modify. When we first started out, we had to do an inventory of the OpenStack APIs on developer.openstack.org to get a full list of all the APIs we'd be testing. But one thing to take into account is that the command line interface will often vary from what's actually in the documentation. So we always turn on the debug flag: when you're making your CLI call, you can see the actual curl request that's being made. We take that back, interrogate the response, and make sure we see a pass or a fail during our testing. So again, I'd really recommend using that debug feature when you're doing anything from the CLI. You'll sometimes see a lot of variance between what's documented for a Cinder driver and what actually happens; we've seen differences with manage/unmanage and retype operations versus what's listed on developer.openstack.org.

On our next slide: it's important to set your test cases for what you're trying to achieve with your environment. For our lab, we were trying to run a Heat stack that would execute for a 60-minute timeframe; we wanted to see how many instances of various flavor sizes we could create, and how many volumes we could attach to a particular instance, during those 60 minutes. When we first started, we quickly identified a lot of restrictions (I shouldn't say restrictions, but low default values within OpenStack) that we had to modify for our use case to be able to complete the 60-minute run. This includes anything from the nova.conf and cinder.conf files (the RPC response timeouts, a lot of the HTTP retries) to the base OS: the multipath packages and iSCSI, making sure the blacklists are set appropriately in the multipath configuration, and that you have appropriate timeouts and round-robin or whatever queuing policies you've defined for your MPIO. So it's important to identify those values and set them properly to achieve your goal.

We quickly found we were hitting limits with MySQL connections, so we'd have to bump those up for particular storage arrays; or we'd have to reconfigure multipath because we hit file descriptor limits, so we'd have to update how many volumes we could physically attach to a particular KVM host. These are the kinds of things we ran into during testing that we had to modify to meet our goal of a 60-minute Heat stack run. So definitely do your homework: if you're working with a particular vendor, make sure you have any patches they recommend applied to Nova, Cinder, and iSCSI, or whatever type of storage you're using. And also go upstream and see what's going on; see if there are any patches you should be pulling in, cherry-picking, or back-porting to the older version of OpenStack you might be using. Another thing we found is that sometimes you get conflicting values: you might have a storage vendor that says, hey, set your RPC response timeout to X, but then your OS vendor says, no, no, don't do that, set it to this.
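Because these tunables live in several files and the advice can conflict, it helps to audit them in one place. A minimal sketch follows, using the stock nova.conf and cinder.conf paths; the expected values are placeholders you'd replace with whatever your own testing establishes:

```python
# Minimal sketch: audit a few of the tunables mentioned above in one
# place. Paths are the stock locations; the "expected" values are
# placeholders, not recommendations.
import configparser

EXPECTED = {
    "/etc/nova/nova.conf":     {("DEFAULT", "rpc_response_timeout"): "120"},
    "/etc/cinder/cinder.conf": {("DEFAULT", "rpc_response_timeout"): "120"},
}

for path, settings in EXPECTED.items():
    # strict=False / interpolation=None so real-world conf files
    # (duplicate keys, '%' in connection strings) do not raise.
    cfg = configparser.ConfigParser(strict=False, interpolation=None)
    cfg.read(path)
    for (section, option), want in settings.items():
        have = cfg.get(section, option, fallback="<unset>")
        mark = "OK  " if have == want else "DIFF"
        print(f"{mark} {path} [{section}] {option}={have} (expected {want})")
```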
When you see conflicting values like that, you don't want to leave the timeout open too long, because you can run into out-of-memory exceptions: you end up with way too many connections open, all waiting for a response, and the RPC timeout just gets all fumbled up. So again, it's important to test your environment, find your use cases, and analyze the settings in your environment to make sure you're not running into issues there.

Another thing we did: we were doing a lot of manual work on our team, going out, spinning up a Heat stack, and installing packages manually, whatever packages we were using to validate an environment. And we would do some synthetic I/O testing just to confirm that the plumbing in our environment was working properly and the configuration files were good. So we built an in-house web app that enables us to do this. It goes out and creates a Heat stack: we built a Node.js REST API that deploys the Heat stack, spins up however many instances you like, and attaches the volumes, then kicks off your storage performance testing, or, if there's a particular application you want to test, installs those packages and brings the data back automatically. That lets us start to get an understanding of how things are working. Again, we're not using this to identify the maximums of the storage array, because there are so many variations in storage testing; if your array has deduplication or other advanced features you're looking to test, it really takes the storage specialist who does that all the time to get a clear understanding of maximum throughput for that environment. We're just trying to make sure the configuration is working properly and that we didn't break anything by introducing that new storage array.

So hopefully you can see my charts here. Here's an example of volume attach times between two storage vendors; I think it's roughly a 30% difference between the two. This is the type of information we're trying to gather: how quickly these storage arrays can attach volumes, and also detach them. Another thing is that we want to make sure the Heat stack can be deleted and abandoned properly, because we noticed issues where, after a Heat stack gets deleted, everything might look fine in the Horizon UI, but you go in and you still have multipath or iSCSI connections in an orphaned state. If you don't clean them up properly, you run into issues when you try to create the next Heat stack, and then you have further issues with your testing. So it's good to go in and manually check those things, like the iSCSI connections. This also helps give us an idea, if there's a change in firmware and we have to re-evaluate a storage array, of whether there's any variation from previous runs. We can compare attach times, and we can see whether a particular patch we pulled in, again for MPIO or iSCSI, was an issue and identify whether it's slowing down our attach times. I'm going to hand it off to Tony again, and he's going to talk about some of our OpenStack environments. Thanks, Mike.
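A minimal sketch of the orphan-session check just described, assuming a Linux compute node with open-iscsi and multipath-tools installed (run as root after a stack delete):

```python
# Minimal sketch: after a Heat stack delete, flag leftover iSCSI sessions
# or multipath maps that would poison the next test run. Assumes a Linux
# compute node with open-iscsi and multipath-tools; run as root.
import subprocess

def run(cmd):
    # stdout only; both tools print nothing to stdout when nothing remains
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

sessions = run(["iscsiadm", "-m", "session"])
maps = run(["multipath", "-ll"])

if sessions:
    print("WARNING: orphaned iSCSI sessions:\n" + sessions)
if maps:
    print("WARNING: leftover multipath maps:\n" + maps)
if not sessions and not maps:
    print("clean: no orphaned iSCSI sessions or multipath maps")
```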
So, another technique that we found quite useful is to test in a vanilla OpenStack environment in addition to our Mirantis Fuel-deployed OpenStack environment. We found this can help us identify whether an issue is inherent to Linux or OpenStack, or whether we introduced it in our more complex configuration. Anyone who's had to troubleshoot an issue in a multi-layered environment such as OpenStack will admit it can be quite challenging. For example, we ran a test in our vanilla environment and in our Fuel-deployed Mirantis environment, and we noticed the vanilla environment was significantly outperforming the Mirantis environment. We might not have even seen this issue, but by being able to draw the comparison between the two environments, we isolated it to HAProxy. And certainly in our environment, with high availability being so important, it was very important that we found this issue and made the configuration changes to HAProxy to fix it. Some people might think this two-lab testing technique is overkill, but I mention it so you can keep it in your back pocket, because there are going to be scenarios where it's useful to you.

So now we're going to talk about some issues that we've uncovered in our testing. What we found is that the issues can be unknown issues, issues that are undocumented, or issues that just have not been shared with anyone. Mike?

Yeah, thanks, Tony. So when we were doing our testing, we ran into problems with some of the storage arrays we were testing where, again, going back to what I said before, we'd create a Heat stack, make sure it's running, go to the array, pull the primary NIC (e0), and see whether it fails over properly while the Heat stack is still being created. The fact that we would see volume creation errors and the Heat stack would fail was obviously a big concern for us. So we went back and worked with the vendor to overcome that issue. Another issue we saw was with a particular storage array that would traverse the same physical network for both data and storage. Obviously we're very security-minded at AT&T, and we want to make sure we always have separate networks for data and for storage. So we came across this issue and, again, worked with the vendor to resolve it. But this is why, again, it's just important to test these environments.

Yeah, and that's an interesting one, because even after we did our testing, we still found an issue with it. What happened was, when this storage array was rolled out into production, because management and data traffic were traversing the same uplink, the MTU size on the switch was still set to the default of 1500 and jumbo frames were never enabled. Right, so that led to performance issues with the application, and it just causes more headaches for the operations folks. Right; in a typical scenario, that issue probably never would have happened, because the team downstream would have been aware that jumbo frames had to be enabled on the switch the data was going through. That's right.
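An end-to-end check would have caught that MTU mismatch. A minimal sketch, assuming Linux ping and a 9000-byte MTU path; the target address is illustrative:

```python
# Minimal sketch: verify jumbo frames end to end by pinging with the
# don't-fragment bit set. 8972 bytes of ICMP payload plus 20 (IP) and
# 8 (ICMP) header bytes exercises a full 9000-byte MTU.
import subprocess

TARGET = "192.0.2.10"  # illustrative storage-array address

result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", TARGET],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print(f"jumbo frames OK to {TARGET}")
else:
    print(f"jumbo frames FAIL to {TARGET}: check switch MTU / jumbo config")
```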
I guess the takeaway from that, and I keep repeating it over and over, is that you really need to test everything. This might be straightforward for a lot of folks, but definitely, when you're starting out, go to the OpenStack support matrix for Cinder and see which features a particular Cinder driver supports in the particular release of OpenStack you're running. The feature sets definitely vary between releases of OpenStack, so if you're paying for a certain feature on an array, you want to make sure your OpenStack instance can accommodate it. So go check that Cinder support matrix, along with Canonical's Ubuntu hardware certification site, to see whether your particular kernel and operating system versions are supported on the hardware you're trying to run. Right, and again, we've seen that if you're running a slightly older version of the software and you bring in a new piece of hardware, there's a good chance that new hardware just will not be supported properly. It might be supported, but you're not going to be able to take advantage of all the features. So as Mike said, it really is in your best interest to go out there and check these compatibility matrices.

And last, I'd like to remind people that AT&T is a proud member of the Large Contributing OpenStack Operators (LCOO) team. It's a working group where we're really trying to improve the OpenStack environment for everyone. An exciting thing with OpenStack is that the Cinder team actually has third-party certifications for Cinder drivers. But what we've experienced, and what you've heard from Mike and Tony on this stage, is that those are focused at the most basic level of Cinder certification: the certification suite just meets the most basic criteria of "does it work, can you bring it up?" Their initial certification was not focused on whether you can handle all the failure and reliability scenarios that large operators, and I think even large enterprises, are going to be very interested in. So we're interested in working with others, whether in LCOO, the Cinder team, or any other forum, to help define additional tests, or maybe even an additional certification level, that build on the basics but validate the reliability and failover characteristics that AT&T and many others in this industry are really looking for when they're buying more expensive storage arrays.

So once again, thank you very much for coming. We have a few minutes for any questions people may have. Thank you.

Just a quick question: is there any chance you guys will open-source the web reporting tool around the Heat stacks, and in particular the FIO reporting? I think you mentioned that you do some FIO tests and report the results back to the UI. I guess Rally can replace some of the other stuff, but I don't think Rally does any in-depth FIO testing like that.

Yeah, you know, it's something we just came up with earlier this year, so we're still buttoning up all the little fixes in that tool and going through some testing. But we have talked about it, and it's something we'll probably end up doing in the near future.
You know, we did look at Tempest and Rally, but I don't know that they're great for stressing a full cloud infrastructure. We're really focused on testing a particular single server and really stressing that particular server with however many volumes. That's why that tool came about, and it's helped us out quite a bit.

Okay, so when you say single server, do you mean you've deployed an all-in-one stack on that server?

No, we still have an entire OpenStack infrastructure, with multiple controllers, Cinder running, Horizon, Keystone, but our main goal is to test that singular piece of hardware as a compute node. That's where the tool helps: we can focus in on that particular node.

So when you're doing your timings of instance launch, are you basically using an aggregate or something else to isolate launching onto that single compute node?

Yeah, well, that actually gets defined via our Heat template: when we spin it up, we make sure we're picking the particular compute host we want via the Heat template. The Heat template gets pulled from our repository, the parameters from the web app get injected into it, and then, like I said, the instances get spun up and the packages get pulled down to the instances. This also gives us the ability to test the various images we have within OpenStack, so we can test a new Ubuntu or CentOS or Red Hat as well. And we don't have to install an agent or anything; it all happens through that REST API.

That clarifies things, thank you. Sure. Any other questions? If not, thank you very much for coming, and we'll be available afterwards if you have any questions you want to ask. Thank you.