OK, first off, I'd like to thank the OpenStack Foundation. This is our third year attending this event, and we hope there will be many more events like this in the future. It's really a great place to learn, network, and advance innovation in the industry as a whole. I'd also like to thank Susan. Susan is a great example of how SevOne has partnered with some of the largest service providers in the world, and how we take requirements from our service providers and use them to drive the innovation and feature set of our products. I think Susan has already covered a pretty unique example of how SevOne's logging solution can be applied to help manage an OpenStack-based infrastructure. I just want to pull the conversation back and talk a little more holistically about how SevOne manages OpenStack, which is really based on a two-pillar approach. Our first pillar, OpenStack Assurance, basically allows us to make sure that OpenStack as an application is functioning properly and that OpenStack can instantiate workloads as requested by end users, and I think Susan gave a pretty good example of how the assurance functionality works in the case of logs. We also have an approach called OpenStack Performance, whose responsibility is to manage the workloads that are created by OpenStack. We really believe that it's through this two-pillar approach that both our service providers and their customers can ultimately assure a high-quality experience, and I think that's what all our customers are trying to achieve as they migrate off their physical platforms to their virtual platforms. The ultimate goal is for it to be invisible to the end user. I did want to talk about one additional example, and I don't have any slides, so I'm just going to talk through it.
This is a scenario we've seen at multiple customer accounts, and we believe it's a byproduct of this rush toward commodity or white-box hardware. Certainly, we've seen capex savings by doing this, but as we move away from proprietary or purpose-built hardware, we've also seen somewhat of a loss of reliability, and this "compute blade down" condition is a very common scenario that we've seen, and one we built our service assurance solution around detecting and reacting to. The particular example we've seen in the field, and Susan actually showed the impact of it in one of her slides, has to do with a hard drive array controller failure. When the array controller fails, the compute blade basically goes out of service. So how does SevOne's solution help you manage this type of condition? First off, through our unique and industry-leading data collection methodology, we've integrated with the OpenStack platform through almost every available interface in order to collect both symptoms and root cause. As an example of some of the symptoms we'd collect here: we integrate with the Nova API and take an alert when availability to the compute blade is lost. We integrate with Ceilometer and Gnocchi data, so we understand when the workloads running on that compute blade go down. We manage the SNMP agent on the compute blade itself. We also manage ICMP reachability to the network interface on the blade. And in this scenario where the compute blade is lost, we'd effectively take an alert or an alarm on all four of those symptom conditions. We then allow you to visualize this data through a combination of dashboards, status maps, and event browsers. Where we really provide value is in allowing you to organize and correlate this data to separate the symptoms from the root cause. In this case, the ultimate root cause was identified by a trap coming out of the out-of-band interface on the compute blade saying the hard drive array controller had failed.
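To make the correlation idea concrete, here is a minimal sketch of grouping alarms by compute blade and separating the four symptom alarms from the out-of-band root-cause trap. The `Alarm` record, field names, and blade name are hypothetical illustrations, not the actual SevOne data model.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    source: str       # which integration raised it (Nova API, SNMP, ICMP, ...)
    blade: str        # compute blade the alarm relates to
    message: str
    is_oob_trap: bool # True if it arrived via the out-of-band interface

def correlate(alarms):
    """Group alarms per blade; treat the out-of-band trap as root cause
    and everything else as a symptom of that root cause."""
    by_blade = {}
    for a in alarms:
        entry = by_blade.setdefault(a.blade, {"root_cause": None, "symptoms": []})
        if a.is_oob_trap:
            entry["root_cause"] = a
        else:
            entry["symptoms"].append(a)
    return by_blade

# The four symptoms plus the root-cause trap from the scenario above.
alarms = [
    Alarm("nova-api", "blade-07", "compute service unavailable", False),
    Alarm("ceilometer", "blade-07", "workload metrics stopped", False),
    Alarm("snmp", "blade-07", "agent unreachable", False),
    Alarm("icmp", "blade-07", "ping loss to blade interface", False),
    Alarm("oob", "blade-07", "hard drive array controller failed", True),
]
result = correlate(alarms)
print(result["blade-07"]["root_cause"].message)  # the single root cause
print(len(result["blade-07"]["symptoms"]))       # the four symptom alarms
```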
Now from a customer perspective, we were able to reduce the mean time to identification to under five minutes. There was not a lot of fumbling and searching; when a compute blade went down, they knew almost immediately what the root cause was. We also helped reduce MTTR. In this case, they actually had to send a physical body to the site to replace the hardware, and we were able to expedite that dispatch as soon as possible. But where we believe the future lies is in closing this loop with automation. I'm sure you've heard a lot about closed-loop automation, and we think this is a prime example where SevOne can help drive the maturity of the solution. What we're striving toward is to actually self-heal this compute blade, short of sending a body out to replace the hardware, which took almost 12 hours considering we had to wait for a maintenance window for the activity to be performed. SevOne can work within the OpenStack API framework and within the customer's orchestration platform to identify which instances were running on the compute blade that went down, issue API calls to re-instantiate those instances on a new compute blade, and then push out configuration through the orchestrator to let the application at the VNF layer know where the new instances were located. In this manner, with this kind of mature closed-loop approach, hopefully one day we can prevent things like geographic site failovers, which were actually required in this particular case in order to keep service reliable for the customer. So with that, we appreciate your time here. If you want to hear more about closed-loop automation and some of the advanced features SevOne is working on in our service assurance solution, we are at booth B27 in the marketplace. I hope to see you there, and thank you for your time.
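The recovery flow described above can be sketched roughly as follows, assuming a python-novaclient-style interface (`servers.list` with a host filter, then `servers.evacuate` to rebuild each instance elsewhere). This is an illustration only: the stub classes, host names, and instance IDs are made up, a real deployment would use an authenticated Nova client, and the final step of pushing configuration to the VNF layer through the orchestrator is left out.

```python
def heal_compute_blade(nova, failed_host, target_host=None):
    """Re-instantiate every instance that was running on a failed blade."""
    # 1. Ask Nova which instances were scheduled on the failed blade.
    victims = nova.servers.list(
        search_opts={"host": failed_host, "all_tenants": 1})
    # 2. Evacuate each one; with target_host=None the scheduler
    #    picks a healthy compute blade.
    for server in victims:
        nova.servers.evacuate(server, host=target_host)
    # 3. Return the affected instance IDs so the orchestrator can
    #    push updated configuration out to the VNF layer.
    return [s.id for s in victims]

# --- Stub objects so the sketch runs without a live cloud ---
class _StubServer:
    def __init__(self, sid, host):
        self.id, self.host = sid, host

class _StubServerManager:
    """Stand-in for nova.servers with the two calls used above."""
    def __init__(self, servers):
        self._servers = servers
        self.evacuated = []
    def list(self, search_opts):
        return [s for s in self._servers if s.host == search_opts["host"]]
    def evacuate(self, server, host=None):
        self.evacuated.append(server.id)

class _StubNova:
    def __init__(self, servers):
        self.servers = _StubServerManager(servers)

nova = _StubNova([_StubServer("vm-1", "blade-07"),
                  _StubServer("vm-2", "blade-07"),
                  _StubServer("vm-3", "blade-02")])
healed = heal_compute_blade(nova, "blade-07")
print(healed)  # ['vm-1', 'vm-2']
```

Only the two instances on the failed blade are touched; the instance on the healthy blade is left alone.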