OK, welcome everybody. This is a session giving an overview of cloud audit support for OpenStack. My name's Brad Topol. I'm a Distinguished Engineer at IBM. And I've got several friends up here with me. Gordon Chung is a Ceilometer core contributor. We have Matt Rutkowski, who does some of our standards work; he's the DMTF co-chair for our Cloud Audit Group. And we've got Rob Basham, who's on our cloud system software side. So here's a quick overview of what we're going to cover: why cloud auditing is important; then some standards work that we lead in this area, called CADF, which we've gotten integrated into OpenStack, so we'll cover that. We're then going to show a demo of how this stuff can be utilized today, benefiting from a standardized format. And then we're going to finish off with some future directions. So why is cloud auditing important? Raise your hand if you already feel cloud auditing is important. All right, that's a lot of hands. And did you hear Glenn Ferguson today? He was at the keynote talking about Wells Fargo, and he mentioned compliance, compliance, compliance. How you get compliance is by having good auditing mechanisms. And so there are a lot of reasons why auditing is important. If you look at enterprise customers, they expect to be able to audit things. It's just table stakes; it needs to be there. And if you look at your customers, you need to prove to them that unauthorized people aren't getting access to their resources and their data. And finally, of course, as Glenn mentioned, there's compliance regulation and all these really scary terms where people go to jail: HIPAA and FISMA and all kinds of things. So you really need to look out for that. So to build the trust with customers to want to use these clouds, we've got to be able to do trust and verify. We've got to be able to show validation.
And that all comes down to having good audit records, and having the ability for automated tools to leverage those audit records and do great things. And if you look over here at OpenStack, there's so much going on even at the lower layers: usage of resources, authentication, role authorization. And then at the business application layer there are lots of interesting things happening as well. We need a model that supports auditing at all those layers. And things get even more interesting when you start thinking hybrid cloud, right? So now you're running in your private cloud and then you're going to, say, burst up to a public cloud. You really don't want to hear, well, you can keep track of the stuff in your enterprise, but you can't keep track of the stuff up there in the sky, because they're all using totally different auditing formats. That would be bad. So it's critical that for all the things you want to audit, and there's lots of stuff listed here, we have a standard format. And there are a lot of great reasons to have a standard format. Obviously, being able to work in all these different mixed cloud environments is one reason. And this needs to be something that benefits the customers. Customers should just expect that there's a standardized audit format and that they don't get vendor lock-in. They should be able to use one OpenStack environment and the auditing's there, and it works with their tools that absorb the auditing and create a dashboard and what have you. And we're going to show a demo of that. You don't come and say, come to my cloud because I've got better auditing. It's really a place where an open standard plays a role. And from an IBM perspective, I want to talk to you about our strategy for how we do standards. What we do is we have somebody who's influential on the standard, like Matt Rutkowski, who's a vice chair.
And then we have somebody who's influential in the OpenStack world, like Gordon, who's a Ceilometer core. And we get these guys working together. So we bring in the developers, and we've done this for a lot of standards, and we get them throwing tomatoes at the standard: this is what sucks, this is what stinks, this is what's bad. Then the standards teams, after they stop crying and put down all the Kleenex, come back with a new iteration of the standard. And over a while, what happens is the standard becomes incredibly relevant, because it's already been validated by the OpenStack community. So that's really why we look at it this way, and having a standard format makes a lot of sense. To go into more detail on this standard that we've got permeated through OpenStack, which is called CADF, I'm going to bring up Matt to come talk about it. So with all these problems outlined in compliance, even before IBM became heavily involved in OpenStack, we saw this coming. We saw from our customers that they wouldn't move over to cloud unless we provided them some standards to rely upon when they went to cloud, whether it be IBM's cloud or another cloud. And it just so happened that the DMTF and the membership there had a great understanding of managing large data centers. They did management all the way down to low-level devices in large data centers. And they were exploring the cloud space, and how do we carry this forward into cloud. And so we went there, and we started a standard called CADF, the Cloud Auditing Data Federation standard, because this data has to be federated. We understand that people are going to have these models that Brad described, hybrid models. People want to change clouds, or pick the cloud best for their workload application.
And this data needs to be shared and normalized in a way so that no matter what source it comes from, it can be brought together, aggregated, and analyzed to perform things like security intelligence, but also things more advanced than that. We're going to talk later on, with Rob Basham, about some future directions for CADF and OpenStack to do some very interesting things with analysis and intelligence, business intelligence, workload intelligence, things like that, to do corrective actions. So this is not just something that happens when things go wrong. This is to analyze what's going on in your environment to optimize it as well. In terms of work products and resources we've created, we have a specification. When we became involved in OpenStack as part of Havana, we were actually getting ready to release a 1.0 standard. But we were called upon by our product team saying, hey, this CADF stuff looks like a good fit for OpenStack. So we kind of paused the standard, and we learned from our experience implementing CADF in OpenStack and some of the requirements. And after two releases of OpenStack, Havana and Icehouse, of experience with CADF, we are now ready to publish a final CADF specification 1.0, which will be out in June. And in conjunction with that, we've created a profile. So there's actually a draft profile that describes all the things we did in terms of mapping events in OpenStack, from every project from Nova to Keystone, and how they appear in CADF format. We've also created an open source library called pycadf. It's actually bound into oslo.messaging. You'll see the architecture chart thrown up later on, but it's reusable anywhere. Anytime you want to use CADF, even outside OpenStack, you have an open source library to draw upon to create that standard format. Oops, that's the wrong direction. So why is CADF important? CADF's important because it provides an event model. When people think of auditing, they think of logs.
They think of, I'm gonna toss a timestamp here, I'm gonna throw an ID here, I'm gonna put an IP address in this thing. What do you get when you try to combine logs that different people created from different projects? You get spaghetti, you get a big mess. It's not a solvable problem anymore. We need to create better data. How do we do that? We tell people, we give them a framework and a model for supplying the data when they put it into a format, and that's why we created CADF. So we have a conceptual model where we talk about and define things like the actual event, and we tell people how to record the actors that play a role in creating the event. Who initiated it? What was the target? Who is the observer? And that's a key feature, because there's a lot of confusion about the agent that observes the event versus the actual thing the event was against. So we clarify those things in CADF and give prescriptive ways to fill out the data. And we do it in a way that also allows you to be very precise. You can fill out some very basic things if that's all you have, but in the future you can do some very sophisticated things. You can do ISO geolocation information. You can record XYZ coordinate systems. You can do regional capture with ICANN region codes and things like that. We've always had an eye forward to doing things at a national level, as a NIST standard, and at an international level at ISO. In fact, after the June 19th standardization meeting, we plan to go full force at getting CADF adopted by NIST on their list of approved standards for use in audit compliance. And then in the future, looking at ISO SC 38; they're just getting through the vocabulary definitions for cloud and things, but we have many companies looking out for us there to make sure CADF is first and foremost in terms of auditing at an ISO level as well. So what's cool about the CADF event model? If everyone's familiar with crime shows, CSI, we say CADF provides a CSI for clouds.
Long gone are the days, back in the early days of police work, where a police officer goes to the scene of a crime, throws some evidence in a box, and tosses it over the wall. That's like logging today. What they do now is they have a guidebook. They bring in crime scene investigators who know how to fill out the data. They know, when they pick up a piece of evidence, where it goes in the data, so that when somebody down the line gets that data, they know how to make heads or tails of it. And we call it the 7 Ws. It answers the seven questions that investigators, like compliance officers or auditors, need to know. What happened? When did it happen? Who did what? Where was the target of the event? On what, that is, which resource was the object of the action being committed? And from where and to where? What are the hosts involved; are they coming from some location or application to initiate, or going through some application to a target? So all these things are prescribed in CADF; again, you can provide some very basic things or very precise things. If you look at the architecture chart, not to go too deeply, you'll see this in action in Gordon's demonstration, but this shows an instance of what we do in terms of the WSGI framework. In Havana, we actually missed the design summit. So we went to Gordon, who was in Ceilometer, and we figured out how to do this non-invasively. We knew we wanted to store things and log them, and we knew Ceilometer is great at doing notifications and recording and monitoring events. So how do we do this? We decided to design it as a WSGI middleware filter, and we actually have a configuration file. So any component can add this to its pipeline non-invasively, and configure it through a simple configuration file to say precisely how it wants to map its events, if it wants to change our default mappings, which we provide for all components. They can tailor that.
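The 7 Ws Matt lists map onto named fields of a CADF event. As a rough sketch of what that looks like (the field names follow the CADF event model, but the helper function, typeURI values, and sample data here are invented for illustration; in OpenStack the pycadf library builds these for you):

```python
import json
import uuid
from datetime import datetime, timezone

def make_cadf_event(action, outcome, initiator, target, observer):
    """Assemble a minimal CADF activity event answering the 7 Ws."""
    return {
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
        "eventType": "activity",
        "id": str(uuid.uuid4()),                              # unique event ID
        "eventTime": datetime.now(timezone.utc).isoformat(),  # when
        "action": action,                                     # what happened
        "outcome": outcome,                                   # success / failure / pending
        "initiator": initiator,                               # who did it
        "target": target,                                     # on what
        "observer": observer,                                 # who reported it
    }

event = make_cadf_event(
    action="delete",
    outcome="success",
    initiator={"typeURI": "service/security/account/user", "name": "dev1"},
    target={"typeURI": "compute/machine", "name": "instance-0042"},
    observer={"typeURI": "service/compute", "name": "nova"},
)
print(json.dumps(event, indent=2))
```

Because every producer fills the same named slots, a downstream tool can read "who, what, when, on what" out of any event without per-source parsing.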
They can actually change how we do timestamping, how we annotate some of the IDs, things like that. It's a very nice way of doing it. In terms of the bottom part, you can see we can go to the traditional Ceilometer-supported data stores, where the event is stored as metadata. So we actually have people at IBM who use that data out of the database directly. But more importantly, in our demo you'll see we actually use a dispatcher. Again, in Havana we had a different blueprint where we can dispatch certain types of events to a different location. So we can send CADF JSON messages over HTTP to a security intelligence product, which in IBM's case is QRadar. And we can send CADF events directly to an intelligence product to perform customer-defined analysis of the data coming in. So with that, well, actually I'll show you this so you can peruse it later. We'll post the slides, likely on the OpenStack website, under Ceilometer probably, so you can see these things. You can see how we answered the seven Ws, color-coded. But we also have extensibility. So you can actually add tags to your data. You can add additional attributes if you want to, and we tell you exactly where to add them. But the things that are prescriptive are the things that match the seven Ws on the side. So with that, you can see that's the actual kind of data we produced from the demo. And with that, I'll turn it over to Gordon to tell you what we're gonna show. So at IBM, we created this demo of hybrid cloud security intelligence using CADF. What we did was we took a security information and event management tool, in IBM's case QRadar, and we set it up to track and receive events from multiple cloud offerings, including OpenStack.
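The dispatch step described above, CADF JSON over HTTP to a SIEM, boils down to a JSON POST. A minimal stdlib sketch (the endpoint URL and event payload are made up; in practice the Ceilometer dispatcher is configured rather than hand-written):

```python
import json
import urllib.request

def build_cadf_post(event_dict, endpoint):
    """Build an HTTP POST request carrying one CADF event as JSON."""
    body = json.dumps(event_dict).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

def dispatch(event_dict, endpoint):
    """Send the event to an HTTP collector such as a SIEM."""
    with urllib.request.urlopen(build_cadf_post(event_dict, endpoint)) as resp:
        return resp.status

# Build (but don't send) a request for inspection.
req = build_cadf_post({"action": "delete", "outcome": "success"},
                      "http://siem.example.com/cadf/events")
print(req.data)
```

Since the payload is standard CADF, the receiving product needs one parser for every cloud that feeds it.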
So regarding OpenStack specifically, what we did was we took the CADF audit events that are generated by OpenStack, we collected them using Ceilometer, and we dispatched them to QRadar. And now I'm gonna show you a quick video of one of the scenarios. Just to highlight the scenario: in the video, you're gonna see a developer who has permission to deploy applications to their company's cloud, and that person's terminated. And as their final act, the user, a developer with valid credentials, will attempt to destroy the company's application instances in OpenStack. What we did was set up QRadar to detect suspicious activities such as this. So this is a video of a user logging into OpenStack and deleting multiple instances. Every one of these actions, when you trigger it in OpenStack, actually generates CADF events, which are tracked and received by QRadar or whatever security information and event management tool you use. You don't need sound. Yeah, thanks. So yeah, this is just deleting multiple instances, and we're gonna jump over to QRadar. Here's a view of the log activity, and you can see a bunch of logs that we track. The ones we're highlighting right now are delete events. We track and audit over 100 events in OpenStack, and this is just a collection of them. And if we jump over to the dashboard, it gives us an aggregated view of the events from not just OpenStack but also AWS or any other cloud offerings you're tracking, and it calls out some high-interest events that you can set up. You can also set up certain offenses to throw out warnings or trigger actions if certain conditions are met. So in this case, we set up an offense to track the condition where multiple instances are deleted, and you can dive in and see which users triggered these actions and what type of events caused the offense to be triggered. This is QRadar, yeah, but it's our standard.
It works with any OpenStack distribution that's generating CADF. So there's no vendor lock-in; if you have something from Red Hat or somewhere else, you just flip the config switch in the file, it generates the events, and you can send them to our tool or other people's tools. I just wanted to point out here that people think of auditing in terms of things that go wrong, things that fail. In this demo, we're actually looking at things that are working the way they're supposed to on the surface: a user has the correct permission, they have access control to terminate instances, but during some period of time, they know they're gonna get fired and they decide to terminate, in rapid succession, instances that they have full access rights to. So these wouldn't appear as failures; they wouldn't have a big red flag around them. But in terms of intelligence, we can look at successful messages and say, well, if a single user terminates this many instances within a given period of time, that's suspicious. We wanna look at that and flag it. So that's what you see here. pycadf is actually built in as a library to oslo.messaging, so in Icehouse it's built in everywhere. So this audit filter can be activated out of the box; it ships today. Just add it to your middleware pipeline via a config file. Right, yeah, so you can pull in the pycadf library and just build your own events in whatever project you're using. So, talking to this slide again, relating back to the 7 Ws that Matt was talking about: one of the things I noticed while working with the QRadar team is that when I gave them a list of the OpenStack events, they were easily able to map the OpenStack events to their own internal format, just because of the way the CADF model is set up. It answers a lot of the same questions that a security intelligence tool needs answered, like who triggered the event, when it happened, or what target the event was triggered on.
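The "many successful deletes in a short window" rule Gordon describes becomes easy to express once every event carries the same initiator/action/timestamp fields. A toy version of that detection logic (the threshold, window, and event tuples are invented; a real SIEM like QRadar does this with configured offense rules over the incoming CADF stream):

```python
from collections import defaultdict, deque

def flag_rapid_deletes(events, threshold=5, window_secs=300):
    """Return the set of initiators who performed `threshold` or more
    deletes within any `window_secs` span. `events` is an iterable of
    (timestamp, initiator, action) tuples, sorted by timestamp."""
    recent = defaultdict(deque)   # initiator -> timestamps of recent deletes
    offenders = set()
    for ts, user, action in events:
        if action != "delete":
            continue
        window = recent[user]
        window.append(ts)
        # Drop deletes that have fallen out of the sliding window.
        while window and ts - window[0] > window_secs:
            window.popleft()
        if len(window) >= threshold:
            offenders.add(user)
    return offenders

# Five deletes by one user in 40 seconds, plus an unrelated create.
events = [(t * 10, "dev1", "delete") for t in range(5)] + [(60, "admin", "create")]
print(flag_rapid_deletes(events))  # {'dev1'}
```

Note every event here is a success; the suspicion comes from the pattern, not from any failure.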
And we gave them a whole list of events, and they were easily able to map those events into QRadar. This slide just reiterates some of the stuff I was talking about before. When we did the demo, we were tracking not just OpenStack but also Amazon and a bunch of other cloud offerings like VMware, and you can aggregate all those results into one view and pick out the values of high interest to you. And now I'll pass it on to Rob. So I'm gonna talk a little bit about the future and what I see happening. I had a chance to go to Monitorama last week, and there were a lot of ops people there. And one of the things that was commented on is: we're stressed. A lot of the ops people were saying, hey, this is a stressful job. It's hard. Did I miss something on my monitor? So they're watching a screen, and am I sure what this means? It's just not always clear in all cases what to do. So this diagram shows some of the different roles that an ops person needs to take on. They do security; they keep OpenStack going, kind of like a butler. So I've got my soldier and butler here, and all sorts of different things they need to do. And one of the things we need to start to think about is: how do we make this less stressful for our ops people, and how can we turn some of the ops work over to computers, so they're not having to do it all? I must confess I'm not a big security person. I'm not an auditing expert like these guys are. But I did become acquainted with the CADF standard about the middle of last year, maybe third quarter. And I immediately saw, hey, there are some really nice things about this that apply to more than just security intelligence; like was said earlier, for operational intelligence, for operations, this is a terrific standard.
And if you think across all the tiers that an ops person needs to play in, you can see, all the way down from the hardware up through the apps and the different layers: what's going on? How do I know what's going on with all these different events in all these heterogeneous formats? And what do I do about it? What's actionable in this? I think one of the key benefits I envision is, instead of having this cacophony of heterogeneous events on all these different tiers in all these different formats, what if we could start to standardize on something more common across all these tiers? I'm not saying we're going to get our hardware vendors anytime soon to change IPMI; that's not the point. And I don't think you need to actually do the normalization in the layer that's originating the event in all cases; you can normalize it elsewhere. But the point is, as you normalize all this, two things happen. One is, if you are a human being and you get used to a very logical taxonomy, which CADF has, it's something I picked up in a couple of weeks, I understood it, it was nice. But the second thing I came to understand is: wait a minute, this is written in such a way that I can turn some of this work over to something other than a human being. I can feed this back up into some analysis loop and autonomically take care of some of these things, which is much easier to do than if you have a whole bunch of events in different formats that are missing pieces of data. And here's my experience coming out of this. As we look at all the different things we need to do in our space, from the telescope, the high-speed camera setting, the wide-angle lens, the microscope, we have all these different views we need, and we need to build this up. As we start to impose CADF on people, they realize they're missing data. They realize they weren't disciplined in generating the event in the first place.
So point number one I want to make is: CADF as a standard is a great discipline, even if, let's say, you don't even follow the standard. If you just read it and think about whether you answered all these questions, and I don't care what format it's in, you've done a really good thing right there by using the standard as a discipline. And what we've found is, as we've gone through with various groups that have already generated events or who are thinking about generating events, we have consistently found problems where they haven't been covering the seven Ws, and they get it. They totally get it, and they say, you're right, we're missing data, and they fix things. So that's point number one. Point number two: once they do start going to CADF format, it's just like Gordon and Matt were saying. As soon as you go to CADF and you start doing this, you find it's much easier to incorporate other people who are also doing CADF, either from the top or the bottom. So I'm on a different product, not OpenStack, but related to OpenStack, and I wanted to integrate with QRadar, and the QRadar guy told me this week, he said, you're CADF? No problem, it's covered. And I said, well, what's the sizing? Do I need to go to your planner and integrate it? He said, no, it's a no-brainer, it's free. So that was great news for me from a product adoption standpoint, to be able to go to guys like QRadar and get in there for free. From the bottom side, where I have somebody underneath me, it's the same thing: as they look at adopting CADF underneath me, I find it, again, much easier to adopt. So on multiple axes, it just makes the job easier. And the last point I wanted to make is about versatility. What I've found with this standard is that I've been able to apply it across a broad range of disciplines.
I described this in terms of camera settings, but you know that when you're scaling up and out, you're looking at certain things in terms of what you're monitoring and what you're looking for. When you're down there looking at dynamics, it's a different set of metrics or things you're looking for. And then there's orchestration across multiple tiers, and then also drilling down, focusing on these issues. We're working on all these problems right now, and I'm able to apply this across all those tiers. The advantage of doing this is that as you start to apply CADF across all these tiers, you see interactions and integration points that were, if not literally impossible, practically impossible before, because of the barriers of not having a standard. So my point is that I see a bright future for CADF. I see a broad and deep future for CADF. And I think I see in it some of the answers to the problems we need to solve for our ops administrators, to make their job of understanding things better, and then to be able to autonomically offload work. Any questions? So what we did, if you look at the architecture chart, is we're basically creating a notification channel. It's a named channel for audit events. And what you do is, if you want to use Ceilometer, we can actually change Ceilometer to dispatch just those audit-tagged or labeled events in that channel to your product of choice or to your log of choice. Or you can just let Ceilometer do its job, and the CADF format will be added to the metadata in the Ceilometer database. So you can choose to do it through there. And we have plans to do more things also. I know we have another slide; I don't know if we're going to show it, are we? So we think that in terms of monitoring and future use cases, to do some of these autonomic things, we're going to use CADF as a normative format for StackTach, so we can actually go to a normative log.
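Routing on that named audit channel amounts to a predicate over the notification's event type. A toy stand-in for what the dispatcher decision looks like (the "audit." prefix match and the notification shape here are illustrative assumptions, not the actual Ceilometer code):

```python
def route_notification(notification, audit_sink, default_sink):
    """Send audit-channel events to a dedicated sink (e.g. a SIEM
    dispatcher); let everything else follow the normal metering path."""
    if notification.get("event_type", "").startswith("audit."):  # assumed prefix
        audit_sink(notification)
    else:
        default_sink(notification)

audited, metered = [], []
route_notification({"event_type": "audit.http.request", "payload": {}},
                   audited.append, metered.append)
route_notification({"event_type": "compute.instance.create.end", "payload": {}},
                   audited.append, metered.append)
print(len(audited), len(metered))  # 1 1
```

Separating the channels this way is what keeps high-volume metering traffic from drowning the audit stream, and vice versa.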
So CADF is actually indexed as part of a database, and you can do some really cool API things against it. And we'd like to have, you know, CADF is not going to be the only format for StackTach, but it'll be the normative format that you can get data out and in by. And we're going to look at the storage format and the indexing, and tell people how to construct queries based on CADF semantics. Yeah, so, and thanks for bringing this up; I got carried away. No, thank you. What I wanted to show here is, when we're talking about enterprise monitoring, there are some characteristics of enterprise monitoring here. And I'm not saying that an enterprise monitoring solution has to have all of these characteristics, but it certainly will have some of them. So as you go through here and you look at these enterprise characteristics, we feel like we need the capability of both of these together, in a complementary fashion, in order to take care of our use cases. And back to your point about LogStash, because I like LogStash too: what I'd like to see, and maybe we can talk about after the session, is releasing a CADF LogStash standard query set that anybody who uses LogStash could use. And yeah, he's nodding his head. I like that notion too. And, you know, once you get that, then you're not really worried about the path at all, and you can start right there in LogStash with all the autonomics and all the hooks you want for your admin too. Questions? Adam Young with Red Hat. I was wondering if you guys are working with the base platform folks on coming up with better emission of CADF from, I'm thinking specifically, AVCs from SELinux and from, you know, the base platform type of events, so that we can get unified audit from top to bottom.
So I have had some discussions with folks at both Intel and our Power platforms, but I think you're making a good point that we probably need to move that to a broader community than just that narrow set, to something more like, as you say, SELinux, to, again, cover all these tiers; there are a lot of different people we need to talk to. And I'm kind of spiraling out from OpenStack, which is where this started, and we're spiraling our way out and meeting people. But I will tell you the interesting thing is, as I talk to people in general, the acceptance of this as a solid event model is consistent. In other words, like me, who first saw this about six months ago: as I talk to people about this and I say, look this over and come back to me, they say, yeah, Rob, this is a good solid standard. It's something we can build on. So your point is taken, and we will be doing that. We're not there yet, but it's in the plans. Okay, yes. Did you guys consider including the policy that was enforced when that operation, when this log event, was generated? It's in the slides. So, you know, people think of things in terms of activity events. We also have things called control events. And if you have a control event, you actually have a place to normatively put your policies, or even reference a policy. If we go to some policy standard in Keystone, and we want to manage against those policies, where we have rules that are actually programmatically executed in those policies, we have places to put the policy data as well. So think about this: you have an autonomic engine. It's got your policies and controls going into it, and then the results coming out the back. Think about it.
Yeah, I was asking about the policies being embedded in the results, so that when I look at a log message or when I do analytics, I know what the enforcement was, as opposed to having to go look at another tool to say, you know, if something was denied, for example, or an operation failed, did it fail because of policy, or did it fail because I have some other issue? It's a very good question. I mean, the problem is, we want CADF to be lean and mean, to provide just the essential data. We didn't want event bloat, okay? So our placeholders for now for policy: there's a place to put policy IDs and rule IDs, things like that. So you would have to go to another tool, but we expect that within OpenStack, those IDs will be easily accessible, and you'll be able to look up those policies within Keystone or something like that in the future. But if you want to add policies or other things directly, we have a way to extend the data. You can add extra attributes; you can add those things in too. If you want to create an aggregation tool, you create something on that dispatcher on the back end. It gets the CADF events with the ID. You can write your own little tool that goes off to Keystone or wherever, looks up the policy, and embeds it, if you want, before you send it off someplace else. Yeah, so you should be able to go into LogStash, right? And you should be able to, if you're smart about it, identify all the events associated with a policy. So in other words, if you have all kinds of different multi-tenants and different policies for different tenants, you can have your policy there and pull it together with your LogStash without having to append it to every event; it should be doable. So if you have pointers, or let's just say it's referenceable elsewhere, do you also include any level of integrity protection for any dereferences that take place?
Well, that's beyond the scope of the CADF specification right now. And we talked about integrity levels. That's an ambiguous thing unless there is something to match it against, because different people try to define integrity levels, and those are all relative to whatever body is defining them. So that's something we didn't want to go into, because it's something we couldn't be prescriptive about. But there's a place: if you have some measurement of those things, you can add them to the CADF events that come out of OpenStack as custom data. You can tell your customers, we have this integrity scale or risk-based scale or whatever; you can add those tags, you can add those things, you can create those views, using CADF as a way to create them. One question for you guys: you're collecting lots of audit data. So how can audit data lead to some sort of compliance? For example, say against HIPAA or ISO, how do I use this data to achieve compliance? Well, basically you feed it into different compliance frameworks. CADF was designed to support any compliance framework. So it goes back to the tags. If you deem something to be critical, like an authentication event from Keystone or an event from Nova, and you deem that as part of a compliance regimen, you know, something that has some ID that maps to some compliance framework of some kind, you can add that tag in. That's the goal. So you can flag it, so that when it goes to LogStash or whatever else, that tag is there. This'll be a yes, but if I Google for CADF, I'll find these types of slides and stuff, right? Cool. And you probably said this in the beginning, but I wasn't paying enough attention then: what does CADF stand for? Well, it's a long story. We just wanted to be the cloud auditing working group, the cloud auditing standard, but somebody else had dibs on that name.
So by group decision, they decided to make us Cloud Audit Data Federation, because what we really were working on was the use case Brad described at the onset today, which is: this is federated data. We want people to be mindful that this data is being merged and aggregated from hybrid sources. So we added the DF, so CADF, which is not as nice a name. So can this work with OpenStack Havana, or is it something we need to upgrade to Icehouse for? I'll let Gordon answer. How much is in Havana? How much support in Havana? So I think in Havana right now, there's support for Nova. And then in Icehouse, we expanded that to Keystone a little bit, but we're still looking to expand the support beyond the core projects. What, I guess, needs to happen in each project to support it? Is it just kind of generically interpreting API requests and sending them to a topic somewhere? Yeah, so you just have to pull in the pycadf library, and there's, I think, an event factory; you can build whatever event you need, and then it'll send it to the message bus or whatever. Thanks. I wanted to build on that. So I mentioned early on, in case you missed it, that we are publishing a CADF OpenStack profile at DMTF. The majority of that profile is an appendix that has a mapping for every API in Cinder, Glance, Keystone, whatever, in the back. So we have actually anticipated doing this; we have the CADF library and things set up to do all these things, and how we would do them. I think it's more a matter of testing and adoption on a product-by-product basis. You can turn us on. We would rather be sanctioned and built in and part of every project. So. What I'm hearing is that we're getting more data from these events, right? So is the overhead in any way significant, in terms of what needs to be stored, or network performance, or something like that? Well, you know, Gordon's done some great work in terms of configuration files.
You can actually control which events you want to turn on or off. You can turn them on or off per component. You can turn them on or off on a per-API-type basis. But, you know, you have a separate channel. That's why we use the dispatcher. So we actually don't have to flood the Ceilometer database, which is doing metering and billing nicely, with tens of thousands of events when we want to do monitoring at that level. We actually dispatch just those channel events to a separate product if we want to. So there are different knobs and levers you can turn to control which events get generated and where they get sent. Okay, thank you. Just wondering if that draft profile is available yet, or is that coming soon? I can make it available to anybody. It'll be posted, probably on the DMTF website, this month. I think that with OpenStack Summit coming up, we kind of delayed it. So I'm hoping this month we'll have it posted on the DMTF website. All right, thank you for all the questions and interest, and if any of you run into us, we'd be happy to talk to you more personally about this, as long as you wanna talk. It's a subject I like talking about. Thank you.