Okay, welcome everyone, good morning. We're going to start the presentation now. We're here to talk about OpenStack fleet management, with a very new project called Craton. We're hoping to introduce it to the community and also get some feedback.

I'm Sulochan, a software developer at Rackspace, interested in OpenStack, automation, serverless technologies, and so on. If you're interested in any of those, feel free to reach out to me on IRC, Twitter, all the usual places.

Here's what we're going to cover today. We'll introduce the Craton project to the community. We'll give a little background on our experience with our public cloud, how we use something similar to Craton there, and how we drew on that experience for this project. We'll discuss the project architecture in a bit more technical detail, do a short deep dive into it, and then take Q&A at the end.

So why this project? What are we trying to solve here? There was a user survey published by the OpenStack UX (user experience) team, and one of the points highlighted in that survey was that maintaining consistency and minimizing complexity are the primary challenges confronting the OpenStack community right now. What does that really mean for us, and for operators in particular? It's the problem of how to minimize complexity for operators so that they find it easy to run their OpenStack fleet, and that's what we're trying to solve: how can we make it easy to run OpenStack, maintain it, and manage all the different moving parts within an OpenStack fleet?

Within that there are several pieces. Maintaining inventory, for example, is a very pertinent point for this project. How do we do that today? We can keep inventory in flat files; people have different ways of doing this; you can put files in GitHub. But there is no consistent way of doing it within the community right now.

How do we handle metadata? Not metadata within a single service, but fleet metadata: how do we handle fleet metadata so that it's easier for operators to call on it downstream to automate or manage the fleet? That's another point.

Moving away from metadata and inventory a little: how can we make the environment self-healing? I think that's a very powerful idea within the operator circle. How can we minimize repetitive tasks? How can we automate some of them, so that the next time something happens, we know there is a standard way of handling it, and we can log it, or have other processes act on it to give you more information about what was done? By doing that, we also make sure we handle it in a secure and consistent way, repeatably, and that's very important as well. We might have processes that each of us does differently: I solve a particular problem one way, someone else does it a different way, and it's all in our heads.
It's not captured in a process or workflow, and that's what we're trying to solve here as well. And last but not least: everyone has monitoring set up, everyone does their monitoring and feeds it back, saying, okay, something alerted, something is broken, how do we fix it? But how do we step beyond that and say: before it breaks, or before I get an alert, can I audit it? Is there a way I can know what's happening within my environment and take action based on that, maybe feed it into some analytics tools and have some decision-making power over what's happening in the environment? Those are the problems we're trying to solve.

Before we come to the Craton project itself: we in Rackspace Public Cloud have been doing this for a while now, and we're going to draw on that experience and talk about it a little, to give you a picture of what happened in Rackspace Public Cloud when we took these steps, built tooling to automate things, ran audits, and gathered all the monitoring data into one place.

I think everyone will agree that when you have a very small cloud, public or private, it's easy to get by without automation. It's easy to say, I'm going to put my stuff in files, because I only have one cell or one region and 20 servers. That's easy to maintain, and it's easy for your ops or DevOps people to work with, because it's only one small region or one small cell. But what happens when that grows into 50 cells or five regions, when you have 200 boxes, or 50,000 boxes? It's not feasible to just work from flat files, or to just keep committing stuff into Git, because, as you can see, servers are not always green. You have failures. How do we make sure we're tracking this all the time? I know we have alerts, I know we have monitoring, but how do we make sure those alarms and alerts are presented in a way that's consistent whether it's five servers or 500, so that you don't need to change your processes to handle 500 instead of five?

This is what we found at Rackspace as well. When I started, a long time back when we had just launched our OpenStack cloud, the first couple of months were fine, because we had maybe one region with a few hundred boxes. As that grew into multiple regions and many, many servers, it became hard to operate, no matter how big the team. So we had to have a process where we pulled all those alerts into one place. We were using Nagios, and I'm sure everyone here knows Nagios is not very good with clustering. When we were trying to remediate alerts, we were going into each Nagios page to see what was happening. You got emails, but emails were per person: there was no central place to say, this is what happened, where everyone had the same view. If I got an alert email and you got an alert email, there was no coordination between us to say who was working on it. So the only way forward was to aggregate those alerts into one place. And what that really looked like was something like this.
What it basically did was aggregate all the alerts into one place. It told us: we have this many servers, this many regions or cells, and this many alerts, whether warning or critical. As you can see, some are red, some are just zeros, and some are warnings. And this was good for the first couple of months, when the infrastructure was, say, 1,000 boxes. But when it was 50,000 boxes, it became almost impossible to manage even that, because there were too many alerts coming in and too many failures to manage dynamically.

Take the example of a code deploy. When you do a code deploy, you want to make sure the broken boxes don't break your deploy. You want a dynamic way of filtering them out during the deploy, so a new code deploy doesn't get stuck on a broken box. How do you achieve that? We had to figure out a way to automate the process: first, before we even get to the alert, let's automate fixing a broken service or a broken box. So the UI that was aggregating the alerts got a secondary service that all the alerts were forwarded to, and that service automatically tried to remediate those alarms and fix the box. It only came to an operator's view when the automated process failed. That was a huge win; it eliminated a lot of these repetitive alerts.

The process was pretty straightforward. Say a compute service was down. The usual way of handling this, if I'm an operator, is: log into the box, see what's wrong with the service, look at the logs, see if there were any exceptions or errors, make a note of that, and then try to start the service again; if it starts, all good. I'm sure this is how we normally do it within the operations circle. But it's the same for you, the same for me, the same for someone else: we all follow maybe slightly different versions of the same pattern. So if we can automate this, we have a steady way of saying: every time this happens, follow these steps, record it in a central place, and make sure we're not in a loop where we keep restarting a service, so we bail out if the alert fires more than, say, five times a day or more than a few times an hour. The service had all this logic built in: if something happened, try to do this; if it happened more than this many times, stop and escalate the alert to a person; and if we can fix it automatically, great.

That got us quite far, until we hit the problem I mentioned before, where we were doing a deploy and a broken box was causing us problems. How do we fix that? The only way is to have some audit mechanism in place that goes around and checks, say, whether a host is up or not, and then notifies some central place: this host is broken, if we do a deploy, skip it. To do that, we needed a couple of things. One, we needed an audit service in place.
And two, we needed a central place we could go back to, to say this box is broken, make sure it gets skipped during a deploy. That's what we're showing here. We have an auditor that goes in and audits various things, whether it's services, a broken box, some security rule, whatever you have. It goes through a workflow: it audits, then comes back and either says this is good, record it in the central place, happy face, or no, this needs fixing, so raise a workflow, essentially an alarm, for the resolving service, which goes in and tries to resolve it, and escalates if it fails.

Here's a quick example of what I'm talking about. There's a task here called deploy catch-up. If you pushed a code version out to your fleet and a couple of boxes missed it, for whatever reason (the boxes were down at the time, so we skipped them), then the next time the audit process runs, it catches the fact that those code versions differ from what they should be, and creates a task that says: go back and make sure the code version is right again. As you can see, it was created by the audit process here.

The same logic used for fixing or remediating alerts can be applied to workflows. What does that mean? For example, you have a scenario where you need to migrate a box, or migrate customers, migrate virtual machines from one server to another. You can do that by hand, following certain steps, or you can completely automate it. In this one, you can see there's a workflow process in place that says: when I need to vacate a node, disable the compute service, run some pre-migration checks, migrate, disable monitoring, put that data back into the central repository where we track all this, and so on. As you can see, it does a whole bunch of different things. It might go through; it might fail. In this case, I'm showing something that failed, and if it fails, that's fine: just let operations know that something failed. If it went through, fine. In this case, we're migrating everything out of a box, killing that box, and re-bootstrapping it. Maybe you want to put fresh code on it, maybe do a fresh install of the operating system. You do all of that, make sure everything looks good and passes QA, all automated, and then you put it back into production.

So this is the type of thing we did within Rackspace Public Cloud and are doing with project Craton. And it's worth pointing out that this workflow you just saw, we've been able to scale it out in Rackspace Public Cloud to run across tens of thousands of physical nodes, doing things like that exact workflow. Can you bring back the previous slide? Previous, previous. There we go. This exact workflow is what we would use to say: we need to upgrade our hypervisors because we have an embargoed patch, and this weekend we need to get that patch onto every single machine, so that by the time the embargo is over, the mitigation is in place and we're all set.
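To make the shape of that remediation loop concrete, here is a minimal Python sketch of the escalate-after-too-many-alerts logic described above. It is purely illustrative, not the actual Rackspace tooling: the `restart` and `escalate` callables and the one-hour threshold are hypothetical stand-ins.

```python
import time
from collections import defaultdict

MAX_ALERTS_PER_HOUR = 3            # illustrative escalation threshold
alert_history = defaultdict(list)  # (host, service) -> timestamps of recent alerts


def remediate(host, service, restart, escalate):
    """Try the standard automated fix; hand off to a human if it keeps recurring."""
    key = (host, service)
    now = time.time()

    # keep only the alerts seen in the last hour
    alert_history[key] = [t for t in alert_history[key] if now - t < 3600]
    alert_history[key].append(now)

    if len(alert_history[key]) > MAX_ALERTS_PER_HOUR:
        # we are looping on the same failure -- stop restarting and escalate
        escalate(host, service, reason="alerted too often in the last hour")
        return False

    # the automated version of "log in, check the logs, start the service"
    return restart(host, service)
```

In the setup described above, an audit task (like the deploy catch-up shown on the slide) or an incoming alert would be what feeds hosts into a function like this, rather than a person watching dashboards.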
And this used to be an expensive, time-consuming process in terms of manual labor. Now it's basically completely automated, unless there are failures on a given box. There are always failures at thousand-node scale, but most of them go through. Absolutely. Think of it like this: if you had to do this on a thousand boxes and you were one person, it would be pretty much impossible to track a thousand boxes at the same time. Having a process that tracks it for you and only notifies you when something is wrong is a much nicer way to handle it than tracking a thousand boxes on the command line or through a UI. So this is the experience we're drawing on and bringing into the project. Jim is going to talk a little more about the technical side and give us a deeper dive into Craton. Thank you.

Sure. I'm Jim Baker and I work on Rackspace Private Cloud; I'm also part of the OpenStack Innovation Center, with a variety of interests, and here are some ways to contact me if you're interested.

Looking back, post-mortem, you might ask: what did we do right? We did a lot of things right. We have an internal asset management system at Rackspace, Rackspace Core, that we were able to augment with inventory the existing fleet management could use. We were able to systematically make changes to that environment and also determine what was actually going on, so if there was some conflict with what asset management knew, or it didn't have enough information, we could audit that. We were using Ansible playbooks to do the actual configuration, which meant we were able to build on something that worked well. And we had an integration API that allowed us to build up the dashboards you were seeing.

So what was missing? Why couldn't we just take what we use in Rackspace Public Cloud today and use it in Rackspace Private Cloud? Well, in part, the inventory is tightly coupled to our asset management; there isn't a freestanding inventory system that can work with other sources of truth. The playbooks were very much tied to how we manage Rackspace Public Cloud. And then there's multi-tenancy: maybe not everyone here wants to manage multiple clouds, but it is a use case for some.

So let me introduce Craton. The first thing is that we're really big fans of finding the best current practices for developing OpenStack projects. That means using the great infrastructure out there. I shouldn't point a laser at Josh right now, he's sitting in the front row for those watching on video, but we're able to take advantage of tooling like Taskflow, for example. We're using SQLAlchemy, and we're using new functionality that's available in MySQL and Postgres, namely support for JSON columns. Another goal of ours was to simplify the underlying model: we've come up with, I think, a fairly simple Python object model for working with all of our internal representations, backed by SQLAlchemy.
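As a rough illustration of what a JSON column buys you in a setup like this, here is a minimal SQLAlchemy sketch. This is not Craton's actual schema; the table, column, and class names are made up for the example.

```python
from sqlalchemy import JSON, Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Device(Base):
    """A managed device: a host, a top-of-rack switch, a container, and so on."""
    __tablename__ = "devices"

    id = Column(Integer, primary_key=True)
    name = Column(String(255), nullable=False)
    device_type = Column(String(64))                       # e.g. "host", "switch"
    parent_id = Column(Integer, ForeignKey("devices.id"))  # containment (cabinet, etc.)
    parent = relationship("Device", remote_side=[id])

    # free-form fleet metadata; recent MySQL and PostgreSQL support JSON columns
    # natively, so there is no need for a separate key/value table
    variables = Column(JSON, default=dict)


engine = create_engine("sqlite:///:memory:")  # stand-in for MySQL/Postgres in this sketch
Base.metadata.create_all(engine)
```

The point is simply that arbitrary per-device metadata can live next to the structured inventory columns, which keeps the object model small.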
One thing of note for those on the development side: we use synchronous code where possible, because if we hand off asynchronous work to, say, Taskflow to run in worker pools, then when we're otherwise modifying representations of objects through our REST API, doing that synchronously, compared to using, say, eventlet and all the issues that come with it, gives far simpler code. Just saying.

Another thing I think we've done here that's pretty nice is that we're using a map-reduce paradigm, based again on Taskflow and how we can construct a task graph with things running independently.

And most importantly, this is a second system. We've already implemented something that works. The usual temptation with a second system is to say, wouldn't it be nice to add this and add that, and so on. What we've really tried to do is keep it as simple as possible and come up with a set of core concepts that we can build an ecosystem around; more importantly, that all of us here, if you're interested in working with us as a community, could build an ecosystem around.

I'll highlight one user story just to give you an idea of what we're looking at. We were given this by some people who were looking at what is really necessary to make operational work with OpenStack much better: as a cloud operator, I need to check my deployed physical resources against a set of policies and rules so they can meet security, availability, and other requirements. Obviously this is something we need to be able to do, and yet it can be really challenging to keep a user story like this satisfied, up to date, and consistent as a cloud operator.

So let me highlight "policies and rules". What types of policies and rules, and where do they come from? One selection might be: I'm using, say, OpenStack-Ansible and the security role that ships with OSA. It might be something else: some playbook, some Chef recipe, some Puppet manifest of some kind, something along those lines. Continuing the OSA theme: I need to be able to configure it with variables according to the policy I need in place. The OSA security role, for example, has a lot of configuration options, for good reason. And I want to be able to apply that playbook against my inventory.

So what does that mean? I need to maintain my inventory of hosts, and I don't want to do that in some flat file, even if it's version controlled. I want to maintain that, ideally, in a database. I also want to capture the administrative aspects: how are these devices actually being managed in terms of layout, regions, cells, and so on. Lastly, we have to consider the operational aspects: this thing needs to scale out, and we need to consider multi-tenancy again. So unpacking this one user story, which seems pretty simple, gives us a lot of input into what the Craton project should be about. It's very much aligned with what we had previously done in public cloud, but it also gave us insight into what we should be doing next.

So, as a quick overview of what we're building here with Craton, this is our logical architecture.
We have, I think, a fairly straightforward core set of services that we're building out. Not all of it is there yet; this is very much a work in progress. But we have an overlying Python object model connecting together inventory through SQLAlchemy, which does the actual modeling. You could plug in your own modeling; we're not trying to build every possible plug-in here, just to show that at least one back end works. We're also using Taskflow and its jobboard to actually run our audits and remediations. And, as much as possible, we support a variety of ways of accessing the same underlying REST API, including a command-line client and a Python client. I should also mention that the core of this code base is all Python 3; the Python client is 2.7 or higher. So for those who are interested in working on Python 3 without having to deal with backports and six and things like that (not that that's difficult, it's just extra engineering work), we think Python 3 is the way to go.

Next idea. The fundamental thing you work with, in terms of the inventory of what you need to manage in your cloud, is a device. It's a host, a top-of-rack switch, a container for your management plane. Devices are key to what we're building out; we take devices and go from there to building out your specific cloud. Devices also have relationships. There's containment: a host is part of a cabinet, and a host can contain the containers for the management plane, which is what we use, for example, with OpenStack-Ansible. There's also the network topology: how do we link up, at the cabinet level, through interfaces, to our network? There's OpenStack-specific administrative data, as well as the ability to do arbitrary tagging, so I can say this node's purpose is to be a compute node, that node's purpose is to be a storage node with Swift, and so on.

Now, as Sulochan was saying earlier, managing configuration is a really hard problem, as we all know. Who here loves managing nova.conf? Oh, yeah, we've got one person here. How many config settings do you have to manage in your cloud? As you multiply that out, as you build your cloud out, it just becomes more and more. At some point you're putting this in a database. In Rackspace Public Cloud we're using MongoDB in conjunction with the Rackspace Core asset management system. You might be using a database, or, because you're running a smaller cloud, a lot of text files; regardless, you've got a whole management headache in front of you. Even if you're using scripts to try to template this, and again I look at something like OpenStack-Ansible in terms of how it manages things, it becomes very difficult to manage over time. In part, where is the traceability? How do you know where a given configuration setting came from? It's really hard to know what's going on, especially over time.

So what we've built is a fairly flexible variable system. We knew we couldn't figure out every inventory configuration setting in advance in our schema; that wouldn't have been feasible, there's such a variety of things.
And of course, we also understand that existing configuration management tools like Ansible have a fairly nice variables model that makes this a little simpler. What we've done is take that and build it out, put it a little bit on steroids, you might say, so that it becomes almost like what you see in a directory management system: the ability to specify things at a top level and have them percolate all the way down, with the opportunity to override. We also allow variables on most entities.

I'll show one example of this, because I think it's interesting to look at the underlying code just to see how simple it can be from a user perspective. For our variable support we have a simple-to-use mixin class. It should work well with caching; we haven't really tried it that way, because you don't optimize until you actually see the need, but let's just say it should. And it allows us to readily identify what happened: where did this setting come from, who made it, and, as we build out the backend, what was the whole workflow associated with making that change? That's very important in a large cloud, to understand changes over time.

I'll just highlight this one piece of code in case you're interested. We take advantage of ChainMap, a really nice thing in Python 3 that lets you implement scoping logic for variables, similar to Python's local-versus-global scope resolution. Here we're using it to say (ignoring whether or not you have cells): in my device tree, resolve all my ancestors and make them part of a chain; also look up my cell, and look up my region. So if I want to configure something at the highest level, the region, I can just do it there, and you can see it's just one more line in the underlying Python object model. Very simple code. I bring this up as an example of the kind of project this is, if you're interested in working on it as a potential Craton developer. Does that look that bad? No? Okay. So we're really trying to make it as simple as possible. I have to say there's a little bit of logic necessary to make it all work, so that it's all beautiful and looks like a standard dictionary. But when we do that, then from a usability perspective, for someone consuming it as a developer, it becomes much easier to work with.

Okay, and now we have a screencast. Yeah, next slide. I'll let you take it. The screencast basically shows what Jim just talked about: what these variables look like. If you're a user or a consumer of the Craton service and you want to see how these variables resolve, say you're managing nova.conf and you want to see what it actually means to manage that through Craton, this is it. Let me see if I can play it first. Every time, it doesn't work, and we actually prepared the screencast so that the demo would just work. Are you not able to move the mouse? Maybe it's just not on the screen for some reason. Oh, there you go. Here we go. Great.

Okay, so what we see here is basically a query. Let me pause that for a second. Before we start, I want to say we have a Craton client in progress.
The reason we're not using it here is that it's not yet usable for some of the things we're trying to show, which is why it's not in the demo. But we do have a client. So what you see is the REST API, because that's the one standard we're all working against.

What we see here is a REST API call to the service saying: give me all the regions I have configured in my Craton service. When we make that call, we get back a list of regions, without too much metadata at this point. One thing to notice is that when you call on a specific region, it gives you a bit more data about that region. I'm going to pause it here to highlight a few things. You can see that when you call on the region, it gives you not only some information about the region, but also the fact that we've pre-populated it with some metadata, some variable information. For example, you can see that a RAM quota has been set, and similarly there are variables called nova_conf_1 and nova_conf_2 with some placeholder values. Now, if I go ahead and set a different value for one of those, you can see that the value changes to whatever we set it to. But this has been done at the region level.

So when you're setting up your config variables, the way you want to do it is to set all the common configuration that applies to the whole region at the region level, and when you pull it down, your hosts need to be able to get that information as well. As you can see here, I'm about to query a host, saying: give me information about this specific host. The variables I set at the region level should resolve down to that host level. What that really means is that if you were managing configs per host, say you had many different configs you were maintaining on a particular host, you could do this and say: every time I query this host, give me the set of configs that's true for this host only. That's a big gain in flexibility over managing it by hand. And obviously you can override, as we're showing here: nova_conf_2, if you override it at the host level, stays overridden. So when you manage configs like this, it becomes very easy to keep everything common set at the region level, do the same at the cell level (this demo is purely on the compute side of things), and resolve it all down to the host level. And it's not just one configuration value: you can do the same for many different types of variables that you would usually track per host.

Eventually, what we plan to do with this is extend it into an inventory fabric. Being able to chain information like we've done here not only covers administrative things, configuring at the top level, or for a given region, or possibly a project; it also allows us to bring in other information, say from Nova or Cinder, about that same device, so we can link this information together. Pretty straightforward: ChainMap enables chaining, which is exactly what you would expect.
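To make that resolution order concrete, here is a minimal sketch, in the spirit of what Jim described, of region-to-cell-to-host variable scoping built on Python 3's collections.ChainMap. The class and variable names are hypothetical; this is not Craton's actual object model.

```python
from collections import ChainMap


class Region:
    def __init__(self, variables=None):
        self.variables = variables or {}


class Cell:
    def __init__(self, region, variables=None):
        self.region = region
        self.variables = variables or {}


class Host:
    def __init__(self, cell, variables=None):
        self.cell = cell
        self.variables = variables or {}

    def resolved(self):
        # host-level values win, then cell, then region -- much like Python's
        # local-versus-global scope resolution
        return ChainMap(self.variables, self.cell.variables,
                        self.cell.region.variables)


region = Region({"nova_conf_1": "region-default", "quota_ram": "64G"})
cell = Cell(region, {"nova_conf_2": "cell-default"})
host = Host(cell, {"nova_conf_2": "host-override"})

print(host.resolved()["nova_conf_1"])   # inherited from the region
print(host.resolved()["nova_conf_2"])   # the host-level override wins
```

Querying a host for its variables then just means reading from that chain, which is the behavior the screencast demonstrates through the REST API.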
We're running a little short on time, so I just wanted to quickly get to the other really important piece here, which is our support for workflows.

As we saw with the workflows we use in Rackspace Public Cloud today, the ability to run a sequence of tasks under a few assumptions gives you a lot of power. If your individual tasks are convergent on some desired state, thereby allowing retry; if the workflows themselves are also convergent, so you can keep running them again and again and, assuming your configuration hasn't changed, they converge on that state; and if those tasks are independent of each other on different devices; then you can really map this out in a true map-reduce architecture. One nice thing about a map-reduce architecture is that it lets you separate concerns. Applying these tasks to make some change in desired state on a given host, such as migrating all of its VMs to some other host, can run independently of considerations like the data center layout. Which is great, because now I can think separately about the scheduling that should be responsive to that layout, that should be data-center aware. And if you look at what we do in Rackspace Public Cloud today, that's exactly what we do: we're not trying to migrate all hosts off all cabinets at the same time, or some random subset; we're doing that work systematically. So we can use this map-reduce architecture, map a workflow onto a device, and take advantage of Taskflow's job support to actually run the multi-level task graph that corresponds to it.

We can probably skip this; we're running out of time. So, in general, if you're interested in what you saw here about Craton, here is some contact information, and if we have any time for questions, please go ahead. Thank you. I don't know if we have time for questions. Is this on? Yeah.

If you do, I have a question. No one's kicking us off yet, so. Okay, good. It seems like the most interesting part of this is the workflows. Can you talk a little bit about how those are actually executed and how you scale them? Do you have to run an agent on each of the boxes that you're managing? How does that work? That was the big question that jumped out at me from this.

Sure, sure. We can talk about what we do in public cloud today. The process right now is that we were using Celery as the backend to execute all the Python processes. Everything is encapsulated as a plugin, so you execute a plugin. Currently we're looking at Taskflow, and we can probably expand on that. We're also looking at sandboxing every plugin within a container, so everything runs inside a container to make sure it's isolated. But it's within the Taskflow environment. Right, and this is different from running containers on those hosts; it's actually running containers on the workers, so we can ensure the workers have all the necessary environment, including any associated software installed, to make changes on the target devices.
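As a rough sketch of how a per-host workflow like the vacate-node example might be composed with Taskflow (which, as mentioned above, is being evaluated), the snippet below builds independent per-host sub-flows and groups them. The task names and steps are hypothetical, not Craton's actual plugins, and the prints stand in for real work.

```python
from taskflow import engines, task
from taskflow.patterns import linear_flow, unordered_flow


class DisableCompute(task.Task):
    def __init__(self, host):
        super().__init__(name="disable-compute-%s" % host)
        self.host = host

    def execute(self):
        print("disabling nova-compute on %s" % self.host)


class MigrateVMs(task.Task):
    def __init__(self, host):
        super().__init__(name="migrate-vms-%s" % host)
        self.host = host

    def execute(self):
        print("live-migrating instances off %s" % self.host)


def vacate(host):
    # the "map" step: an ordered, per-host workflow that converges on the
    # desired state for that one host and can be retried safely
    flow = linear_flow.Flow("vacate-%s" % host)
    flow.add(DisableCompute(host), MigrateVMs(host))
    return flow


# per-host flows are independent of each other, so they can be grouped and
# handed to workers; data-center-aware scheduling decides which hosts to pick
cabinet = unordered_flow.Flow("vacate-cabinet-c01")
for host in ("compute-01", "compute-02", "compute-03"):
    cabinet.add(vacate(host))

engines.run(cabinet)
```

The grouping is where the map-reduce framing comes in: the per-host flows are the map step, and deciding which cabinets or hosts to schedule, and when, stays a separate concern.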
Hello, I work on Chameleon Cloud, which is a testbed for computer science built using OpenStack. For us, what was really important was to be able to do an inventory not just of how many compute nodes we had, but of all the devices inside those compute nodes: the amount of RAM, the number of disks, the revision numbers, BIOS and firmware versions, and so on. Is that something that Craton supports?

In a way, yes. What we're looking to do is have audits that go in, read all that data out, and put it in the inventory for you. If you run these audits in a cycle, there still needs to be some synchronization between what's happening in your environment, say in your asset management system, and Craton. Craton being the front-end service for your whole fleet, you want to make sure you're syncing that all the time. So we have an audit workflow in place where we read these values and populate the Craton inventory service, so that when the downstream services that consume Craton want to know about them, they can always ask it. You have one central place to go to ask questions about what you have in your environment. And having this extensible inventory, where you can, one, attach arbitrary variables to arbitrary devices, gives you a lot of flexibility along the lines you're describing; and two, having the opportunity to do that not only in a script that updates things from other sources of truth, but by actually using the workflow system itself to collect that information, gives you a lot of power to augment what you know about your environment. Okay, thank you.

And just a quick follow-up question: are you planning to interface with Ironic Inspector to gather the data it produces? Yes, we are. Even within the OpenStack ecosystem there are so many different projects, Watcher, for example; there's a whole bunch of projects in place right now that do a whole bunch of different things that we would probably consume. So we will work together with, for example, Watcher to get data back from that, or maybe some other service that's running, and you'd want to feed some downstream analytics service as well. So eventually I think we'll do all of that, yes. Thanks a lot.

All right, I think we're good. Thanks very much. Yep. Sorry, guys.