Hi everybody, my name is Jacob Walsik and this is Sandy Walsh. We're going to be talking about an introduction to Nova, so hopefully you're all in the correct room. Sandy and I both work for Rackspace, but we have very different roles. I'm a cloud architect and I work with our customers that are building private clouds, both pre-sales and also post-sales, to help them with things like application architecture and planning for growth. And I'm just a geek. I write code a whole lot and generally don't talk to the customers a whole lot. So I guess we're sort of flexible on how this talk goes. We've got more than enough material to talk about, but just as a quick show of hands, how many people are at the code level or want to get into it from a... Okay, good. So how many people have a pretty good understanding of the architecture of OpenStack, would you say? Okay, we'll dive around a little bit. We can play it by ear, so throw questions out and stuff. We've got time; we don't need to go through the slides. But if there's something that just doesn't make sense, just scream out. Here's a little bit about what we have planned to talk about. Like Sandy said, we can jump around here. We have more than enough material to fill the 40 minutes, but if you have questions, just let us know.

So to start out, Sandy's going to talk a little bit about the tenets of open design. Yeah, so OpenStack always gets lumped in as just open source, but really there are four aspects to it. It's open source, obviously, an Apache license, but it's also open design. If you want to participate in OpenStack and you're doing something that's relatively big, you can go through the blueprint process and put a proposal out, and that'll give people, the key players, a chance to look at it and see what you're trying to do and give you some guidance on how to architect it. So there is no ivory tower. There's no grand poobah sitting somewhere who says, verily, we shall build it this way. Everyone gets to give their input, and we listen to all the different suggestions. Open development: if you're doing something small and easy, you don't need to go through the blueprint process. You can just branch the code, create a git branch, make your revisions, make sure you do your tests and submit it, and then you can become a contributor to OpenStack, as simple as that. And then it's an open community, of course. Anyone can be a leader; you don't have to work for one of the big companies. If you're a domain expert in some aspect of OpenStack, you can participate. And because it is a big project, we've got a lot of different people who head up different portions of it as tech leads. They don't get to decree how things are done, but they're sort of the sounding board for all the different pieces that are going on. So if I'm working on a piece of the scheduler, I know I can talk to the tech lead for it, and they'll say, oh, you should talk to this other person. So if there's a part you're looking at getting involved with, find out who the tech lead is in that area and work with them on it.

So the question of what is Nova? Nova is the compute project within OpenStack. OpenStack consists of many different projects. I talk to a lot of customers for whom Nova is synonymous with the name OpenStack, but there are also a lot of other pieces. These are bits that Nova relies on.
Nova is designed to be highly scalable, up to, you know, thousands and thousands of hypervisors. And it's also designed to be hypervisor agnostic. You know, the private clouds that we build for customers are based on one hypervisor while our public cloud is based on another, and there are other platforms out there that use para-virtualization instead of full-blown virtualization. The other projects that Nova works with are things that we're going to touch on during the presentation. The big names are Keystone, Glance, Quantum, and Cinder. We'll talk a little bit about what roles they play, but primarily we're here to talk about Nova. Nova was one of the projects that was part of the initial release of OpenStack. OpenStack was initially put out there in July of 2010, so we're a little over two years old at this point in time. Nova's been there all along; some of those other projects, however, weren't released until more recent releases. In the Diablo release, Keystone came about, and now in the Folsom release, Quantum and Cinder have kind of joined the party. As for where Nova is going, that's part of what we're here to talk about this week. There's a whole bunch of folks upstairs, sequestered in rooms, shouting at each other, praising each other, probably some crying, probably some screaming, trying to decide where Nova is going to go in the future. Some of the work going on, toward things like provisioning of bare metal servers and starting to build an idea of really federating cloud platforms, are themes that have carried over through multiple Nova releases. Some of the shorter-term work, like getting Nova moved over to use Keystone for authentication, for instance, are things that were completed in a single development cycle.

So here is what the Nova architecture has largely looked like through Diablo and Essex. We have these different components that all work together, and in a few minutes Sandy is going to take us through a detailed journey of what the communication looks like as it moves through these various components. But some of these components got swapped out in Folsom. As I mentioned, the Cinder and Quantum projects have come about. Cinder is replacing the internal block storage nova-volume service, and Quantum has replaced nova-network to provide a pluggable architecture for doing software-defined networking and a number of other interesting pieces. When we start looking at the full stack of OpenStack projects that combine to make up a Nova cloud, we also bring in Keystone, which is our unified authentication and authorization, as well as Glance, which is the image repository that the base images for all of your compute instances will come from.

Okay. These are some URLs. We'll make these slides available to people, but if you're getting started, these are some good jumping-off points to get you into all the different pieces. Launchpad is where the project sort of centers around; GitHub, of course, is where the code lives. The docs stuff is kind of interesting, there's lots of good information in there. The nova client is what you're going to be issuing your commands with, so that's one you want to get familiar with, and because it's Python, you can include that client as a library in your own Python programs if you want to control OpenStack that way. Then there's DevStack, the Nova portion of it, and the last one is the Rackspace private cloud software.
That is Rackspace's opinionated installer. You can go and download it for free, or you can go to GitHub and pull down the source code for all of it. It is our installer to go out and deploy Nova. It's designed to work on bare metal; it'll work on VMs with a little bit of modification. It's a great way to get your feet wet with Nova, with a collection of software that's designed to all work together and some Chef recipes that make it all happen.

We can kind of go down two different paths here. It looks like we are a bit ahead of our planned time talk-wise, so we might have time to cover both. Sandy has some material that dives down into the Nova source code, how it's architected, as well as what a call looks like as it moves through the architecture that we highlighted a few minutes ago. Then we also have something that's a bit more Nova consumer-oriented, where we talk through the API calls themselves and see what the curl commands would look like to interact with the API. Do you guys want to try and cover both? Is there one that the room would definitely prefer to see? Raise your hand for the code structure stuff? Or the user, consumer side of it? Okay, more consumer. We'll talk through the API stuff first then.

The Nova API, just like everything within OpenStack, is RESTful. The first command we're going to look at is actually not the Nova API; it's an auth command to go out and authenticate against Keystone and get an authentication token. Any API conversation that you have with an OpenStack cloud is generally going to start with a call to Keystone to authenticate and get a token back. In subsequent calls, you'll then take that token and present it to whatever service you're talking to, to identify yourself and to say that you're authorized to use the service. Nova also has some commands that are pass-through, so you can make a call to the Nova API, for instance, to find out what images are available in your Glance repository. The first piece here is authenticating. You know, this is just a curl command; obviously, you can do this from your choice of programming languages. You're going to do a POST, you're going to provide some credentials, and you're going to send it to that /v2.0/tokens endpoint. This is saying, hey, I want an authentication token back. What you're going to get back is a big pile of JSON. It's going to include your authentication token, and it's also going to include your service catalog. There may be services in your cloud other than Nova, or there may be multiple Nova clouds that make up your cloud, and you may want to be able to send commands to different clouds based on, you know, availability zones for performance, or different environments for production versus test and dev. There may be a Swift environment that's attached to the same cloud as well. Once you've authenticated, now you actually want to figure out what you can do. So these two commands, they look all but identical, because they are all but identical. You'll find that there's a very elegant simplicity to the Nova API: when you want something, it has a very common English name, and you can go out and find it pretty quickly. The first of the two commands here returns a list of flavors that are available, which are the virtual machine sizes. A flavor is going to be a combination of the base disk size, the number of CPUs, and the amount of memory that would be assigned to that virtual machine.
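Since we're in consumer mode, here is a very rough sketch of those first two steps from Python, using the requests library instead of curl. The Keystone URL, username, password, and tenant name are all placeholders, and the exact credential block can differ between password-style and API-key-style auth, so treat this as the shape of the call rather than a copy-paste recipe.

```python
import requests

KEYSTONE_URL = "http://keystone.example.com:5000/v2.0/tokens"  # placeholder endpoint

payload = {
    "auth": {
        "passwordCredentials": {
            "username": "demo",            # hypothetical user
            "password": "devstack-secret"  # hypothetical password
        },
        "tenantName": "demo"               # hypothetical tenant/project
    }
}

resp = requests.post(KEYSTONE_URL, json=payload)
resp.raise_for_status()
access = resp.json()["access"]

# The token goes into the X-Auth-Token header on every subsequent call.
token = access["token"]["id"]

# The service catalog tells us where the compute (Nova) endpoint lives.
compute = next(s for s in access["serviceCatalog"] if s["type"] == "compute")
nova_url = compute["endpoints"][0]["publicURL"]

# First of the two listing calls: the flavors (VM sizes) available to us.
flavors = requests.get(nova_url + "/flavors",
                       headers={"X-Auth-Token": token}).json()
print([f["name"] for f in flavors["flavors"]])
```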
The second of those listing commands is going to return a list of images. These are the base images that you would use to boot from. For something like the Rackspace public cloud, it's going to be a fixed list, and it's going to say, I'm going to boot from Ubuntu and CentOS and Red Hat and Windows. For a private cloud, these are going to be the images that you create yourself. So if you've built your own Nova cloud on your laptop or on some hardware or wherever, you can go out and build whatever kind of base images you would like to use to run your applications. One of the best practices that we recommend around building images is to limit the number of them that you have. There are some great talks this week around using both Chef and Puppet; using something along those lines to automate deployment of your applications and limiting the number of images will simplify your life.

When you actually are ready to boot an instance up, this is the full-blown command that you would send in. This command identifies an image reference from that list that gets returned when you call for a list of images. Every image in that list will have a unique identifier, and the same goes for the list of flavors. And then here I'm pointing to a specific API endpoint in our public cloud in Chicago. And just to be clear on that, the flavors are your combination of... it's your server's information, like how much RAM, how many CPUs, all that information. So you can bundle all that stuff up beforehand and say I want 512 megs of RAM and so much disk and so much bandwidth or whatever, and manage all that stuff yourself. Questions? What's that? Yeah, you can control bandwidth in a flavor, I think. You cannot, not without an appropriate Quantum plugin. Okay. If you're just using nova-network in the traditional sense, that's not something you can control there. Every instance in that model is sharing the network connection of the hypervisor, so if you have 100 instances, you're not guaranteed a hundredth of it; you're all competing for that same pipe.

So the question is around this specific URL. With Rackspace Cloud Servers, the way that it's set up, that is my tenant ID, so that's the format that we use in Cloud Servers; in a more vanilla setup, that's going to be a UUID. But yeah, I'm saying that, okay, I want to... Nova allows me to have a user account that belongs to more than one tenant slash project. So I'm saying I want to boot this instance and have it be a member of this specific project; for Cloud Servers that's your account number. There it is. When you boot it, either through the Horizon dashboard, which is the web-based UI for OpenStack, or through the API, you can pass in data that is either going to be fed to cloud-init, or it might be data that's part of some sort of injection that you have set up, like on a Windows VM, to set an administrator password the first time the machine boots. There's a couple of different ways you can do it. So that's the most common way, the most important way: being able to assign metadata to your images and things and have that passed down when you boot the image. But also, you can pass in hints to the scheduler, and we'll go over that in a little bit. The scheduler is the part that looks at all the servers that are out there, all the hosts, and says, where do I want to place this thing?
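Before we get into the scheduler, here is a rough sketch of what that boot call might look like with the same token from earlier. The image and flavor references are placeholders; in practice you would pull the UUIDs out of your own /images and /flavors listings.

```python
import requests

nova_url = "https://nova.example.com/v2/<tenant-id>"  # placeholder; from the service catalog
token = "<token from Keystone>"                        # placeholder; from the auth call

server = {
    "server": {
        "name": "web01",
        "imageRef": "c1e9a9c6-0000-0000-0000-000000000000",  # hypothetical image UUID
        "flavorRef": "2",                                     # e.g. a 512 MB flavor
        "metadata": {"role": "webhead"},  # free-form key/value metadata
    }
}

resp = requests.post(nova_url + "/servers",
                     headers={"X-Auth-Token": token},
                     json=server)
resp.raise_for_status()
instance_id = resp.json()["server"]["id"]  # the UUID you use for everything after this
print(instance_id)
```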
So you can pass hints into the scheduler to say, I'd like a certain geographic region, or I'd like to have something with a GPU, or I'd like to have something with lots of RAM; you can give hints that way. That's a good question. The scheduler allows you to hook in; the scheduler works by weighing and then filtering... actually, it's filtering and then weighing. It looks at all the attributes that you're looking for, filters out all the hosts that can't support that, then weighs the rest of them, and then it picks one that way. Actually, that's not going to help you, because that will help you pick the host, but it's not going to change your priority queue. So no, you can't do that.

Yes, so the question is, can I specify some affinity rules? The stock set of filters that come with Nova, as of the Essex release, allow you to specify both same-host and different-host affinity. So if I have two VMs, you know, same application, and I want to ensure that they're on separate pieces of hardware, I can pass in a scheduler hint, and it's just going to be a list of instance IDs. I'm saying, I don't want this VM on the same hypervisor as any of these other VMs. Likewise, I can also say, I do want this instance on the same hypervisor as another particular instance. There's also a concept known as host aggregates, so you can make collections of hosts that have certain similar attributes. Not if you use the host aggregates, because then you can just specify the aggregate name. To use the host filter, yes, with the scheduler hint you would eventually have a really long list, and you may actually have more items in that list than you have hypervisors. The scheduler hints for filtering, for affinity, those were available as of Essex, so on just about any OpenStack cloud you're consuming today, you should be able to use those. Yeah, the scheduler is made so that you can drop in your own filters and weighing functions and all that sort of stuff relatively easily. Any other questions about that? Good question. The affinity rules are not. By default, the scheduler in the Rackspace public cloud will always try to put every new instance a customer creates on a different hypervisor, so for most folks, that's the behavior that they're looking for. If you have a use case where you're trying to keep all of your virtual machines on the same hypervisor for whatever reason, you can specify that in an API call, but the API server will ignore it.

I'm gonna jump over to... So once you've got an instance back from that curl command, you'll get a UUID instance ID, and that's your unique identifier for your instance. From there you can do all the normal operations, just like you saw with those examples. You can reboot a server, resize it. Migration is another one: you resize an instance to something larger, more memory or more disk or whatever, and generally that'll move it to another host as well if it needs to. Oh, is that right? We're gonna try messing around with this. Stay tuned. Gotta love that. I mean, really? No, I don't think I'm gonna do it. Oh, wow.
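Going back to the affinity question for a second, here is roughly what that boot request looks like with a different_host hint attached. The os:scheduler_hints key and the different_host filter are the stock names in the filter scheduler as of Essex, but the instance UUIDs are placeholders and the exact payload is worth checking against the docs for your release.

```python
import requests

nova_url = "https://nova.example.com/v2/<tenant-id>"  # placeholder; from the service catalog
token = "<token from Keystone>"                        # placeholder; from the auth call

# Same create-server call as before, plus a hint asking the filter
# scheduler to keep this instance off the hypervisors already running
# the listed instances (UUIDs below are placeholders).
server_with_hints = {
    "server": {
        "name": "web02",
        "imageRef": "c1e9a9c6-0000-0000-0000-000000000000",
        "flavorRef": "2",
    },
    "os:scheduler_hints": {
        "different_host": [
            "a0cf03a5-0000-0000-0000-000000000001",
            "8c19174f-0000-0000-0000-000000000002",
        ]
    },
}

resp = requests.post(nova_url + "/servers",
                     headers={"X-Auth-Token": token},
                     json=server_with_hints)
resp.raise_for_status()
```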
So when OpenStack was first started, some design tenets were established on how we should build this thing. Scalability and elasticity are the main goals, so obviously we need to... In the early days, I think there was a mandate of a million... I'm not gonna mess with it. We build the cloud and can't operate presentation software. It really is, I tell ya. Anyway, I'll go through this, and hopefully we can finagle it while we figure it out. So anything that isn't in service of those first two things had to be optional, and you'll see that a lot in the OpenStack design. Inside Nova, there's the core functionality, which is scheduling, the compute nodes, volumes, that sort of stuff. There's a whole bunch of other stuff that's available in there, and those are just optional components that you can deploy if you like.

We do everything asynchronously, so we try not to block on our calls. When the API needs to talk to the scheduler, or the scheduler needs to talk to a compute node, it sends a request out, and then it just goes away and does some other work, and eventually the call will come back. So you have to design your software that way as well, because blocking will kill you. Horizontally scalable, and that's where we get into cells and those discussions. What we do is we have a core deployment that we'll put out there, which is our scheduler and our compute nodes and whatnot, and then we sort of cookie-cutter that out for a larger deployment, so we can actually nest them into a hierarchy. A call will come in from the top and then propagate down to the child cells, and each cell will contain a database and a scheduler and a RabbitMQ and all that. So we don't need to build the big honking database and the big honking RabbitMQ system; we can just do little cookie-cutters of them and pass messages around. Shared nothing. State management is a big thing with anything of this size, so we always try to keep the state close to where the logic is working on it, and don't repeat yourself: we don't copy the state around. So when you're writing software for it, if you've got any questions about some of these design decisions and how you're implementing stuff, that's when you go out to the IRC channels or you go and talk to the tech leads in those different areas and say, how should I tackle this? Because very quickly, once you scale up to thousands of servers, you've got to start thinking about how many socket connections am I opening up to a database if I do this, or how many notifications am I putting on the queue?

In terms of running one of these clouds, queue management is the thing that keeps you up at night. The database is an important thing, but really, when you've got a lot of activity going on and you start to see your queues run up, then you start to panic and you say, well, what do we do about it? Fortunately, the way that OpenStack is designed, you just throw more workers at the problem: you can fire up more and have them process data off the queue. That's the whole idea behind distributing everything. Eventual consistency is another very important part of it. You may not get the exact right answer about what state your instance is in currently, but give it a couple of seconds and it'll get there eventually. So you have to be a little bit tolerant about that. And as a developer, you have to test everything. If you're submitting code, there have to be unit tests. There are integration tests that you can do on top of it to really give yourself some peace of mind and be able to sleep at night, but at least unit tests on everything.
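On the eventual-consistency point, from the consumer side that mostly just means polling instead of expecting an instant, definitive answer. A minimal sketch, again assuming the placeholder nova_url and token from the earlier examples:

```python
import time
import requests

nova_url = "https://nova.example.com/v2/<tenant-id>"  # placeholder; from the service catalog
token = "<token from Keystone>"                        # placeholder; from the auth call

def wait_for_active(server_id, timeout=300):
    """Poll a server until it reports ACTIVE or ERROR, or we give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get("%s/servers/%s" % (nova_url, server_id),
                            headers={"X-Auth-Token": token})
        status = resp.json()["server"]["status"]
        if status in ("ACTIVE", "ERROR"):
            return status
        time.sleep(5)  # the state will converge; give it a few seconds
    raise RuntimeError("timed out waiting for %s" % server_id)
```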
So that was, like I said, one of the very first documents that was written around the architecture of OpenStack, and it all still holds very true today. Yep. Right, so the question is, are there certain parts of OpenStack that don't do shared nothing, and there are. When we get into cells, the idea behind having a cell is that everything is its own little island, but there are pieces like Keystone, which is our authentication layer, that we need to share across cells. They can be different databases, but we want to have user profiles and tenancy and stuff like that in there. And that's something that we'd like to solve; we just don't have a solution for it yet. Yeah, that's a tough one.

I'm going to fight with this one more time. So let's walk through a flow here: just like we saw with one of those curl commands, what's actually happening? When we do a curl command, we're doing an HTTP request, so we're sending a request into a web server. You've probably got Apache, you might have Nginx or something as your web server, and that's something that you want to make highly available, so you're going to put your load balancers on it, and you're going to have lots of them. And if you get a lot of client requests coming through, very simple, right? Scaling that is a very simple thing to do. But those calls come in very rapidly, and just like we saw, the first thing that you're going to hit is the auth layer, Keystone. From there, you're going to get your service catalog back, and the service catalog is going to point you to one of those API servers. So the client knows, okay, I get that, and then I go over and talk to here. It doesn't know how to talk to that one directly, because that could change: web servers are going to come and go, and you're going to have a lot of them. OpenStack supports two APIs currently. There's the OpenStack API, which is a descendant of the Rackspace API, and then there's the Amazon EC2 API. So whichever interface you like, you can call each one accordingly.

So that's it, a call just came in. We don't want to do the work in the web server, because the web server is something where we just want to get in and get out of there. But we've got all these services on the back end that actually have to do the work. We've got the compute nodes that talk to the hypervisors; depending on the hypervisor you're using, we'll generally run the compute software right on the hypervisor in a little domU instance. And then you've got the volume managers and the network managers and so on as other services out there, and there's a bunch of other stuff you can put in there. But somehow they've got to talk: we've got to get that call that just came in out to that service. Oh, and of course, there's the pluggable architecture of it. Everything is optional, and these are all plug-in based, so I can change the drivers for any of these services if I'm using a different hypervisor or a different networking solution or whatever. So, sorry. Yeah, so the question is, what's Paste? Paste is a pain. It's really powerful, but it is daunting to look at. In Python, when you create a web server, there's a thing called the WSGI stack, the Web Server Gateway Interface, and what it allows you to do is intercept the call as it's coming in and put code in at every stage along the way.
So you want middleware in there that's going to check an auth token, or deal with an error that happens, and all of that can be done in the pipeline coming into it. Normally that's something that you would code by hand, but everyone has got different deployments, so Paste is a config tool that lets you define that stack in a config file. If you want to have Keystone as your auth, or you want to have another check in there, you can put that all right in your WSGI pipeline and intercept a lot of the calls coming in. So it's very powerful that way, but it is a bit of a thing to configure. There aren't a whole lot of other parts of it that actually use an HTTP interface, and I think they might... someone could correct me on this, I think they're Paste-based. But some of them use... so Ceilometer uses Flask, and there are different ones. No.

So that call comes in. Any other questions about that? So the call comes in, and we have to make them talk. That's where Rabbit comes in here. Is everyone familiar with AMQP, RabbitMQ, queuing systems? Okay. For those who aren't, queuing systems are like radio stations. All the services out there are listening to these radio stations, and they tune in to what they're interested in. There are topics that they're interested in, they tune into them, and when messages come in, they just go, oh, look, that's one that I want, and one of them can grab it and do work on it. So it doesn't go out to all of them; they don't all listen to it. And if that service can process it, it says, I got it, you carry on. But if it can't, if it fails, if it crashes, then that message is not acknowledged, and the Rabbit server will say, okay, I'm going to try someone else; I know there's another worker out there, and it's going to take care of it for me. So that's how we horizontally scale this thing: we've got Rabbit in there, all the messages go through it, and we can just hang more services off of that bus as the calls come through.

So now we need to get them connected. Every service has a corresponding API, and when I say an API, don't think of it like a REST interface. It's just a Python file, which is the place that you go to talk to it. If you look in the code, you'll look at the compute directory where the compute code sits, and there'll be an api file in there, and that's the thing that you talk to. So if I'm in the web server and I want to talk to the volume node, I import volume.api and I make a call on that, and it'll take care of all that magic stuff about getting it onto Rabbit and marshaling up the parameters and sending it in and dealing with all that. So for every service, when you look through the directories, look for that api file, and you'll see what you can do on those services. It's a real simple thing. Behind the scenes, when you implement it, that's where the drivers come in; the drivers can deal with it differently. But at the API level, when someone wants to consume those resources, the API side imports that api file for all those different services, and that takes care of putting it on the bus. So one of the common questions we get about this is, well, why put Rabbit in there? It's an RPC call; why do you need all that heavy lifting in there? And what it gives you is a buffer under heavy load. So it's an RPC call with this incredible buffer behind it.
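Just to make the Paste/WSGI idea a little more concrete before we carry on: a piece of middleware in that pipeline is simply a callable wrapped around the next application. This is a toy sketch, not Nova's actual auth middleware, and the header check is deliberately simplistic.

```python
class CheckAuthToken(object):
    """Toy WSGI middleware: reject requests that carry no X-Auth-Token.

    In a Paste config this would just be one more filter in the pipeline
    sitting in front of the Nova API application.
    """

    def __init__(self, app):
        self.app = app  # the next WSGI app/filter in the pipeline

    def __call__(self, environ, start_response):
        if "HTTP_X_AUTH_TOKEN" not in environ:
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"missing X-Auth-Token\n"]
        # Token present: hand the request on down the pipeline.
        return self.app(environ, start_response)
```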
Back to that buffer: if things get busy, or there aren't enough workers, the messages will just build up in the queue, and then eventually the workers will pick away at them. So you can handle those spikes. If you were architecting this using some other binary protocol to try and get stuff on the wire, you'd have to make every service able to scale and deal with those calls separately, so you get into heavy threading operations and a whole bunch of synchronization issues, and it gets a little wonky. With this, even though it seems unwieldy up front, it gives you that... you know when you see a pipe of water going down a hill, and they have that big surge tower coming out of it, so when they close the gates, the water can shoot up into the tower? It's the same analogy here. You've got a lot of calls coming through, you can't deal with them, they back up, and then they flow through eventually. And it keeps them in mostly the same order; obviously, there are just some little differences in there. Right, so the question is, how do you do a priority queue if you want something to jump ahead? You can create whatever queues you want. There are some places we do that, but generally what you would do is make another queue and say, if something comes in on that, I'll deal with it first. But we try and keep everything relatively balanced. Right, yeah, that would be a neat thing to do.

So now we want to create an instance. Is that cool so far? You can just import them all into one file. I mean, you wouldn't physically combine them, but you would import them all in one place if you needed to. So how are we doing on time? Are we cool? We've got 11 minutes. Okay, I'll fly through this relatively quickly. So let's go through that boot command. We want to create an instance now, so the call came in. We know we want to send it off to a compute node. We probably need to set up our network first, we probably need to set up our volume, we need to pick a host; we need to do all that decision-making beforehand. That's not something the API should be doing, because the API should just be dealing with HTTP. So this is where we bring in the scheduler. The scheduler sits up there, and the first stop for a create-instance call is to go up to the scheduler. One thing that the scheduler does is listen to all the hosts that are out there, and it asks, well, how are you? How much memory do you have free? How much disk do you have available? Are you busy right now? Because they can be doing resizes, they can be doing a whole bunch of stuff, and the hypervisor will complain if you pile more on top of that. So the scheduler keeps track of all that information about how the compute nodes are doing. And then it says, okay, you're the best candidate, and that's where we get into the weighing and filtering functions that we just talked about a few minutes ago. It makes that decision, and it just sticks the message back on the queue again the same way, by importing that compute API, and then it makes its way out to the compute node, which does the work. The compute node actually goes out to volume, goes out to network, and says, I need this, I need that, and gets all those resources.
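The filter-then-weigh decision from a moment ago is simple enough to sketch in a few lines. This isn't Nova's actual scheduler code, just the shape of the idea: throw away hosts that can't satisfy the request, score what's left, and take the best one.

```python
def pick_host(hosts, requested_ram_mb):
    """Toy filter-and-weigh scheduler decision."""
    # Filtering: drop hosts that can't support the request at all.
    candidates = [h for h in hosts if h["free_ram_mb"] >= requested_ram_mb]
    if not candidates:
        raise RuntimeError("no valid host found")
    # Weighing: rank what's left (here, simply by free RAM) and pick the best.
    return max(candidates, key=lambda h: h["free_ram_mb"])

hosts = [
    {"name": "compute1", "free_ram_mb": 2048},
    {"name": "compute2", "free_ram_mb": 8192},
]
print(pick_host(hosts, 4096)["name"])  # -> compute2
```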
So when we get into a cell deployment, all that happens is there's a cell scheduler as well. When we get into a hierarchy of these deployments, the cell scheduler says, I can't deal with this, and passes it down to the most appropriate cell, and the exact same dance happens again until it eventually makes its way to a host and the instance gets created. All throughout this operation, any service can write onto a particular queue called the notification queue. It puts an event out there that says, here's what I just did: I changed the state of this instance, it's building, or it's rebuilding, or whatever; or this operation failed, or anything. So those notifications make it onto the queue, and then you can have consumers that pick them up. One of them is a thing called Yagi, and that's not part of core OpenStack; it's a side project that you can add to it. What Yagi will do is take all those notifications and put them into a PubSubHubbub system. Are you familiar with PubSubHubbub? PubSubHubbub will take data and turn it into RSS feeds, so you can have multiple consumers coming off that, and you don't have to hook your consumers directly up to the notifications. Instead, you can get an RSS feed of what's happening in your system, and multiple consumers can come in, listen to that, and pick stuff off at different rates, which is pretty cool. Ceilometer, if I'm saying it correctly, operates the same way. Ceilometer will consume those notifications and do billing and usage and all that other stuff as well. And you can write those notifications to multiple queues. If you have another service of your own that needs to watch what's happening out there to make important decisions, then you can use the notifications to do that. So it's a handy little hook to have.

That's pretty well it. That's how you boot an instance. We'll take questions, but, I mean, also, if you like, we can dive into the code a little bit more. It's all pretty well the same. The API is the same; I mean, there might be some slight changes in there, but what happens is there's one plug-in driver for it that now instead goes out and talks to Cinder, or goes out and talks to Quantum, or whatever. It'll go through Rabbit, but from there it will proxy out, you know, forward the call out to the other service. Exactly. Exactly. It's like a protocol converter. Yep. Yet another hop. Right. And there are discussions about that: do we want to use a REST interface to it, or is there some other thing we should do to be faster? Because, you know, web servers are nice, but you have to scale them, right? You've got to buy a big load balancer and you've got to do all that stuff. It gives you flexibility, but there's a trade-off in terms of how you scale it. Most just use one. So this is the architectural deployment decision that you have to make: do I want to cluster my Rabbit and do all that and make one big honking cell, or do I do a Rabbit per cell and just forward down to the next cell? Right. It's actually not really a function of Rabbit. The limitation on a cell usually comes from the switch: how many MAC addresses can you store in the switch? It's usually a network limitation. So you can partition it up however you like; a couple hundred hosts, maybe two or three hundred hosts per cell, is probably a good metric.
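One quick aside on those notifications before we wrap up: a consumer is just something sitting on the notification queue. Here's a rough sketch using the kombu library; the broker URL and the notifications.info queue name are assumptions about a fairly default setup, so check your own nova.conf for the real values.

```python
from kombu import Connection

# Broker URL and queue name are deployment-specific assumptions.
BROKER_URL = "amqp://guest:guest@localhost//"
QUEUE_NAME = "notifications.info"

with Connection(BROKER_URL) as conn:
    queue = conn.SimpleQueue(QUEUE_NAME)
    while True:
        msg = queue.get(block=True)
        event = msg.payload  # a dict with event_type, payload, timestamp, etc.
        print(event.get("event_type"), event.get("timestamp"))
        msg.ack()            # unacknowledged messages get redelivered to another worker
```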
All of the tools that you might use to interact with a Nova cloud are going to be talking to the Nova API. So whether you use the Python nova client, curl, or Horizon, they're all simply talking to the Nova API. The database is authoritative, so any of those things will come through and talk to the database, and that's where it gets its... That's right. So the state is managed in the compute node itself, and if a conflicting command comes in, the compute node will say, no, you can't do that right now. Correct. Yes. Just what it is, or... So REST is just HTTP, but it's stateless: you don't carry any session state between calls, and every call is a single atomic operation. Right? Yeah. I think that's it for our session here, but we'll be just outside if you want to ask more questions or anything. Thanks. Thanks. Thank you.