Okay. Good morning or good afternoon, everyone. My name is Stephen Gordon. I'm a Senior Technical Product Manager at Red Hat for OpenStack Compute, so I'm hoping I know something about that and can share it with you today. Hopefully we all learn something. I should highlight that this is more of a community-focused presentation, so I'm not talking about product-specific bits here, just the general OpenStack upstream community bits. Today I'm going to give an overview of what Compute is and where it fits into the OpenStack infrastructure offering. I'm going to go through the instance lifecycle, so creating an instance and what is actually happening behind the scenes to make that happen. I'm going to talk a little bit about the different compute drivers that are available within the Compute project, and how we go about exposing information that people can use to choose between drivers as they put their environment together. I'm going to talk about scaling compute resources and segregating compute resources, and I'm going to treat those as separate topics; I hope I'll be able to explain why when I get to it. And finally, I'm going to talk about a couple of new things in Kilo, a couple of big-ticket items that I want to focus on. For those who caught my talk yesterday about the KVM driver, the first section of this is going to be a little bit of a repeat, but we'll go slightly deeper in areas we didn't touch on yesterday as well. So what is OpenStack? Hopefully we have a bit of an understanding of this already, but it's a group of related projects that, when combined, form an open-source infrastructure-as-a-service cloud platform, and I'm going to talk a little bit more about that on the next slide. It's intended to be massively scalable, so most if not all of the OpenStack projects scale horizontally: when you want to scale up, you effectively add more nodes. And with that in mind, we also have a modular architecture, which allows us to use a variety of back ends. In the Compute case, we're talking here about virtualization back ends, container back ends, and also bare metal for that matter. When I divide these up in my way of thinking, I think of the infrastructure-as-a-service layer on the bottom there as being primarily the Compute project itself, some combination of storage depending on your needs, so not necessarily always volumes or always images, and then networking. I also have bare metal down there at the moment. Ironic is a separate project from Nova, but I still see it very much as part of the compute layer in terms of what I deal with on a day-to-day basis. And then there's what I classify as infrastructure as a service plus: a number of projects within OpenStack that build on top of those lower-level APIs, so things like Ceilometer for monitoring, Sahara for spinning up Hadoop clusters quickly, Heat for orchestration, and so on. Compute was one of the original OpenStack projects, along with Object Storage. It exposes a rich API for provisioning compute instances and managing their lifecycle. It is pluggable; there are many back ends supported, and I want to highlight that it's what I would refer to as relatively solution agnostic. What I mean by that is that the Compute API, as it was originally envisaged, is still somewhat VM-centric, but we do have back ends for things like LXC, bare metal obviously, and other solutions like that.
So it's not specifically a virtualization abstraction; it can be used for containers and bare metal as well. As we saw in the keynotes this morning, though, there are some things, particularly around containers, where you want to expose additional or different functionality in the API, and that's why we see efforts in that space. Zooming in, the Compute project is actually made up of a number of services, not a single service. At the top level we have our API service, nova-api, exposing a RESTful interface for other components or users to talk to, and there's also a provided Python client for interacting with it. All of our communication within the project is via an RPC message bus, most often RabbitMQ at the moment, and that's used for communication between the different services represented here. We have nova-conductor, which is responsible for taking the instance build request, asking the scheduler to find where the instance is going to be placed, and also for interacting with the database on behalf of the compute nodes. The reason the conductor was originally put in is so that you didn't need to have the database credentials on every single compute node; the compute nodes talk to the database through the conductor. And nova-compute, on the right-hand side there, acts as our compute agent. In the case of the libvirt driver, it runs on the hypervisor, talking directly to libvirt on that machine; in the case of something like the vCenter driver, it talks to vCenter through its own native APIs. There's a grab bag of other services within, or related to, the Nova project that I wanted to touch on briefly. We have the metadata service; people in the room for the talk prior to this would have heard a lot about cloud-init, and the metadata service is at least partially responsible for exposing that information to the guest so that you can configure your cloud instance on launch. We also still have the nova-network service, the traditional legacy networking model in OpenStack. If you are instead using Neutron, which is moving more in the direction of a software-defined networking solution, or at least an abstraction layer for one, there will typically be a layer 2 agent on your compute node, so an agent for handling the layer 2 traffic and setting up connectivity, for example the Open vSwitch agent or the Linux bridge agent; many of the Neutron plugins have a similar or equivalent agent. There's also a Ceilometer agent which will sometimes run on the compute nodes, if you're trying to collect monitoring information from them using Ceilometer. I should highlight also that there is, and has been basically for its entire existence, an in-tree implementation of the EC2 API, the Amazon EC2 API. I'll talk a little bit in the Kilo feature section about some efforts that are underway to come up with a new way to build that so it's more maintainable; as it currently stands, it has a number of gaps in parity with the EC2 API as we know it. Finally, there are a number of console authentication and proxy services to allow users to connect to guests using VNC, Spice, and also RDP. So I'm going to try and walk through the instance lifecycle here, in terms of creating a guest from the command line and what I have to do to get that going.
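To make that service breakdown concrete, here's a rough sketch of what listing the Nova services looks like on a running deployment; the host names and IDs shown are hypothetical:

    $ nova service-list
    +----+----------------+-----------+----------+---------+-------+
    | Id | Binary         | Host      | Zone     | Status  | State |
    +----+----------------+-----------+----------+---------+-------+
    | 1  | nova-conductor | control-0 | internal | enabled | up    |
    | 2  | nova-scheduler | control-0 | internal | enabled | up    |
    | 3  | nova-compute   | compute-0 | nova     | enabled | up    |
    +----+----------------+-----------+----------+---------+-------+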
So in the first instance, in an OpenStack environment, this is just a deployment I've stood up, and I get my Keystone credentials file. Of particular interest here, I have the username demo, I have a password which in my case is auto-generated by the deployment tool, and I have the authentication URL. That URL is the endpoint the client is going to use for Keystone; it's also the only endpoint the client actually needs to know in order to find the rest of them. So in a minute it will authenticate me with the Keystone server, get me a token, retrieve the list of available services in my cloud, retrieve the list of endpoints associated with those, and then find my Nova Compute endpoint, that API endpoint we talked about before, from that list. In terms of a minimal instance creation, which is what I'm going to focus on today, you actually only need to specify a flavor and an image, and for most people's needs, optionally a network interface and also a name for your instance. A flavor effectively determines the size of your instance: the number of virtual CPUs, the amount of virtual RAM, the size of the disk it will be given, and also some other information that we can give in the flavor, which I'll touch on in a moment. The image determines the disk image that we'll be using to boot the instance. You can also use a Cinder volume; we won't be touching on that so much today, but that is an option to be aware of. So if I do my Glance image list, I have two images in my Glance repository. I should mention that the nova command also exposes a nova image-list command; that's actually simply proxying the Glance API, so the information is coming from the same place. In this particular case, I've decided to do an image show on my RHEL 7.1 server image. The most important thing from a user's perspective to note here is the minimum disk size. What it's telling me is that this particular image has a minimum disk size of 10 gig. So when I try and deploy this, it's going to expand to have a root partition of 10 gig, and that's important when we come to the flavor sizing, because if I choose a flavor with less space than that, it's going to fail. Talking about flavor selection, the original idea is that it simplifies the process of packing virtual machine instances onto physical hosts: the largest flavor is typically twice the size, in CPU, RAM, and disk, of the next largest flavor, and so on. Administrators may want to customize this depending on the workload patterns they're seeing; if you're finding that you're leaving a lot of CPU or RAM on the table because of the types of workloads you're running, you may want to customize the out-of-the-box flavors. Also, because I want to put the slides up on SlideShare, the link there goes to the architecture design guide that was worked on by a number of us with the Foundation, and it has a broader section on this topic of flavor selection. Looking at the list of flavors I have in my cloud, you'll notice, as I said, that m1.large is half the size of m1.xlarge, m1.medium is half the size of m1.large, and so on. You'll notice that m1.tiny breaks this model a little bit: instead of being half the size of m1.small, it's actually a quarter of the size. It also has only a one gig disk requirement, for really small images like the CirrOS image that we use for testing. When I do a show on that flavor, I can find out some more details about it.
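As a rough sketch, here's what that credentials file and the image and flavor inspection look like at the command line; the values shown are hypothetical, not from the demo environment itself:

    $ cat keystonerc_demo
    export OS_USERNAME=demo
    export OS_PASSWORD=<auto-generated by the deployment tool>
    export OS_TENANT_NAME=demo
    export OS_AUTH_URL=http://192.0.2.10:5000/v2.0/
    $ source keystonerc_demo
    $ glance image-list
    $ glance image-show rhel-7.1-server    # note min_disk: 10, so pick a flavor with >= 10 GB disk
    $ nova flavor-list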
The one I want to highlight is the extra specifications key in there, the extra specs. What that allows us to do, as a cloud administrator, is enable some of the more advanced features we have in Nova. For instance, things like CPU pinning, huge pages, and other special configuration options are implemented by changing the extra specs and exposing the change to users that way. So you might want to create, for example, a special high-performance flavor that allows those instances to use special hardware. Finally, I'm just going to do a neutron net-list and grab the ID for the private network in this particular case. And then I'm going to do my nova boot. So I'm specifying on the boot command line my m1.small flavor, my RHEL 7.1 server image, and the network ID I got from the previous screen. Now, almost instantly, the Python client will give me a response that says task state of scheduling, VM state of building. That doesn't mean that your VM is done yet; basically, all that's been created at this point is a database entry saying you're trying to create one. So what just happened? The client transparently retrieves a token from the Keystone API and the list of endpoints for my available services. The client also, as it's currently implemented, does a client-side check of the image identifier and flavor identifier I provided; those will also be checked API-side in a second. And then the client generates a request for the compute endpoint in JSON format. There also used to be the option, if you were building an application against the Nova API, to use XML; it is a JSON-only implementation now. So in the JSON output that the client is sending to the API server, we see the name of my instance, my image reference, and my flavor reference. It has actually converted those to identifiers: I provided the name m1.small, and that's been converted to the ID of 2. It also provides a max count and min count. In this specific instance I was just creating one virtual machine, but you can actually request in a single boot command to create multiples. And finally, also the network ID. So that's sent off to the Nova API service. Nova API then extracts the parameters for basic validation, so that's the server-side validation I talked about. It creates references to the flavor and the boot media, be it an image or a volume, and then it saves the instance state to the database. That's where that database record is first created to say someone's creating an instance. It then puts a message on that RPC queue, RabbitMQ typically, for the conductor to pick up to actually build the instance. The API call returns at that point, and that's when I get my output. So the conductor asks the scheduler where to build the instance, that is, to select a host. The default scheduler implementation is called the filter scheduler. There are a couple of other implementations currently included, like the chance scheduler, and there's also a lot of work going on in the scheduler at the moment to make it easier to customize in different ways, and to abstract the scheduler out a little bit with cleaner APIs. The filter scheduler works by applying a series of filters and weights to the hosts in the environment. An example of a filter would be something like: is the host on? Does it have enough vCPUs? Does it have enough RAM? And so on.
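For reference, a minimal boot request like the one described above looks roughly like this; the network UUID is a hypothetical placeholder, and the JSON is a trimmed sketch of the body the client POSTs to the compute endpoint:

    $ nova boot --flavor m1.small --image rhel-7.1-server \
          --nic net-id=5a2e4b55-0000-4000-8000-000000000000 myinstance
    # Trimmed sketch of the JSON the client sends to POST /v2/<tenant_id>/servers:
    # {"server": {"name": "myinstance",
    #             "imageRef": "<image UUID>",
    #             "flavorRef": "2",
    #             "min_count": 1,
    #             "max_count": 1,
    #             "networks": [{"uuid": "<network UUID>"}]}}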
And then we get into some more advanced filters, like the NUMA topology filter, which does some special handling to try and align our guests with the NUMA topology of the host, and that kind of thing. Weight examples: there are actually only a couple of weighers currently in the codebase, the most common one being the RAM weigher, where as an administrator I can determine whether I want my scheduler to try and pack all of my instances as tightly as possible onto a smaller group of hosts, or to spread them out widely across the environment. In terms of an example, in this particular case I have three hosts: one, two, and three. I apply a series of filters in the scheduler, and as a result host two is dropped; maybe host two didn't have enough vCPUs, or host two wasn't on. As a result, I have two hosts going to the weigher, hosts one and three. The weigher determines that host three is actually the preferable host to run on here, and as a result the order is flipped: host three is the first selection, host one is the second selection. One of the most common things that people run into and ask questions about on ask.openstack.org is the "No valid host" error: they've put in an instance request and it's failed because we couldn't find a host in the scheduler. Typically it's for a good reason, not necessarily a bug, but it can be non-obvious, because as the user you don't get output as to which filter failed. If I enable debug = True in my Nova configuration, I do now get a lot more output about what exactly is going on. In this particular example, we see I start with three hosts on the first line. The retry filter runs, and I still have three hosts. The availability zone filter runs, and I still have three hosts. And then finally the RAM filter kicks one of my hosts out of the list. So at that point I know that the reason that particular host was eliminated was that it didn't have enough free RAM. Finally, the instance state is updated in the database, and the conductor places a message on the queue for nova-compute, the compute agent that actually runs on the hypervisor, on the selected compute node. So the compute agent is ready to prepare our instance for launch. It calls Glance or Cinder, this time to retrieve the boot media itself and actually mount it if necessary. It calls Neutron or nova-network to get the network port and also the security group information; the security group information is basically the firewall rules as to what we're going to allow into our instance, typically implemented using iptables currently. It attaches the Cinder volume and sets up the configuration drive if necessary. The configuration drive is an alternative way to provide metadata to an instance instead of using the metadata server over the network: you can have this little drive that you populate with some JSON or some cloud-config information, and it's attached to the instance as it boots. Finally, we use the hypervisor APIs to create the virtual machine. This is where the driver code becomes more important: the driver is responsible for talking to either libvirt or vCenter or whatever API it is for the hypervisor back end to create the virtual machine. And finally, hopefully, if everything went well, we update the virtual machine state in the database using the conductor. So I mentioned compute drivers and the large number of back ends we have.
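If you want to reproduce that filter-by-filter output, the relevant knobs live in nova.conf; a minimal sketch, assuming the Kilo-era option names and the default filter scheduler:

    # /etc/nova/nova.conf (excerpt)
    [DEFAULT]
    debug = True
    # The filters applied, in order, to each scheduling request:
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter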
So I wanted to focus on the tools we have available to help people decide which driver to use. We have two, really. First, there's the driver testing status, that is, whether or not there are tests in the OpenStack gate. When a commit comes in, we run a series of checks in the gate, and if everything goes well, we say plus one, let's merge that code, it looks good. To do that, of course, we have to have actual valid tests, so what I'm referring to here is whether the driver is one of the ones that's tested in that system and gating, meaning the tests have to vote plus one before code is merged. Second, we have the hypervisor support matrix: does this driver support particular actions x, y, and z? And we have a long list of those. The driver testing status is multi-tiered. There's group A, fully supported: it's got both unit and functional tests, and they're gating on code commits. We have a middle ground, group B: there is some test coverage in the gate, but it's not fully complete, or the driver is functionally tested by an external system that is not in the gate but does comment on patches. So it's not necessarily gating; code that affects that driver can get in without being plus-oned by those tests, but the driver still has some functional tests that live elsewhere. And finally, we have group C: drivers that have limited testing, that aren't gating commits at all, and that have no public CI that we're aware of. There is also the potential for a hidden group here, where a driver exists completely out of tree and is unknown to the community, which may or may not happen. There is a wiki page where this is explained in a little more detail, and of course you get the full listing of the drivers and where they fall in that classification. We also have the hypervisor support matrix, which lists mandatory and optional driver capabilities. I mentioned that we have a wide range of technologies supported by Nova, between virtual machines, containers, and bare metal. Not all of the actions the Nova API supports make sense for all of those drivers, so as a result, some of the actions are optional. For instance, launching an instance, for obvious reasons, is mandatory, but attaching a block volume is not; it's optional. Some people, even with the normal KVM driver or something like that, may not want to use volumes at all. In terms of the number of drivers represented there, we have 11-plus in-tree drivers, including a number of variants of the libvirt driver for different architectures and platforms. We have a series of out-of-tree drivers that live on StackForge; a lot of people have probably heard of the nova-docker driver, for example, which is on StackForge there. And then, as I said before, others may exist in vendor repositories and things like that. So talking now about scaling Compute: I mentioned at the start that it's horizontally scalable. In this particular diagram, what I'm trying to illustrate is that I've added a load balancer in front of the API service and then simply created more instances of each of the services: more instances of the API service, more instances of the conductor, more compute hypervisors themselves, and more instances of the scheduler. The message queue and database we scale using the clustering mechanisms that they support natively. The one thing I wanted to note is that the scheduler does need to be scaled up a little more carefully; there are some things you have to tweak to do that well.
The reason for that is that if you have two or more independent instances of the scheduler running and they get a request to schedule an instance at the same time, they're looking at the same state of the environment, so they're going to make the same decision. And the result of that, of course, is that they're both going to put their virtual machines on the same host and potentially overload it. What you can do there instead is have the schedulers pick from a range of hosts. So in my example before, instead of just returning host three, the scheduler would return host three and host one, for example; and typically you'd use a much wider range, returning something like 50 hosts and then picking randomly from those, to try and avoid the schedulers clashing with each other. You also scale up the scheduler at a much lower rate than the other services; you don't typically need many scheduler instances, as a small number, fewer than five say, can handle the throughput and volume needed to schedule a lot of instances. Once we get beyond that scale, we talk about cells. In that previous diagram, when I look at the message queue and the database, they're kind of the odd ones out, because they're not horizontally scaling in quite the same way that my OpenStack services are. So eventually you typically find that the message queue, in particular, gets pretty busy. Nova cells is intended to help divide that workload up a little bit: you divide the compute installation up into cells. You're still behind that one top-level API endpoint in the API cell, but each of those cells has within it a separate message queue and database, so we're spreading that load out a little bit. The API cell handles incoming requests and then schedules to a compute cell; that's done by this extra service we run called nova-cells. The pros of this: we maintain a single compute endpoint, unlike switching to something like regions, which I'll talk about in a minute. We relieve pressure on the queues and databases at scale, because we're spreading them out across the compute infrastructure. And we do introduce an additional layer of scheduling, so we can make some smarter choices there. The cons: there's a lack of cell awareness in all of the other projects. This typically first becomes an issue when you deal with the networking side of things, because that's the next scaling bottleneck, but none of the other projects really know much about cells; it's a very Compute-specific concept at the moment. There's minimal test coverage in the gate, although this has improved somewhat of late. And historically there has typically been some standard functionality that was broken, primarily because cells was implemented as kind of a bolt-on to the rest of Compute, so we have these duplicated code paths: there's one path for what most people are running, and then there's the path for cells. Cells version 2, on which a lot of development was done in the Kilo cycle and which is expected to be a focus for Liberty as well, is currently under development and aims to fix some of these issues. The core idea is to make every compute installation what we call a cell of one, and then those users who want to scale up add additional cells. What we're talking about there is collapsing those code paths so that everyone's running the same thing and we're testing it in the gate.
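That "pick randomly from a range of hosts" behavior is a single nova.conf setting; a sketch, assuming the Kilo-era option name:

    # /etc/nova/nova.conf (excerpt)
    [DEFAULT]
    # Pick randomly from the 50 best-weighed hosts instead of always
    # taking the single top host, so parallel schedulers don't collide.
    scheduler_host_subset_size = 50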
Segregation of compute resources: exposing logical groupings of compute hosts based on geographical region, data center, and so on is one reason to do it. We can also do it to expose special capabilities, so high-performance NICs, storage, special devices, etc. Basically, when it comes down to it, the divisions mean whatever the admin wants them to mean. One way of doing this is using regions. In this case, we share as few or as many services as we want between those regions, typically just Keystone and Horizon. The regions implement their own targetable API endpoints. What that means is that as a user, I actually have to specify which region I want to talk to when I'm trying to schedule instances. So if we look at this, on the command line I have to specify my region name, or I can use the dropdown in the Horizon dashboard. Host aggregates are logical groupings of hosts based on metadata. Typically, the metadata describes capabilities those hosts expose: SSD disks, PCI devices, that kind of thing. Hosts can be in multiple host aggregates. So I can have my host aggregate for SSD storage and my host aggregate for 40-gig interfaces, and some of my hosts may have both of those things, so they end up in both groups. They're what I refer to as implicitly user targetable, in that unlike, say, the region, you don't as a user go in and specify "I want the aggregate with this name" for your host, not typically anyway. Instead, the admin creates a flavor with some metadata associated with it and matches that to the aggregate metadata. So in my particular example here, on the first line I create an aggregate of hypervisors with SSDs; my aggregate create got the ID of 1. I set metadata on it, which is completely arbitrary, it can be whatever the admin wants, but in this case I use ssds as the key and true as the value. And I add my host to that new aggregate. And then finally, with the flavor key, I set aggregate_instance_extra_specs ssds equal to true. What the scheduler is then going to try to do is use the aggregate instance extra specs filter to match my flavor request with an aggregate that has that metadata. Availability zones are slightly different, but not as different as you would think on face value, particularly coming from something like Amazon and the concepts they have there. Availability zones are logical groupings of hosts based on arbitrary factors, particularly location, network layout, power source, some level of failure domain within your compute installation or even within a region. They're what I call explicitly user targetable: I can explicitly say on the boot command line, I want availability zone rack one. If I don't specify an availability zone, the admin does have the ability to specify a default, although that can cause some interesting balancing issues, because obviously if users aren't specifying where they want their instances to go, they all end up in the default. What's a little confusing about this is that an availability zone actually is a host aggregate; it's just a host aggregate with a special marker on it, effectively. So to create an availability zone, I do an aggregate create just like I did before, specifying the aggregate name and then the availability zone name that I'm exposing it as. This causes some interesting differences in the way it's handled. In particular, unlike a host aggregate, a host cannot be in multiple availability zones; it has to be in one or the other.
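Spelled out as commands, the SSD example above looks roughly like this; the aggregate ID, host name, and flavor name are hypothetical placeholders:

    $ nova aggregate-create ssd-hypervisors        # assume this returns aggregate ID 1
    $ nova aggregate-set-metadata 1 ssds=true
    $ nova aggregate-add-host 1 compute-3
    # Requires AggregateInstanceExtraSpecsFilter in the scheduler's filter list:
    $ nova flavor-key m1.ssd set aggregate_instance_extra_specs:ssds=true
    # An availability zone is the same create call with a zone name added:
    $ nova aggregate-create rack1-aggregate rack1
    $ nova boot --availability-zone rack1 --flavor m1.small --image rhel-7.1-server myinstance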
So I'm just going to walk through a quick diagrammatic example to try and illustrate that. In this particular case, I have region A and region B. I've carved my compute installation up into two regions, sharing a Keystone and Horizon instance in this case, and I have six hosts in each of these regions. I'm going to drop some AZs on top of those. So I'm deciding that, for instance, region A and region B are both data centers, and within those data centers, let's say I have two separate power sources, and all my boxes within each AZ are on a separate power source. Within that, I may have some hosts that I want to expose for performance reasons because they have SSDs, so I create a host aggregate and a flavor around that. And then let's say I also have some with 10-gig NICs. And then finally, just because I'm really spending my money here, most of the hosts in that region also have GPUs available for passthrough. So as you can see, hosts can be in multiple host aggregates, they can be in an AZ, and they can have all of that within a region. And I could just as easily have done the same across in the other region as well, with the same host aggregates. So I want to zoom in on a couple of things that are new in Kilo, or were worked on significantly in Kilo. API microversions is a big one. We've had version 2 of the Compute API, which is the one that pretty much everyone uses now, for quite some time. There was a plan for it to be superseded by a new version of the API, version 3, but it was ultimately determined that doing that would be too difficult, given the impact on existing users and also the developer overhead in trying to do it as a big all-in-one switch. But we still have problems with v2. In particular, it's extended by adding extensions to the base API, and even, at this point, extensions on extensions. So we have things where there's an extension to add one particular bit of information to, say, a server show, and then another extension on top of that to add the next bit, and so on. The aim of microversions is to make it possible to keep evolving the API incrementally, but to do it in a much cleaner way that makes it easier to do properly. We also want to provide backwards compatibility for existing REST API users, which was obviously one of the reasons not to do the big-bang v3 in the first place. So the idea is that we'll have a versioning system in the API of the form X.Y, where X is basically not intended to change, only due to significantly backwards-incompatible API changes, which we basically don't want to make at all, but it's there as a safety valve. And Y will be changed whenever we make a change to the API. So for example, at the bottom there, we have the header a client will be able to use to express the Nova API version it supports. Obviously, existing clients aren't going to send that, but we know where the API was at when we left off, so we can assume they're at that baseline. In the case where we specified version 2.114, we're assuming as a client that when we get a response, the server supports not only whatever feature was implemented in 114 but also the ones that came before it, so 113, 112, 111, and so on. We do have an initial implementation available in Kilo, but it's kind of a separate code path at the moment.
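As an illustration of that header, a client opting into a microversion sends it with each request; a sketch using curl, with a hypothetical endpoint and token, and the 2.114 version from the example above:

    $ curl http://192.0.2.10:8774/v2.1/<tenant_id>/servers \
          -H "X-Auth-Token: <token>" \
          -H "X-OpenStack-Nova-API-Version: 2.114"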
So the v2 API code will still be used to serve v2-format API requests, the ones without that header, effectively. The plan is that in Liberty, the v2.1 API, the one that uses microversions, will actually serve the requests both for existing clients that use v2 and for new clients that say they support 2.1. The version 2 API is actually now frozen, so new features are being added using these microversions now. And the Nova client does not yet support v2.1, but will in the Liberty cycle at this stage. We also added support for vCPU pinning; I talked quite a bit about this yesterday in the KVM driver update, but basically we're allowing assignment of vCPU cores, and the associated emulator threads, to dedicated physical CPU cores, for performance reasons, in environments that want to do that. The administrator defines a set of hosts that accept dedicated resource requests, so hosts on which they want to allow pinning, keeping in mind that the trade-off is that you're not allowing over-commit of memory on those hosts, or of CPU for that matter. They reserve some cores for the guests to run on using a combination of the existing isolcpus kernel parameter, which basically says take these cores out of the pool that the kernel scheduler uses for placing processes and only allow processes that are explicitly pinned to go to those cores, and Nova's vcpu_pin_set config variable, which tells Nova to do the opposite of what we just told the kernel: you must pin all your guests to these cores. And then we create a flavor and matching host aggregates to set all that up and wire it together, from the point of view of exposing it to the user. We also added support for huge pages, the ability, on the x86 architecture at least, to use two-megabyte or one-gigabyte huge pages to back the memory of your guests. Again, over-commit is not possible if you do this, so we use segregated hosts for this purpose, typically in a separate host aggregate. The administrator has to reserve the huge pages to provide to guests; you can do this either at system boot time using kernel arguments, or you can allocate them at runtime. The problem at runtime, of course, is that, particularly for the one-gig pages, there's quite a large chance you won't be able to allocate those pages if the memory is already fragmented or in use. And finally, once that's set up, the hw:mem_page_size key can be set in the flavor extra specifications to say what size page we use to back the guests. I/O-based NUMA scheduling is a fairly minor tweak to the NUMA topology filter. The NUMA topology filter is already responsible for trying, in a smart way, to align guests with the NUMA topology of the host, the way the CPUs, memory, and so on are laid out. Modern chipsets also have the ability to align a PCIe lane with a specific socket on the server, and in that case, if you're running on a different socket and trying to access that I/O device, you're going to get worse performance. So the idea of this change is that where we're passing through an SR-IOV device in particular, so a networking device, we want to make sure that we put the guest on the socket that's connected to that particular device's lane. I mentioned in passing earlier the EC2 API, the fact that there is an implementation in the Nova tree. There's been an effort underway for a little while now to provide an implementation of the EC2 API on StackForge, to basically start over with that.
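Pulled together, the pinning and huge-page setup described above looks roughly like this; the core range and flavor name are purely illustrative:

    # Host side: take cores 2-7 away from the general kernel scheduler
    # via the kernel boot parameter: isolcpus=2-7
    # /etc/nova/nova.conf on the dedicated hosts:
    #   [DEFAULT]
    #   vcpu_pin_set = 2-7
    # Flavor side: dedicated pinning plus 1 GB pages (value is in KB):
    $ nova flavor-key m1.performance set hw:cpu_policy=dedicated
    $ nova flavor-key m1.performance set hw:mem_page_size=1048576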
So the intent is to provide it as a standalone service and also to extend beyond the Nova EC2 implementation as it currently exists, which covers only the EC2 API itself, and kind of a subset of that at best. The intent is also to add support for the Virtual Private Cloud (VPC) API and a number of other things. Towards the end of the Kilo cycle, there was an early 0.1 release of that, which people can try to set up and play with. It includes coverage of the EC2 API at least equivalent to what was in Nova to begin with, as well as the VPC API, and filtering, tags, and paging as other features of the API that it now supports. Then there's kind of a grab bag of storage enhancements I just wanted to highlight. There's the ability to do consistent snapshots using the QEMU guest agent, by freezing and thawing the guest file system; that's using the agent inside the guest to do it. You have to set some image properties to set this up properly, but once it's there, it's done implicitly as part of the snapshot command where available. We've got driver support for using the built-in QEMU iSCSI initiator, which allows direct attachment of volumes to guests. It avoids some issues with the way we were doing it before, using the host's iSCSI initiator, which meant the volumes were all basically in what you would refer to as the same namespace or the same storage space. So this cleans up the directory structure on the host, the way things are logged, and what happens during failures on one particular guest's iSCSI connection; all in all, it primarily makes for easier troubleshooting when something goes wrong. The vCenter driver has been extended to support VSAN datastores and also ephemeral disks. And then there's libvirt and Hyper-V support for SMB volumes as well, the ones common in Windows environments in particular. Finally, there is some new in-tree driver support. When I listed the drivers before, I didn't really highlight this, but they were in the list: we now have in-tree driver support for KVM on IBM System z and also for Parallels Cloud Server. So that's the end of what I had prepared today. Thank you all for coming, and I'll take any questions people have, since I think we have some time. So the question was: is there any way to expose the VT bits to instances created using the Nova API? I think you mean nested virtualization, right? Yeah. I do it on my systems. It's not really done through the Nova API so much as you enable it on the host, and once it's there, it's there. So in the nova.conf, there's a virt_type configuration variable for the libvirt driver at least, and it can be kvm or qemu, but that's a host-wide setting, right? So in the nested case, you would set it to kvm, but it's for everything. Anyone else? OK. Thanks a lot for your time.
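For anyone following up on that last question, the host-side setup being referred to is roughly this; a sketch, assuming Intel hardware and the libvirt driver:

    # Enable nested KVM on the physical host (Intel):
    $ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf
    $ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
    # /etc/nova/nova.conf, host-wide setting for the libvirt driver:
    #   [libvirt]
    #   virt_type = kvm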