So I guess it's time, and I'm going to get started. Hang on a second; let me introduce him real quick. This is, as you'd expect, a very, very popular talk, not just here at the OpenStack Summit. Dave has given variations of this talk about four times; the last time I saw him do one was at linux.conf.au in January, in Australia. So this is about provisioning bare metal with OpenStack. Please give Dave a very warm welcome.

So first of all, this is not a talk about Razor, Cobbler, or any of the other things which provision bare metal but are not OpenStack. This is about the Nova bare metal driver, and using that to manage bare metal. I'm going to go into a lot of technical detail about the current status of the driver, which is in the Grizzly release. I'll cover the background and all of that, how deployment works, and then, if you want to develop or test it or do a proof of concept, how to get involved and how to set that up yourself.

A quick background on who I am. I began working with OpenStack about 15, maybe 17 months ago. Before that, I was a database consultant working with MySQL and Percona and so on for a long time, in the whole scaling, high availability, and performance world for about the past nine years.

As for the bare metal driver: there was something in trunk a long time ago that kind of fell by the wayside. Before the Grizzly Summit, teams from NTT Docomo and USC-ISI were working on a new driver. They did some work on it; it looked pretty interesting but was kind of hard to digest. So a group of us from HP Cloud Services started playing with it, dug into it, and with their blessing took it over, refactored it, and got it into trunk. I've been working a lot with the Nova team over the last six months to do that. I've also been working with the OpenStack Infra team to start thinking about testing: how to test bare metal, or how to use it to help facilitate their test infrastructure. We saw potential in the bare metal driver. It's interesting in and of itself, but we also saw potential to do some other things with it, TripleO in particular (deploying OpenStack with OpenStack), which I'm not going to talk too much about.

I want to talk for a few slides about why someone might have implemented yet another way to provision bare metal, because you already have MAAS and Cobbler and Crowbar and Razor and so on, and a lot of people have used their own home-rolled PXE environments. So what's different about this? Why do people do it in the first place? There's already an API for deploying your infrastructure, and sometimes the things you want to deploy don't fit really well inside virtual machines. Maybe you have high-performance compute workloads, or you need PCI passthrough for specialized PCI devices that don't virtualize yet, or you want to run databases, and it turns out databases like having direct access to physical disks for performance and consistency. Or you're just a big hosting company and your customers want to own physical machines, not run in virtual machines and share resources. So this is probably why people began writing this in OpenStack: to give the same API to control physical machines. Why do I find it interesting? One of my things is that I'm tired of reinventing wheels.
We all seem to do that a lot, and have for a long time, and as an IT community we all move forward when we come together on something, stop reinventing the same thing in our own little corners, and share. That's what OpenStack has been doing for a lot of infrastructure management. So I'd like to see the bare metal driver as that coalescing point for all these other projects: a standard API to control physical and virtual machines in the same way. And there's a lot of work that's been done by different groups over the last 15 or 20 years on how to bootstrap an infrastructure, how to scale a PXE boot environment, and how to deploy images instead of running installers on every node. This is what we're doing in the bare metal driver, and it opens up some new possibilities like, as I said, TripleO, installing OpenStack by using OpenStack, which Robert Collins talked about on Monday. There have been a few other sessions on it, so I won't talk too much about that today.

So what is bare metal? Like I said, it's a driver for OpenStack Nova, and it fills this same space... oh, that slide didn't show up very well. This is actually a cutout from a much larger slide of the Folsom design; it's just not showing up on the projector. Lovely. So it's a hypervisor driver for Nova compute, and it fits in the same code space as all the other drivers, libvirt and Xen and so on. But it's fundamentally different from those other hypervisor drivers, because there is no hypervisor with bare metal. It is just a nova-compute process that is talking PXE and IPMI to control physical machines and deploy images onto them.

So, roughly, a normal Nova compute host might look like this: you've got your physical machine, you've got an operating system and a Nova compute agent, and it's talking to the hypervisor to control a bunch of virtual machines. This is a well-known quantity with lots of well-known issues; it works great for a lot of things and it's not great for some things, but that's understood. This is different. We've got one host up here that's relatively lightweight, running the nova-compute process and whatever operating system, and it's talking PXE and IPMI in the same way that another hypervisor driver would be powering things on and off. Start VM, stop VM becomes IPMI start, stop, restart. Instead of copying an image from Glance and then using that as the base disk for a VM, we just use PXE to deploy the whole machine image, operating system and applications, directly onto hardware, and then boot that.

Now, this slide might give you the impression that with one single compute host up here you can control a whole lot of hardware, all those little machines down there, and that's kind of true. You could, right now, deploy, let's say, a whole rack from one compute node at the top, or even just one VM with its own handy little OpenStack. But that doesn't give you HA, so it would be a little bit wrong to say you should. It might work, but one compute host should not rule the whole infrastructure. I'll talk a little bit more about that later, towards the end, because right now we don't have HA support for the bare metal compute host in Grizzly; it's on the roadmap, hopefully for Havana.
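Just to make that concrete, here's roughly the kind of conversation the IPMI side amounts to, sketched as plain ipmitool commands with a made-up BMC address and credentials. The driver builds these calls itself (I believe it actually passes the password via a temporary file rather than on the command line), so treat this as an illustration of the protocol, not the driver's literal invocation:

```bash
# Hypothetical BMC address and credentials, for illustration only.
BMC=10.1.2.3; IPMI_USER=admin; IPMI_PASS=secret

ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" power status
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" chassis bootdev pxe   # network boot next time
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" power on              # the "start VM" equivalent
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" power off             # the "stop VM" equivalent
```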
Oh, also, there's a bottleneck right now. If you were to try to do concurrent deployment to, say, 100 nodes at once, the single compute host doing that deployment would probably be somewhat of a bottleneck, because it's doing a lot of work on local disk, copying files and whatnot over the network. So if you were to deploy this at scale right now, with what's in Grizzly, you'd probably want some ratio of bare metal compute hosts to nodes to spread the load out. That's not to say that bare metal compute hosts are doing anything after provisioning; they're basically idle after provisioning. But they're very busy while provisioning a host.

To get all this into Nova during the Grizzly cycle, we had to change some things. I'm not really happy about some of them, but some architectural things just didn't fit right. Particularly, the concept that one compute process was one host. A host had a bunch of resources, and it would say to the scheduler, I've got 100 gigs of RAM and 16 cores, and you could allocate a lot of small VMs, a few big VMs, or one really big VM taking up all the resources. But with bare metal, you can't ever allocate part of a host, because it's the whole host or nothing. So one of the changes in Grizzly was that the scheduler host manager and the resource tracker now track things as (host, node), or synonymously (host, hypervisor hostname). All the other hypervisor drivers that I'm aware of just use a placeholder for node, zero or an empty string or something, whereas with bare metal, that's actually the UUID we assigned to the physical machine. And if you were to look at the list of compute nodes in a running bare metal cloud right now, you would see one compute node for every physical host being managed.

We also added a new schema to Nova. Those of you who are familiar with nova-conductor know they moved all the database access out of nova-compute. Around the same time, we added database access for the nova_bm schema. So if you're running bare metal and conductor (not local conductor), there's also the other connection, to nova_bm, coming from the compute process. It shouldn't make a difference, and there's no security implication, because your tenants are never running on the compute host; they're running on the bare metal nodes over there. Just be aware that there is a different database, so if you have to do migrations, or you want to have HA, there's another schema. We also had to do some work on database migration testing in the CI process to support having multiple database backends inside Nova. So that took a little work.

There's a lot of different types of hardware in the world, whether it's HP hardware or Dell hardware or Supermicro or anything else. So the bare metal driver is implemented as a set of pluggable sub-drivers, plus some glue code that puts it all together and coordinates when you build the pre-boot environment, when you actually start the machine, how to roll back a deploy, things like that. So right now in trunk, in Grizzly I mean, we've got a pluggable power driver, a pluggable image driver, and pluggable volume and network drivers. There's only one each of the volume and network drivers, and they're kind of just stubs, but they're there. Power and imaging is where the real work has gone on.
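As a rough sketch of what that pluggability looks like in configuration: the sub-drivers and the separate nova_bm database are selected in a [baremetal] option group in nova.conf. The class paths and option names below are from my memory of the Grizzly-era tree, so treat them as approximate and check the sample config for your release:

```bash
# A minimal sketch, appended to nova.conf; class paths approximate.
cat >> /etc/nova/nova.conf <<'EOF'
[baremetal]
# Deployment (imaging) sub-driver; PXE is the one that works in Grizzly.
driver = nova.virt.baremetal.pxe.PXE
# Power sub-driver: IPMI for real hardware, or a virtual power driver for testing.
power_manager = nova.virt.baremetal.ipmi.IPMI
# The separate nova_bm schema; any SQLAlchemy connection string works.
sql_connection = mysql://nova:secret@localhost/nova_bm
EOF
```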
So I've had a lot of requests from people: will we support hardware discovery, or BIOS management, or firmware management, doing firmware upgrades using the bare metal driver? What about talking directly to iLO or DRAC? Will we have add-ons for vendor-specific interfaces? I want to. None of this is in Grizzly, but I really want all of it, so it's on the roadmap for soon. I can't say when.

There are some external dependencies that are not shared with the rest of Nova; if you're running bare metal compute, these things also have to get installed on the compute host. There's a conflict right now with Quantum, or the project formerly known as Quantum: we have to run our own dnsmasq process to do PXE booting, because it doesn't yet know how. There have been some sessions on that; I think we've sorted it out, and probably in trunk very soon we'll be able to stop running our own dnsmasq process and let OpenStack Networking do that for us. We need ipmitool, of course, if you're using the IPMI power sub-driver; open-iscsi, if you're using the PXE image deployment driver; and syslinux, also for the PXE driver.

I'm going to spend a couple of slides talking about some of the details of those IPMI and PXE drivers, because that's what's in Grizzly and works. The power sub-driver in particular, IPMI: this requires that you enroll the hardware with bare metal so it knows what's out there. You have to tell it what the IPMI information is so it can reach out and turn machines on and off. It doesn't know how to discover that itself currently, and it doesn't do anything like resetting IPMI passwords; you have to just know those and provide them, for now. We also have a virtual power driver, which is great for testing: we can test all of this with virtual machines that don't actually have IPMI, and it fits in the same plug point inside the bare metal driver. And then, I think it was the NTT Docomo team that's been wanting Tilera support, so that's in trunk right now, just not in Grizzly: there is Tilera PDU support.

The deployment driver is what does the actual imaging. The PXE driver is the only one in Grizzly; there is also the Tilera driver in trunk right now. I'll talk at some length about the PXE driver and how it works in a few more slides. We have some integration with OpenStack Networking, like I mentioned: that's used for allocating IP addresses to ports, but not yet for the actual PXE boot. That's coming pretty soon. And there's another API extension to add and delete nodes, and to add interfaces and associate an interface with a node. You can also show status: which bare metal nodes are provisioned, which are active, and any deploys that are in progress; you can see the status of them.

If you were deploying the driver, this is basically what you'd have to add to your nova.conf to turn it on. You have to change the compute driver, of course. You have to change the scheduler host manager; Vish has told me that we shouldn't have done it that way, we should have used a filter, but it worked. You have to change the firewall driver to no-op, because, again, there are no tenants running on this host, so you don't need a firewall on the compute host to manage your tenants' networks; they're over there on physical machines, not where the compute driver is running. And you also have to change these last two bits, the RAM allocation ratio and the reserved host memory, because, again, there are no VMs running on the host. You don't need to reserve memory for the host, because there's nothing taking memory away from it. And the RAM allocation ratio, if it's not 1 (the default is 1.5, by the way), messes up the scheduler trying to find physical nodes. You tell the scheduler, find me a node with 16 gigs of RAM, and it says, I should find one with 24. And, of course, there isn't one with 24, because your physical machine only has 16, and it fails.
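Put together, the nova.conf changes just described look roughly like this. These values follow the Grizzly-era documentation as best I recall it, so verify them against your release:

```bash
cat >> /etc/nova/nova.conf <<'EOF'
# Swap the hypervisor driver for the bare metal driver.
compute_driver = nova.virt.baremetal.driver.BareMetalDriver
# The (host, node) aware host manager mentioned earlier.
scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
# No tenants ever run on this host, so no firewall is needed here.
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# Whole physical nodes are allocated, so never overcommit or reserve RAM.
ram_allocation_ratio = 1.0
reserved_host_memory_mb = 0
EOF
```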
These next settings are for the bare metal driver itself, again in nova.conf, but in a different config option group. I've said a couple of times that OpenStack Networking doesn't support PXE boot yet, so we have to do static IP file injection; as soon as it does, this will change back to the default, and that option can be removed. There's the default power manager: IPMI, or virtual, or PDU. Instance type extra specs are how you tell the scheduler what that compute host is managing. Is it managing a bunch of i386 machines? Is it managing 64-bit hardware? Is it managing ARM hardware? We're not testing on ARM yet, but as far as I'm aware the driver should perfectly well support ARM, so if someone really wants to work on that, that would be awesome. And you have to tell Nova where the nova_bm schema is, how to connect to it, what database it is. Oh, and it doesn't have to be MySQL; it could be Postgres or whatever. That's just a SQLAlchemy connection string.

So next, I'm going to talk about deployment, and what the process is of throwing a whole bunch of images out onto a bunch of machines, turning them on, and letting them run whatever it is you're deploying. You know, you're deploying a Hadoop cluster, or your own infrastructure images, or a base Ubuntu image that has, let's say, a Salt minion in there, or Chef, and letting those take over and do all the configuration after deployment. I'm not going to talk about that stuff; that's up to you. Just the act of writing the images out to the machines: what does that look like?

Before you can do that, you need an OpenStack cloud, right? The bare metal driver is part of Nova, so you need a running cloud with Glance and Swift and Nova and Keystone to be able to deploy onto physical machines. Now, there are roughly three different ways to get there. One is DevStack. How many people have used DevStack? Awesome. Wonderful. So then you should all know that if you restart DevStack, you lose all your data. That's probably not the best thing if you want to be managing your hardware infrastructure, but it's good for testing. Second, we, as in the TripleO team at HP (part of HP Cloud), have this little project up on StackForge called diskimage-builder, and a related one called tripleo-image-elements, that basically build disk images from chroots using run-parts-style scripting in bash. Pretty easy to work with. And one of the elements is just called boot-stack (because it's not DevStack, it's boot-stack), and it builds an image for a VM that contains all the parts of an OpenStack cloud that you need, including the bare metal driver. So you can just take that image and use it to manage your hardware. Or, third, if you have your own cloud, or your own tooling to build a small cloud, you can just do that and enable the bare metal driver that way. And that might be good if you want to run this long-term, not just as a proof of concept.

There are some operational requirements. You need to map your hardware to the flavors that you define in Nova. If you've got a machine with 16 gigs of RAM and four CPUs, a relatively small machine, you need to inform Nova of those specifications, and the scheduler filter, when you're booting a node, will match that physical node to the flavor you requested. If it can't find one left, the boot won't happen.
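Here's a sketch of that mapping with the nova CLI. The flavor name and numbers are hypothetical, and the cpu_arch matching is how the Grizzly-era docs describe it, so double-check the exact key for your release:

```bash
# name, auto ID, RAM in MB, disk in GB, vCPUs: these must match the hardware you enroll.
nova flavor-create my-baremetal-flavor auto 16384 500 4

# Tag the architecture so the scheduler can match this flavor to enrolled nodes.
nova flavor-key my-baremetal-flavor set cpu_arch=x86_64

# The compute host declares what it manages with the instance_type_extra_specs
# option in nova.conf, e.g.:  instance_type_extra_specs = cpu_arch:x86_64
```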
You also need to load some images into Glance. There are a couple of different types of images that need to be in Glance for bare metal deploys to work. In particular, you need a deploy kernel and ramdisk; I'll talk more about what those do in a bit. Suffice to say, we can provide them, particularly if your hardware has different needs. And then, of course, you need a machine image, which could just be an Ubuntu cloud image you download from Ubuntu. You can customize it; it can be pretty much anything. Yeah, customizing it is pretty useful.

Current prerequisites (and I have a spec up to change all of this): you have to have the hardware already configured with whatever you want to run. Bare metal is not going to do the hardware configuration for you. You need the hardware inventory; you currently need to know the MAC address, number of CPUs, RAM, and disks. And you need all the IPMI information (IP address, username, password) to control the machine. You enroll all of that with bare metal using the API extension I mentioned earlier. And, of course, you could script that; you can do whatever you want. You could even just insert it into the database, and the bare metal driver would notice the new information in the database and go from there.

This is the API; here are the commands to actually do that. This is an example of the API extension. If you're creating a node, you have to specify the host name here. Actually, that's the wrong word; it's not really a host name. It's the name of the compute node which will manage that physical machine. So if you had a larger infrastructure, say multiple racks with one bare metal compute host per rack managing that rack, then when you enroll a bare metal node, you can specify which of those compute nodes manages it with that field. And then, you know, CPUs, RAM, disk. And then for every NIC, every interface on that node, you have to enroll that interface as well.

When you enroll a node like this, it takes a few minutes for that information to propagate all the way through Nova, because it's propagated via periodic tasks which run every 60 seconds, and there are a couple of them. So it can take between 10 or 20 seconds at best and maybe three minutes before you can actually boot that machine. We can fix that; it's just propagating through Nova compute, the scheduler, the resource tracker, and it takes a few cycles.

The actual deploy process looks roughly like this. Once everything's enrolled and set up and you have all the images and the flavors defined, it starts with nova boot, with whatever other things you might say there: the flavor, the image, your SSH key name, and so on. The scheduler will select the compute host, and which node managed by that host matches your request. Nova compute will prepare the bootloader, do all these things, power the machine on, and reboot it into the image. So there are two power cycles: one to turn it on and hand out the deploy image, and, after the image is deployed, a reboot into it.
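In CLI form, the whole enroll-and-boot sequence looks roughly like this. The argument order follows the Grizzly-era walkthrough as best I recall it, and every value here is made up:

```bash
# Enroll one machine: which compute host manages it, CPUs, RAM (MB), disk (GB),
# the MAC of its first NIC, and the IPMI coordinates.
nova baremetal-node-create --pm_address=10.1.2.3 --pm_user=admin --pm_password=secret \
    my-compute-host 4 16384 500 aa:bb:cc:dd:ee:ff

# Enroll any additional NICs against the node ID returned above (1 here).
nova baremetal-interface-add 1 aa:bb:cc:dd:ee:f0

# Wait a few minutes for the periodic tasks to propagate, then boot as usual.
nova boot --flavor my-baremetal-flavor --image my-machine-image \
    --key-name my-key my-first-baremetal-node
```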
The whole thing looks like this, and I'm going to walk through it for a minute. Starting out with nova boot up there: a message goes to the API, through the message queue to the scheduler, and the node filter finds a node. From there, it gets passed down to Nova compute, and in particular it calls driver.spawn; this is inside the virt layer in Nova. That is going to reach out to... oops, that should be removed. That's OpenStack Networking up there: plug the virtual interfaces. Fetch images from Glance; pull the images down to the local disk. Oh, I'm sorry, first, down here: it gets the physical information about the node from the bare metal database and reserves it. So the node has been reserved at that point. Then it downloads the images, prepares the pre-boot environment, activates the bootloader, and sends the IPMI power-on message. A DHCP request comes back; I'll talk about all that in a minute. When it's done deploying, it reboots.

And this process down here, I should have mentioned this too: in addition to running the nova-compute process, you also need to run nova-baremetal-deploy-helper. That's responsible for actually copying the image over iSCSI to the physical machine. When it's all done, it updates the status of the node, which in turn updates the instance state, and you can see via the Nova API that your deploy completed and the instance is available to you.

Now, inside this process here that I kind of glossed over, what it looks like is more like this. The blue up here is stuff happening inside the nova-compute process, and the red down here is the bare metal deploy helper, a separate process. Driver.spawn fetches the images from Glance, both the deploy images and the user-specified image, builds the TFTP config, and sends the power-on message. DHCP boot request: deploy kernel and ramdisk. There's some specialized code in there that exposes the node's local disks via iSCSI back to the bare metal deploy helper, and sends back a message saying, here is your iSCSI endpoint. The deploy helper mounts it, does some minimal partitioning, copies the image over, and sends a message back saying, I'm done, reboot. And a little daemon over here reboots the machine. The last DHCP request serves the user kernel and ramdisk that match the image the user wanted to boot. And then, of course, cloud-init kicks in, because this machine is now running in a cloud, and all the normal things cloud-init can do happen. One of the benefits of using our disk image builder is embedding all the cloud-init stuff and more, so you can use Heat, for example. So if Heat drives the bare metal deployment, then at this last stage here, Heat takes over and can configure the machine. It's pretty cool.
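To give a feel for that iSCSI handoff, here's what the deploy helper's side roughly amounts to, reduced to plain shell. The real nova-baremetal-deploy-helper does this in Python, with proper partitioning and error handling; the target IQN, addresses, and paths here are invented:

```bash
# The deploy ramdisk on the node has exported its local disk as an iSCSI target.
# Discover it and log in from the deploy helper's side:
iscsiadm -m discovery -t sendtargets -p 10.2.0.42:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:deploy -p 10.2.0.42:3260 --login

# The node's disk now appears as a local block device; write the image onto it.
dd if=/var/lib/nova/baremetal/images/user-image.raw \
   of=/dev/disk/by-path/ip-10.2.0.42:3260-iscsi-iqn.2010-10.org.openstack:deploy-lun-1 \
   bs=1M oflag=direct

# Log out; the little daemon in the ramdisk then reboots into the new image.
iscsiadm -m node -T iqn.2010-10.org.openstack:deploy -p 10.2.0.42:3260 --logout
```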
So let's move on and talk about the development process, if you want to get the full details or, ideally, get involved with us and work on it yourself. There's a full walkthrough where that link resolves to, which is our GitHub project, tripleo/incubator, in the notes file. This is where we just shove all of our developers' notes on how we set up our own development environments. It starts out with a minimal OpenStack cloud with the bare metal driver enabled, as I mentioned. One option is DevStack, and that should work. Another option: you could download it from here. I don't guarantee that link will work. Or rather, the link should work; I don't guarantee that what you download will work. You can also build it yourself. This is under heavy, heavy development, so it may change rapidly and it may break at any time. This is our development environment. The result of this last script here, boot-elements, is a disk image containing an OpenStack cloud with the bare metal driver enabled, plus some of our code in there to help facilitate starting additional things. Also, we have our own CI process, our own Jenkins, so when we change what's in diskimage-builder, it rebuilds, and the last successful build of boot-stack is available at that link.

After you have that downloaded or built locally, you need some VMs, some virtual machines, which will mock the hardware. They'll act like they're physical resources, but you can do all of this on your laptop. I have it all on my laptop; I don't carry a whole bunch of other machines around with me, but I want to test it right here. So I can make some virtual machines by hand, and in our notes we explain what the requirements are: use the e1000 network device, this much memory, whatever. We also have this cheesy little Python script that can build them for you. It's pretty small, but it's useful to automate things instead of doing it by hand every time.

You don't actually need to build your own deploy ramdisk; both DevStack and the boot-stack image will build a ramdisk for you on the fly when they start up. But you do need a cloud image, and I actually have not recently tested just using a stock Ubuntu cloud image; we always customize it a little bit. You can do the customization yourself using disk image builder (specify what architecture you want, i386 or 64-bit, and a file name), or download one. And then you put it all together: start the boot-stack VM, load the images that you downloaded or built, and enroll the virtual machines with the virtual driver, which means getting their MAC addresses. Oh, and this test path doesn't use IPMI, right? This uses the virtual power driver, which actually SSHes out of the first virtual machine to your host and issues virsh commands to start and stop the VMs.

What if you want to test on real hardware? Well, the boot-stack element or DevStack, either one, can control real hardware. Maybe not in the best way, but it'll do it; it's a good proof of concept. The only difference is going to be the networking, because both DevStack and the boot-stack element use a local bridge on your laptop with pre-specified IP ranges that we know about, because they're standard and easy to work with. If you're using real hardware, you may have a different physical network that you need to set up in OpenStack Networking and bare metal. That shouldn't be too much work. But sometimes other things are different, too. Maybe you have a Mellanox card in your hardware, and the stock Precise cloud image doesn't have the Mellanox driver enabled. So you have to customize it a bit. Great: you can use disk image builder to customize that and enable other drivers. Yay.
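For reference, building a customized machine image with diskimage-builder looks something like this. The StackForge location is where the project lived around the time of this talk, and the element names are my assumption, so check the project's README for what's actually available:

```bash
git clone https://github.com/stackforge/diskimage-builder.git
cd diskimage-builder

# Build a 64-bit whole-disk image; the "vm" element adds the partitioning and
# bootloader bits, and extra elements would layer in things like other drivers.
bin/disk-image-create -a amd64 -o my-machine-image vm base
```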
So I'll talk a little bit about our plans for the Havana cycle, just at a very high level. One that is really interesting, that people keep asking me about, is automatic discovery. If I have a bunch of hardware, can I just use bare metal to discover it? Or if I have a running bare metal cloud and I add more hardware, could it just automatically figure that out and enroll it? Wouldn't that be awesome? We want to do that. I'd really like to be able to do firmware management, whether that's a BIOS upgrade, or changing some RAID settings, or whatever. Also, maybe add some plugins to support vendor add-ons. We're nowhere near that yet, but I want to do it. And lastly, high availability for the bare metal deploy node itself. Right now, there's a one-to-many relationship where one bare metal deployment node manages all of the physical hardware, and if that bare metal manager goes away, no one is managing those nodes. So we want to add some HA for that, and we'll probably do that in the next few months. It's also possible NTT is already working on it, because we've talked with them a little about it.

Here are some links to further reading. I will put the slides up on my Twitter feed, and I have a few minutes for questions, I think.

So the question was, is there any SAN or direct SAN networking support? Not yet. There's a lot of interest; a lot of people keep asking about it. I don't think I'm going to do it myself, but you're welcome to contribute it.

Next: is there any way to get around stuffing my IPMI credentials somewhere else? Oh, I would love that. That would actually be easy to do right now if you were to implement a different power driver. The power driver is pluggable, so you could have an IPMI remote power driver that does that.

So the question was, why are we using iSCSI to write the image instead of just having the deploy ramdisk pull it down from Glance? The answer is, we've thought about that and have plans to do it, and we might, in fact, do that. But even better than that: let's use BitTorrent to pull it down. Ten thousand nodes at once, and you're not copying the image 10,000 times from Glance; it can trickle out, and they can all share it with each other. These are optimizations we want to get to; we just haven't gotten to them yet.

Does Glance need to have the exact hardware spec? Glance doesn't build the images, no. We build the image; basically it's an Ubuntu cloud image that has been slightly customized for your environment, and that gets loaded into Glance. And yes, there is some hardware specificity, for example the CPU architecture, or any particular drivers that may or may not autoload, like Mellanox or some special RAID driver or PCI driver. But things like the number of CPUs or how much RAM shouldn't make a difference; that should be easily portable.

So the question is, why does it matter if the configuration and the hardware are mismatched? It actually doesn't matter what the hardware is; it matters what you enrolled. When you enroll a node, you're telling Nova that this machine, with this MAC address and this IPMI information, has these hardware specs: this much RAM, this much disk, this many CPUs. All that actually matters is what you enrolled, the information you gave Nova. That has to match the flavor, because the Nova scheduler is going to compare those two, and otherwise it will not match the node.

So the question was, could bare metal support snapshots in the way virtual machines do, and then, having taken a snapshot, use that for further deployments? The simple answer, I think, is that there's no hypervisor, so how would we take a snapshot? The longer answer is, we could actually reboot into a different ramdisk which would clone the disk and upload it to Glance. But that's not a running snapshot; that's a stop-the-machine snapshot.

So the question was about something I had said: the compute nodes, the machines running Nova compute with the bare metal driver, could there be many of those in the same zone, each managing different hardware? And the answer is, I'm pretty confident yes, but I haven't tested it recently. In principle, yes.
The important thing to know is that at the moment, with what's in Grizzly, there's no sharing of what they manage. If you've got 500 machines and each compute host is managing, let's say, 100 of those, and one of those compute hosts goes down, no one is left managing its 100 machines. One of the features we're going to be adding, hopefully soon in Havana, is takeover.

Not yet; people keep asking. Please add it, I'll welcome the code. The question is, when you're issuing nova boot, can you target a specific physical machine? That would be really easy to add in a Nova API extension with a tiny little addition to the scheduler filters. There is already an extension for force host, but in this case one host manages many nodes, so we'd just have to change the force host hint to also say force host and node. It could be done.

So the question was, have we tried using this to deploy an image which contains a hypervisor like KVM, and then adding that back into OpenStack to deploy virtual machines onto it? The answer is, that's exactly what we're doing. That is TripleO: using OpenStack bare metal to deploy machine images which themselves contain OpenStack compute and KVM. Now, right now, there's a separation of clouds; one cloud can't have both bare metal and virtualized hypervisors. There are some discussions about supporting that, but what you can do is use bare metal to manage the infrastructure and deploy a cloud, and then just talk to the second cloud through a different endpoint.

Any more questions? Yes, inception, bootstrapping: I'm going to pull myself up by my bootstraps. Yeah, so the goal of this is actually continuous deployment. Once you've started with the bootstrap node, and it has deployed the bare metal compute nodes, those then deploy a virtualized cloud, and you can just keep rolling and redeploying yourself for upgrades: deploy a new one, take over; deploy a new one, take over; and so on. That's the goal, yes.

Right now it's a small team from HP, NTT Docomo, and USC-ISI, and I welcome everybody to get involved with this. It might create a huge headache if everyone does, but come on, the more the merrier. And I think I'm just about out of time. Thank you.