With this deployment, of the two nodes, the controller and the compute node, we containerize the compute node. So going through this, here's a bit of an overview of TripleO, the first of the two projects I'm going to cover. You can really think of TripleO, and a lot of this diagram, as OpenStack on OpenStack. When you look at the bottom piece, this is what we call the undercloud, or the install cloud. The general use case here is that you have a very specific cloud in mind: you know what it's going to end up as, you know what you're going to install. You install that cloud so that you can leverage the pieces of OpenStack, the APIs, to install a more flexible cloud that you can configure and hand off to a user. That second cloud, the one on top, is called the overcloud. You can just think of that as the user cloud. That's the flexible one that you're configuring, and that's your end goal, your target.

So the Kolla project. Kolla is a fairly new project relative to OpenStack; it was probably only adopted into OpenStack maybe five, six months ago or so. Kolla is an Ansible-based deployment tool that deploys containerized OpenStack services. Kolla has had a lot of success within the OpenStack community, a lot of community growth recently. The way it works is it goes through two pieces: the configuration and the deployment of these containers. Just like any OpenStack service, you have to deal with config, and Ansible does both in this case. The reason I'm pointing that out is that we're going to pull pieces out of Kolla and combine them with TripleO, and they're each going to have a part to play in this.

On the history of Kolla: it started off using Kubernetes, and we reached a point where it wasn't really feasible for the project to continue that way and be a success. Kubernetes didn't have enough support for us to even deploy all the containers; nova-compute specifically just wasn't going to happen. It didn't have net=host or pid=host; things like that weren't available at that point in time, very early on. So we moved to an Ansible-based deployment method, and that's how we've proceeded since. I'll probably come back to Kubernetes later on, because it's something that Kolla could, in fact, come back to.

OK, so now, all the Docker folks in the room are going to recognize this. For those who are not familiar, this is a Dockerfile, just a sample Dockerfile from Kolla that we're going to use in this demonstration. Specifically, I want to draw your attention to the top line there. What's important about this is that there are really three layers that go into a single OpenStack service, and this is the nova-compute Dockerfile. Two levels above this, we have the OpenStack base container, which has the set of packages that is common among all the OpenStack services, any generic packages that we can store in that layer. Layering it all the way down like this is how we get a very simple Dockerfile by the end.
So the second layer is the service base layer; you can see that in the FROM line at the top, the CentOS nova-base layer. That is a Nova-specific container holding the packages that are common across nova-api, nova-libvirt, nova-compute, and so on; all of that is stored in the nova-base container. So now we get to the third level here, this third layer, and you can see how much more simplified it is. We're installing the openstack-nova-compute service, and you can see Open vSwitch and things like that are all here.

As we move further down here, we get to the nova-compute sudoers piece. This reflects a change Kolla made very recently: we wanted to run in user space, so specifically we run as the nova user. Originally we ran these containers with root inside; that's something we've recently changed.

Next is the extend start script. This is a very important script; it's what really makes Kolla consumable by external projects. It draws attention to the two models you have when you want to configure containers. The first one is config-internal. This was an old model that Kolla used to use, and it's an interesting one, actually: we would bake the config files into the containers. So when you pulled them down, the services came pre-configured, so that when you started them, they'd start up exactly the same every time. Very interesting model, but it runs into some issues, especially when you get to Neutron. As you can imagine, Neutron has a lot of different configurations; it's the networking piece of OpenStack, you can set it up all different ways, it's highly customizable. So you can really run into some problems there. How do you deal with that? Well, you're going to have to have lots and lots of containers with baked-in configs to handle all the possible configurations. It became difficult for Kolla to manage this, so we figured we'd move to a different model.

That's the config-external model, which is what Kolla currently does. What this requires is that you have a config generation mechanism externally. That could be Puppet; in Kolla's case it's Ansible; it really can be anything. As long as you have a config file present on your host and you make Kolla aware of it, you can mount the location of that config file into the container, and Kolla will pick it up and use it. This allows a lot more customization, and it really allows Kolla to branch out quite a bit, because now projects can come in and say: OK, you have your containers, all I need is my config file, let me try and run with this. A simplified sketch of the Dockerfile we've been walking through follows below.
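To make the layering concrete, here's a minimal sketch of what that third-layer Dockerfile amounts to. The image name, package list, and script path are illustrative of the Kolla pattern described above, not copied from the actual Kolla repository:

```dockerfile
# Layer 3: the service-specific image, built on the shared nova-base layer
# (which itself builds on the common openstack-base layer).
FROM kollaglue/centos-binary-nova-base:latest

# Only the compute-specific packages live at this layer; everything common
# to the Nova services is already in nova-base, and everything common to
# all OpenStack services is in openstack-base beneath that.
RUN yum -y install openstack-nova-compute openvswitch \
    && yum clean all

# Run as the nova user instead of root (the sudoers change mentioned above).
USER nova

# The extend-start hook that makes the image consumable by external
# projects and lets the config-external model copy mounted configs in.
COPY extend_start.sh /usr/local/bin/kolla_extend_start
```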
So this leads to the next point: now we can combine both of the projects I mentioned. What you get out of this is an undercloud with TripleO: you have those OpenStack APIs, you have bare metal provisioning with Ironic, you have the networking with Neutron. Now you pull the containers out of Kolla and you add those into the overcloud, your user cloud. What do you get? Now you can leverage those container bits and get the benefits of them in your user cloud, and on top of that, you still have those bits of the undercloud underneath to really leverage those APIs and manage the whole stack.

OK, so now we're going to look at the Heat aspect of it. Sorry, let me rephrase that: this is actually one more Dockerfile, and the next one will be the Heat-specific piece. This Dockerfile is really the marriage of what you get when the two projects combine. Because now we have two projects, how exactly is OpenStack going to tell Docker to do something? How is this going to work? So what we have is a container that comes out of this, and the reason is that we specifically run our containers on Atomic. You're not going to run that os-collect-config command at the bottom on Atomic; it's just not going to happen. So how do we get around this? We put it in a container. What this container does is orchestrate the communication between Heat and Docker, so that we can actually bring up the containers that we need on this compute node.

The second thing I want to mention here is the configuration part, because that's the one piece we're now missing. The configuration is also going to be handled through here. Right now, what we do is run Puppet in this container. Puppet grabs whatever metadata it needs from Heat, runs, generates configs, and places those configs in a location on the host. Then we mount those directories from the host into the containers, and voila, we have the configs in the containers. So that's the whole process, how this container really bridges the gap, so you can understand the flow from one end to the other.

OK, so here's the Heat template that I was talking about. Even the Docker folks in here should be able to recognize at least a little bit of it. It's almost like doing docker run commands: you can recognize things like DockerNamespace and DockerComputeImage; that's your namespace, that's where you're getting your container from. Then you look at things like net: host, which are just flags in Docker, privileged is true, restart always. And then we get into the volumes here. So this becomes very familiar; although it's in a templated format, it's really almost like a docker run command.

Looking at the volumes specifically, this is nova-compute. We're mounting in /run, mounting in /lib/modules, and then we go down to that third one, the /var/lib/etc-data/json-config directory. This is a very interesting one, because it's how the Kolla container will be able to figure out what configs you're giving it and where they should go; this JSON file handles all the direction of where this stuff ends up, and I'm going to look into it next. And then that fourth one, I'll go over following this.
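As a rough sketch, the compose-style section being described looks something like this. The host paths, image name, and tag are approximations reconstructed from the talk, not the exact tripleo-heat-templates content:

```yaml
# Illustrative sketch of the templated, docker-run-like section for nova-compute.
novacompute:
  image: "my-namespace/centos-binary-nova-compute:liberty"
  net: host              # share the host's network namespace
  privileged: true
  restart: always
  volumes:
    - /run:/run
    - /lib/modules:/lib/modules:ro
    # tells the Kolla start script which configs to place where
    - /var/lib/etc-data/json-config/nova-compute.json:/var/lib/kolla/config_files/config.json:ro
    # the generated nova.conf, mounted from the host (the "fourth volume")
    - /var/lib/etc-data/nova/nova.conf:/var/lib/kolla/config_files/nova.conf:ro
  environment:
    - KOLLA_CONFIG_STRATEGY=COPY_ONCE
  volumes_from:
    - computedata
```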
OK, so here's an example of the JSON file. There are actually two; the bottom one is small because I just wanted to show a more complicated one, since the nova-compute one is a little less complicated but easier to see. Take your pick. In the example we're talking about nova-compute, so you can see at the top there I reference a command. This is just the general command to start nova-compute, referencing the config file that we want it to actually use.

What we also have here is the destination where this config is going to end up, /etc/nova/nova.conf. We can also set the owner, so the nova user is going to own this, and the permissions of it, and the source where it's going to find it. So we're mounting that in from the host, from /var/lib/etc-data, to /var/lib/kolla/config_files/nova.conf inside the container. You can see how the connection actually occurs now: the container runs, the script executes, it looks at this source, grabs the file, moves it to /etc/nova/nova.conf, gives it the permissions, chowns it to the nova user, and then executes the command to actually run the service with that config file.

If you look at the bottom part, you can see how this gets more complicated with Neutron, and how we can have many different config files. You can imagine any kind of example; we can scale this out to whatever number of config files you need. We can just mount them into the container, and we can move the configs around as necessary.
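Putting those fields together, here's a minimal sketch of what the nova-compute JSON file amounts to; the exact command path and permission bits are illustrative. The Neutron case is the same shape, just with more entries in the config_files list:

```json
{
  "command": "/usr/bin/nova-compute",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/nova.conf",
      "dest": "/etc/nova/nova.conf",
      "owner": "nova",
      "perm": "0600"
    }
  ]
}
```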
OK, so what I wanted to point out next was that fourth volume there, the nova.conf one; that's the one we were mounting, which I just covered. What I actually want to go to next is the environment field. This is an interesting field, because there's a parameter here called KOLLA_CONFIG_STRATEGY, and right now it's set to COPY_ONCE. There are two things we can set this to: COPY_ALWAYS and COPY_ONCE. What COPY_ALWAYS does is, whenever a container restarts, it looks to see if there are any config files there, and if they're there, it takes them and moves them to the desired location, /etc/nova/nova.conf for example. With COPY_ONCE, we only do that when the container first starts: on the first start, those config files are there, we move them to their desired location, and that's it. Every time you restart the container after that, it's not going to pick up any new configs. This is an interesting knob, because it affects the way you can deal with updates. Say you just want a different container, but you may have another config lying around that you don't want it to pick up, or you just don't want it to deal with this process at all; you can do COPY_ONCE. Or, if you do want the config picked up each time, you can go COPY_ALWAYS and move those configs around however you need to.

And last there, the volumes_from: computedata. What we're doing here with nova-compute is running it with the compute data volume container. This allows us to have another container that remains around whenever we're changing out services, specifically nova-compute. If we want to upgrade nova-compute, we don't want our data to disappear; we want to hold onto it. So we actually have Docker keep one of these containers around that has a bunch of directories mounted into it, with a bunch of data, so that, for instance, we don't lose any VMs or anything like that. (There's a command-line sketch of this whole pattern below.)

So specifically, this is what that looks like: here's the creation of that container in Heat. Very similarly, you can see where it's coming from and the container name we're setting. And specifically, I'll draw your attention to the volumes there. /var/lib/nova/instances is where Nova is going to store those instances, so that when it boots them, that information is there. The second one is /var/lib/libvirt. This is important specifically for libvirt, because if you do want to do an upgrade of libvirt, you really don't want to lose any of that data. So that's where we're holding it, in this container. This container is going to remain stagnant; it's not going to move anywhere. It'll remain around, and we also do a volumes_from for the libvirt container off of it as well.

So I wanted to talk a bit about the benefits, what we're going to get out of this, what's important and cool about containers moving into this deployment cloud. What I want to first talk about is compartmentalizing each service. We're taking a service, nova-compute, and we're really putting a box around it, dealing with it as an individual unit. Anything we do to nova-compute in there is not going to affect any of the other services. That can be anything you can think of: you want to change around some packages, you want to upgrade or update those packages in there; whatever it is, it's not going to affect the other services, it's not going to cause any change whatsoever.

Second, the easy service rollback. Say you have a failure with something you've containerized, Neutron, nova-compute, whatever it is: how are you going to get back to the previous state? One way you can deal with this is if you still have the old container around. Say you did an update and brought up a new container, and it failed. Well, OK, I still have the other one around; I'll just kill the new one and go back to the old one. That's not a problem; you can easily deal with this.

Third and finally, we talk about updates and upgrades. Some of the benefits you get here are a bit like what I mentioned above, and some are very similar between the two, because especially with upgrades, you're still going to have to do database migrations and things like that. But really, just handling the service as a single unit does provide some benefits that I'll talk about here. What I mean by update versus upgrade: an update is within a version of OpenStack, so you go from, say, 7.1 to 7.3 or something; an upgrade would be going from 7 to 8, a different jump, a totally different process. We're just going to talk about the update here. The general update workflow, very simplified: we stop the services, we run a yum update, and here's where things become interesting, because it's specifically around that yum update, and then finally we restart.

So what are we going to get out of this? First, Docker is going to handle the service start and stop. This is good; we can stick with what we've been using the whole time, just have Docker do it, and it will handle swapping out anything that we want to replace. And because we're now using Docker, we can use the service rollback here: anything we have, we can keep around, we can stop it before we start the new one, or we can just have Docker replace and remove the old one.
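Here's a minimal sketch of that pattern driven by hand with plain docker commands rather than through Heat. The image names and tags are hypothetical; the point is the data container, the swap, and the rollback:

```bash
# A data container that exists only to own the persistent volumes;
# its process does nothing, the value is in the volumes.
docker create --name computedata \
  -v /var/lib/nova/instances \
  -v /var/lib/libvirt \
  my-namespace/centos-binary-data /bin/true

# The service container, attached to the persistent volumes.
docker run -d --name novacompute_liberty --net=host --privileged \
  --volumes-from computedata \
  my-namespace/centos-binary-nova-compute:liberty

# Update: pull the new image, stop the old container, start the new one.
docker pull my-namespace/centos-binary-nova-compute:latest
docker stop novacompute_liberty
docker run -d --name novacompute_latest --net=host --privileged \
  --volumes-from computedata \
  my-namespace/centos-binary-nova-compute:latest

# Rollback: the old container is still around, so just swap back.
docker stop novacompute_latest
docker start novacompute_liberty
```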
Now the yum update specifically: we're now dealing with services being installed and built at build time. So a lot of this, this yum update here, is really all done beforehand; we're not actually running it anywhere on the node. When you want to update something, you just pull down that new container. One of the things you can really avoid here is dependency issues. I talked about the different layers of containers that we have to get this all started; there are a lot of dependency issues between packages that you can avoid. You can think of a lot of different scenarios: you just want to update nova-compute, but you still have dependencies across the other pieces, and issues between packages can cause the update to break. You can completely avoid that. With this isolated environment, you can just pull down that new piece as you need it and run that update.

Then there's the ability to mix and match services. I briefly touched on it there; this is something that operators I've talked to really, really like. We want just one service, or two services, or half of them, or all but one: we can just do an update of those services without having to worry too much about the issues we'd be exposed to if we were just doing yum updates. What gets interesting here is what you want to update and how it all plays in together. This enables rolling updates: over time, whenever it's convenient for you, you can run a single service update as you need to. I need nova-compute updated because I want a new feature? OK, well, let me do that: I'll update to the newest version, I'll go to 7.1, and the rest of my compute node is 7.0. OK, I don't like it anymore? Well, I'll just go back to 7.0. And if I need another one to be 7.1, OK, I'll just take it to 7.1. I can jump back and forth as I need to. Containers really provide this flexibility; the version pinning could look something like the sketch below.
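A hypothetical illustration of that mix-and-match idea, with each service's image pinned independently in the compose-style section (the names and tags are invented for the example):

```yaml
novacompute:
  image: my-namespace/centos-binary-nova-compute:7.1    # updated for a new feature
neutronovsagent:
  image: my-namespace/centos-binary-neutron-agents:7.0  # rest of the node stays on 7.0
novalibvirt:
  image: my-namespace/centos-binary-nova-libvirt:7.0
```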
OK, so the demo. This is just an overview of what I want to talk about; I don't think I'll have time to do the live demo, it does take quite a bit of time, but I still have a video around, so we can just go through that. I shortened the waiting pieces out so that we don't have to sit and wait and watch text go by. What I want to do is walk through what a container update workflow would be like, in a very simplified version. What I'm trying to do is demonstrate that we have containers running on the compute node, demonstrate how Heat goes through and actually swaps one out for a new one, and also demonstrate the second thing, which is that you're able to mix and match services. So specifically, I'm going to target nova-compute: I'm going to give it a newer version of the container, and I'm going to swap it out for the old one.

OK, so on the side here, I'm actually just going to have a diagram, because I just don't want to lose anybody as to where we are in the stack. I'll briefly bring it in and out so that we can follow along where exactly we are. So here we're actually starting on the compute node; this is where the containers live. We'll do a docker ps here, and you can see what we have.

Just to highlight some of them: you can see there's the neutron Open vSwitch agent, there's a Neutron agent, we have a libvirt container, we have a data container, we have the Open vSwitch DB server, all sorts of stuff like that. Specifically, you can see there are two containers that are actually pulled from the same image but have two different names, because they really are different: the config files going into them are different. They have the same packages, but they're running different things. So lastly, I just want to highlight the liberty tag. That's just the image tag that I've used to identify nova-compute here, and I'll show how it changes.

OK. So next, we are in the overcloud. We are now outside of that compute node, and we're looking at a VM that I just spawned up. I just wanted to show you that it's there, because I want to make sure that you see that we don't lose it; it's still going to be there. OK, that went by a little fast, but you saw it was there.

OK, now we're going to go down to the undercloud, where you can see our stack deployed; we have an active one running. So now, we're going to look into the YAML file that actually drives this all, and this should pull a lot of this together. You can see here there's the Atomic image right there that we're referencing, which lives in Glance. The Docker namespace that we're running with is mine, and there's the Docker compute image that we're using here. What we're going to do is get rid of the liberty tag and put in latest, so it's a different container. This is really where this all makes sense: you can see this is where the containers are coming from, this is the namespace, this is all the stuff that shows where the pieces are actually coming from. And one of the things is we actually even have support for a local Docker registry. If you want, you can run one in the undercloud, have your containers there, point to it, and run with that if you like. This one specifically is not doing that; it's actually going up to Docker Hub to get these.

OK, so we're going to save that, and we're going to run the deployment again. The command is exactly the same, and it's going to go through and run the update. Heat is going to go through, figure out what changed in here, and actually make that specific change. This is the part I'm skipping through a little bit, because it takes some time.
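For reference, the edit being made amounts to something like this in the driving YAML; the parameter names follow the DockerNamespace/DockerComputeImage pattern mentioned above, with the values approximated:

```yaml
parameter_defaults:
  DockerNamespace: my-namespace
  # was centos-binary-nova-compute:liberty; switching the tag pulls a new container
  DockerComputeImage: centos-binary-nova-compute:latest
```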
OK, so now we are back on the compute node. You saw the containers that were there before. This is where we do our little time skip, because now Heat is going to run, and what it's going to do is go in and look: OK, you just changed the tag for nova-compute, it used to be liberty, now we have latest. OK, so what do I need to do? I need to go tell Docker to change this out. So that's what it does. There's a little time skip as we move ahead, and there it is right there: this is where we pull the nova-compute latest container down.

So that is up and running, as you can see there, and we have a new container in place. OK, so this is the stack complete; we actually did the complete update. We're back down in the undercloud now, just showing that this actually went all the way through, and you can see the update complete.

OK, so now we're going to go back to the overcloud here. The point of this is really to demonstrate that the VM is still around. What's really important about this is the very unique case that nova-compute is: if you don't mount in those proper directories, this is going to be gone; it won't be able to pick up where it left off. And that's very important. So really, going into the design of these containers, nova-compute and libvirt were really the tricky ones, because they deal with a lot of data being left around that you need to be able to capture and pick up when you start again. Specifically, libvirt runs with pid=host, so it's a little bit different. Those two are specifically very unique cases. I know Glance has a data container that you need to mount in for /var/lib/glance/images, I think. And I think there are a few other cases: Ceph is also a very unique one, and Cinder too, because you still need to access the devices on the host. In order to do that, the wall between the container and the host becomes a little bit thinner, so that you can actually access the right stuff and complete the transactions that you want. So we'll just do the list. OK, so that was just to show that the VM actually does remain around.

So just to conclude: the Kolla project itself is something that's really been growing within the OpenStack community, as has the TripleO project. The integration itself has been very successful between the projects, and it's really open what you can do with other projects integrating with Kolla, and what you can bring with containers integrating into TripleO. There are still a lot more technologies we can look at. We can look at Magnum: it may not exist in the undercloud right now, but it's something that we can build into the undercloud and possibly leverage in the future. That's really one of the advantages of having that undercloud there: we can really get at all the OpenStack APIs. So that's something we can definitely look at. And also Kubernetes. Kubernetes originally did not support net=host, pid=host, or running privileged, which was just a no-go for a lot of our containers. So given that situation, in the future we're probably going to look at Kubernetes again within the Kolla community as something that we can actually bring back, and possibly, in this case, bring to TripleO with our integration.

So with that, I think we've got a little bit of time left, so I can open it up for questions for anybody. I have some cool swag here; if you have a really good question, I'll throw it to you. Anybody? Yes? You mentioned you have a separate container for persisting the compute data. Yeah. What kind of process is running in the compute data container? So he's asking, you said a container for, what was that again? The compute data container. Yeah, OK.
So he's asking about the compute data container and what process really runs within it. What we do is have it do nothing, really; it just sits there. The purpose of it is to mount in those volumes; that's really where the value of it is. We need to have that container persist with the information so that Nova can reconnect to all the VMs that are still lying around. So anyone else? Yes?

So, to repeat the question: is a docker pull something that can do an upgrade or a replacement for controller or compute nodes? With regards to the controller, the controller node is a little bit different, specifically in TripleO, because it does have Pacemaker, and there are things that complicate it a little bit on the controller node, so something like that may vary a little. And compute is a little bit different, a little complicated. But really, I think if we adapt this integration for things like Kubernetes, it may change and maybe make the controller a bit easier. Ultimately, if you want to get a new container, a docker pull is really what's going to get you there. You can also build it yourself if you want to, but primarily you'd probably pull it from a registry where the developers have been building them, so any range of versions that you want will be available to you. Yes? I'm sorry?

Yes, so the question was: are we looking into containerizing the undercloud? That was kind of an experiment, I guess, for a time. We were basically trying to look at how small we could make this if we were to put it in a container. We actually looked at having Heat, Ironic, really the bare minimum pieces, in a container, and seeing if we could get it to work. We had some success with it; it was a little bit difficult. And then we looked at containerizing just all the pieces and really just trying to get it to work. What I think we concluded is that for the containerization of the undercloud, the best place to look would actually be the Kolla project, in which they use Ansible to configure just a cloud by itself. So what we could do is use Ansible just the way that they use it to configure their cloud, and use that as the undercloud, where we can deploy an overcloud on top of it. So that's something that's been of interest. It's kind of an interesting concept, because it would require integration into the Kolla community from TripleO, so it'd be a little bit of a different direction, but it's definitely something I've looked into, and it's certainly possible. I don't know if I can reach you, but someone can throw it up to me. Oops, I'm out of scarves, but I'd love to take good questions. Oh, okay, we'll get more scarves then. Yes.

So the whole business of keeping the container around in case you need to go back: is that something that happens by default, or something that you can control by some parameter, essentially? So it's something that we would probably have to add in via a parameter, because currently, well, it depends on the mechanism by which you go about doing it; specifically, the way we're doing it through Docker is that we're just going to replace them.
You can do different things depending on the naming: if you want to have different names, you can almost version them. You can have Docker stop one and then start the new one, and we can handle that rollback that way. I'm not sure quite how Kubernetes would handle that specifically, but we could also just generally have a stop command. How would we do it? There are multiple tools here, docker-compose, just regular Docker, Kubernetes, any of these things, to actually handle container replacement and starting and stopping, so we can go any which way. But generally, right now, within the templates it does do a replace, though we could have the old one stick around if we want to. Sure. Any more questions? Got another one, all right.

I mean, it was kind of a follow-up question. Yep. Particularly updates, because we just gave a talk about updates in TripleO, and how fast is it? I mean, it's one of the advantages. How fast, I mean, we can't really compare directly, because we only have the compute node containerized so far. Yeah. So updating the containers on the compute node, how quick is that?

Yeah, so there are a few ways I can go about this. First, if you go through and have Docker just do it right in front of you, it takes a few seconds if you have that image in place. So now, to branch out a little bigger: say you need the image downloaded and pulled in, but most of the image is actually already cached; Docker will have those layers cached already. That's not too slow either, so even pulling isn't too long. Now, as we get a little bigger with Heat, Heat still needs to signal all the way through. The update piece, the compute node's post-deployment, is one of the last things done, so it actually takes a while for Heat to even signal over to it. In general, it still takes maybe five, six minutes to get there, but the actual container swap again doesn't take long at all, nor does the pulling itself. There are still ways you could get around this and maybe improve it a little bit, as you can imagine, but that's really what it is right now. So maybe five, six, seven minutes or so to really complete that cycle. Yes.

Which of those compute containers run privileged? Yeah, so specifically, nova-compute is going to run privileged. Let's see, Neutron, I forget which one needs to run privileged, maybe it's the OVS one; one of those needs to run privileged. libvirt needs to run privileged. I think that's it; we tried to keep it minimal. There may be one more; I know in Kolla, I think Ceph might, and I know Cinder did for a bit. Yeah, sorry, what was the comment? Yeah.

So specifically, the question is about Ceph, block devices and stuff. I mentioned Ceph and Cinder because, specifically, when you get to Ceph and Cinder, these are very unique cases. Having done these containers myself, it was very challenging, because you need to really go out to the host quite a bit to be able to interact with these devices. So you're mounting /dev into the container. Now, this was very tricky for Docker; for a while this wasn't possible, I think until Docker 1.8, when mounting /dev actually became possible. So it took a while. And so specifically, those containers, Compute, the Neutron one, Ceph, Cinder, and libvirt, are running privileged.
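To illustrate that thinner wall between container and host, here's a hypothetical compose-style entry for a storage-style service; the image name is invented, and the point is the privileged flag plus the /dev mount that Docker only made workable around 1.8:

```yaml
cindervolume:
  image: my-namespace/centos-binary-cinder-volume:latest
  net: host
  privileged: true       # needed to manage devices and kernel interfaces
  volumes:
    - /dev:/dev          # direct access to the host's block devices
    - /run:/run
```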
And there used to be more, but we've actually cut down on it, especially as Docker improved, letting us cut down even more. So hopefully we can cut that down further in the future. Do I give you a scarf or anything, or no? Okay, good. I just don't see it, so I'll just take it. Okay. All right. Well, thank you, everybody. I think we're out of time. Thank you for coming. Thank you.