Can you hear me? Okay. Cool. So, hello, everyone. Thank you for coming. I'm Jirka Stransky, and this is Marios Andreou and Giulio Fidente. We work on the TripleO project, which is a project for installing and managing OpenStack clouds, and we're among the people who implement updates and upgrades there. So that's what we're going to talk about today. First, I'm going to do a short deployment structure recap, just to ensure that we all have a common base for talking about the updates and upgrades. Then I'm going to talk about upgrades and updates in OpenStack in general; there are some specific, pretty well-defined differences in OpenStack between major upgrades and minor updates.
So we're going to mention those. Then we're going to talk about how updates are specifically implemented in TripleO, and some of the issues that we faced while implementing the update support. And then we're going to show a demo video of how it works: tenants' virtual machines preserve their uptime during the update.

So, the deployment structure. This is the overcloud, the cloud where tenant VMs run. It's a simplified view of the deployment, just to have a basis for talking about updates and upgrades. There are three controller nodes to ensure high availability, with a Pacemaker cluster to control the services and help with the high availability. On each controller there are RESTful APIs, which users use to talk to the cloud and perform management operations on it, and there are databases which are clustered together to store persistent data. And there's an AMQP message bus to facilitate communication between the controllers, between the computes, and between controllers and computes.

So, for the upgrades, this is the more general case: major version upgrades. When you do a major version upgrade in OpenStack, the database schema of the services can change, and the AMQP messaging, which is used for RPC between the nodes, can change as well. That means you have to upgrade controllers in parallel, because currently the services can only work with the database schema they expect, specific to their version. So that means you have cloud management downtime: you have to take down the controller services for some time to perform the update and the DB schema migration. But that doesn't impact the tenant VMs; they keep running. And you can upgrade computes in series; in fact, you should, to be able, again, to preserve tenant uptime.
And you can also do it in batches, depending on how much free capacity you have in your cloud. You can do that because the compute service itself doesn't have any direct database connection, so it can't have any expectations about the DB schema. And for RPC over the AMQP bus, you can do pinning: each version supports communicating in its own RPC protocol and also in the protocol one version older. You can also do a service-by-service upgrade. That doesn't get rid of the downtime; you just spread the downtime across the different services. We considered moving to a more service-based management rather than the current role-based management in TripleO, so this is what we might do in the future, but for now we're focusing on the globally synchronized upgrade.

So if we look at the workflow: first, we pin the Nova RPC, ensuring that we can now have controllers of a newer version while computes can be of either the newer or the older version, and they can still talk to each other. Once we have the Nova compute service RPC pinned, we shut down the cluster of services on the controllers. Then we can do the package updates, and if there was some reason to reboot, like a kernel update, we can reboot the nodes. Once the nodes are back up, we start the databases first and perform the DB schema update. After that, we can start the remaining services on the controllers, at which point the cloud starts, again, responding to management operations and performing them correctly. Then we do package updates on the computes; we can do them in series or all at once, that doesn't really matter much. We start the Nova compute services on them. And if some of the computes need a reboot, that's an added complexity there, an interesting case.
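The RPC pinning mentioned above is configured in Nova itself. As a hedged illustration (the option is Nova's real `[upgrade_levels]` section, but the release name here is just an example), an already-upgraded controller might pin its compute RPC messages to the older release:

```ini
# /etc/nova/nova.conf (sketch): pin compute RPC to the older release so
# that not-yet-upgraded compute nodes can still understand the messages.
# After all computes are upgraded, this pin is removed ("unpin") and
# services are restarted.
[upgrade_levels]
compute = kilo
```

The unpin step at the end of the workflow corresponds to removing this setting again.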
You need to reboot only empty compute nodes, because obviously, if you reboot a compute node where there's a tenant VM running, that's a problem for the tenant. So here's what Nova can do. First, you remove a batch of nodes, or at least one node, from the Nova scheduler, which ensures that if someone asks to schedule a new VM in the cloud, it's not going to get scheduled on this particular batch of compute nodes. Then we live-migrate the VMs away from those compute nodes to other compute nodes which are not part of the update or upgrade batch. Then we can reboot the compute nodes to make the kernel update take effect, and then we add the nodes back to the scheduler. We repeat those batches until we have restarted all the nodes that needed a restart. Once we're done with that, we unpin the Nova RPC, and your cloud is upgraded to the next major version.

Minor version updates are less intrusive to the cloud, because OpenStack by convention guarantees that database schemas will not change, and AMQP messaging either does not change or is backwards compatible, meaning that you can minor-update anything and it's still going to be able to communicate with the older version, no problem. So the challenge here is in uptime expectations, because that means you're able to do a completely rolling update of everything, and that's what the operators expect. And they expect that there's going to be no tenant downtime, even for the controller part. So we do the rolling updates on the controllers, where we rely on Pacemaker, because some of the more complicated services, like the databases and the message queue, for example, have a specific set of actions that should be performed when a node leaves or rejoins the cluster. Pacemaker has that implemented in so-called resource agents, which can deal with this for us really nicely.
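The per-batch compute reboot procedure described above can be sketched as a shell loop. This is a dry run that only prints the commands it would run; the nova CLI subcommands are the classic python-novaclient ones, and the host names are illustrative, not taken from the talk's actual setup:

```shell
#!/bin/sh
# Dry-run sketch of rebooting one batch of compute nodes: take them out of
# the scheduler, live-migrate VMs away, reboot, then add them back.
reboot_batch() {
    for host in "$@"; do
        echo "nova service-disable $host nova-compute"   # stop scheduling new VMs here
        echo "nova host-evacuate-live $host"             # live-migrate VMs elsewhere
        echo "reboot $host"                              # kernel update takes effect
        echo "nova service-enable $host nova-compute"    # node rejoins the scheduler
    done
}

reboot_batch compute-0 compute-1
```

Batches are sized so the remaining nodes always have enough free capacity to absorb the migrated VMs.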
And then we again do the normal package update on the computes. So let's look at the workflow again. We go sequentially over the controllers: we take one out of the cluster, stopping all services there; we do the package update and then reboot the node if needed; and then we let it rejoin the cluster, utilizing, as I said, the Pacemaker resource agents. The compute procedure is essentially the same: we do updates on all computes, either at the same time or in batches, and then we perform the same reboot procedure as before, which includes removing the node from the scheduler, live-migrating VMs away, rebooting the now-empty compute node, and adding it back to the scheduler. And that's how a minor update is performed.

Now, this is a very high-level theoretical view, and in practice things obviously get a little bit more complex. So I'm going to give the floor to Marios to talk about the implementation details of updates in TripleO.

Hello. Hi, can you hear me okay? Okay, thanks. So hello. I don't know how many of you were in Steve Hardy's talk earlier, so I'll do a little bit of a recap of the terminology here. The undercloud down here and the overcloud up here: is this something that people in this room have heard before, apart from the TripleO people, of course? Maybe. Okay. So in TripleO we have two clouds, effectively. We have the undercloud, which is the initial cloud we stand up, and that's what we use to manage the actual end-user cloud, which we call the overcloud. The undercloud down here is effectively a single-node cloud, and that's where we run the director node. And this diagram shows the setup that we use for the updates.
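The rolling controller procedure above can be sketched the same way. This dry run only prints the per-node steps; the `pcs` subcommands are standard Pacemaker CLI, but the exact sequence is an illustration of the described workflow, not the project's actual script:

```shell
#!/bin/sh
# Dry-run sketch of a rolling minor update of the controllers: one node at
# a time leaves the Pacemaker cluster, gets updated, and rejoins, so the
# cluster stays available (degraded to two nodes) throughout.
update_controller() {
    node="$1"
    echo "pcs cluster stop $node"    # resource agents stop services cleanly
    echo "yum -y update"             # package update on the node
    echo "pcs cluster start $node"   # node rejoins; wait for cluster to settle
}

for node in controller-0 controller-1 controller-2; do
    update_controller "$node"
done
```

A reboot, when a kernel update requires one, would slot in between the update and the rejoin.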
We have an overcloud with our controller nodes, which are running in a Pacemaker cluster to manage all the services, and our compute nodes, which are hosting the tenant VMs. It also shows the goal, what we want to work towards. The idea is that with your tenant VMs up here, an operator can come in from the director node, run a simple CLI command, a yum update, and that will then go away and take each of the nodes in the overcloud sequentially, one by one (we'll see how that works in a moment in a little bit more detail), and update them. And the expectation is that there is very little downtime for end users' VMs. In practice, from our dev setup and our testing, it was on the order of about 10 seconds: you would lose two or three pings while pinging the tenant VMs, and this is while the updates are happening, which we think is acceptable. Okay, I think that's it for this slide.

Okay, so we start the update. The idea is that the operator goes on the management node. Steve spoke a little earlier about the plugin we have for the OpenStack CLI: where before Steve spoke about `openstack overcloud deploy`, here we're doing `openstack overcloud update`. The overcloud is the name of the cloud that we're updating, and then we pass in the templates and the environment files, which have to be the same for the update as what you had initially deployed. The two main things that we rely on here are, first, setting a pre-update hook on each node. What that means in practice is that it sets a breakpoint. The whole point of this is that we only want one node to be updated at any given time. And this is important for all the nodes, but especially for the controllers, where we want to maintain the high availability.
We're maintaining an essentially slightly degraded high availability, because if you're starting off with three controllers and you take one of them out, you're left with two controllers. There's still HA, but there's no quorum; if one of them fails, for example, then you're in a bad position. So slightly degraded, but still HA. The way we control whether a given node is going to be updated at a given time is by setting this update identifier. Effectively, this is just a timestamp, a parameter or variable which we set for a given node, so that when it comes to updating that node, if the update identifier is set (and this is via the Heat templates that we use to deploy in TripleO), that node will get updated; otherwise, it just gets skipped. We'll speak a little more about what does the actual updating in a moment, but this is just to show the two kinds of features we rely on here: the Heat-native feature to do the breakpointed, one-by-one update of the nodes, and the update identifier that controls whether a given node gets updated or not.

So I'm just going to talk a little bit now about what happens on the compute node, and then a little more about what happens on the controller node. The compute node is the simplest case, because there's no Pacemaker. The main things running there are nova-compute and the agents: the OVS agent, the Ceilometer agents. So effectively, we don't have to worry about bringing a Pacemaker cluster down on that node or taking the node out of the cluster, for example; we just make sure that we don't have anything running on that node. Okay, so Jirka spoke a little about rebooting the compute nodes. We didn't actually cover that in great detail; the rebooting was the one case that we didn't test too much.
But okay, so the simple case is: we just update all the packages on that node and then go on to the next one. So for the compute node, it's much simpler. For the controller, it's more interesting. Here we first need to bring the cluster down, so we take the controller out of the cluster; we need Pacemaker to be in maintenance mode for the duration of the update. Once all the services are stopped cleanly on that node, we run a yum update. Once the yum update is finished, we put the controller back into the cluster and go on to the next one. The second point here, about matching the pre- and post-update environments, is something that Giulio is going to speak more about, because it was one of the bugs that we hit: we had a change in the Pacemaker constraints from what we had initially deployed to what we were updating to, and we needed those to match for the update to work correctly.

Okay, so this is how we actually deliver the updates: it's basically just a bash script, called yum_update.sh, and it's delivered as the config property of a Heat software deployment. This is a software config. As Steve said earlier, within Heat you can have this software config which runs arbitrary code on your chosen node. It can be Python or whatever you need to run there; in this case, we're using a bash script. And if you like, afterwards, if we have time at the end, we can even look at it. This is all obviously open source, it's all upstream; you can go on GitHub and actually see this script, and you can see all the logic that we've described here: if it's a controller, you bring the cluster down, wait till everything is down, set maintenance mode for Pacemaker, run the update, bring the services back up, wait until the cluster is settled, and then on to the next node.
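In Heat terms, delivering a script like that looks roughly like the following template fragment. This is a sketch of the general SoftwareConfig/SoftwareDeployment pattern the talk describes, not the project's actual template; the resource names and the `update_identifier` input are illustrative:

```yaml
# Sketch: deliver yum_update.sh to a node as a Heat software deployment.
resources:
  UpdateConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: update_identifier   # empty by default; set to a timestamp
      config: {get_file: yum_update.sh}

  UpdateDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: UpdateConfig}
      server: {get_resource: Controller}
      input_values:
        update_identifier: {get_param: UpdateIdentifier}
```

Because the deployment's input values change only when the identifier changes, Heat re-runs the script exactly on the nodes where the identifier has been set.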
The very first thing yum_update.sh does is check that update identifier. The update identifier by default is empty, just an empty string; if it's not set to a timestamp, which is what we're using currently, I think, then nothing happens on that node. So the update identifier is set on the given node that we want to update at the given time, and then yum_update.sh contains just that logic: bring the Pacemaker cluster down on that node, run the yum update, and then bring Pacemaker and all the services back up on that node. Okay. So Giulio is now going to tell us about all the things that could possibly go wrong. Well, some of them, anyway. In theory it sounds very easy, but there were a lot of subtle things that we had to deal with. Thank you.

Can you all hear me? Sorry. So, I'm Giulio. So, what could possibly go wrong? We have a plan, and we have an implementation, and we have bugs, like every software that I've seen. And the interesting thing about talking about the bugs is that we learn a lot from bugs; I get to deal with bugs a lot. These are not just the four bugs that we hit; they're more like four different types of bugs that we had to face and deal with. And so they go into different areas, and they also show how far we had to reach, especially in taking help from other people, to get all the things to work. So, the four different types. One is what I'm calling the tooling bugs: issues in the tooling that we use to run the update. So not tools that we maintain ourselves, but tools like Heat, which Steve actually introduced before us. Another type is bugs in the workflow: bugs in the way we have implemented the updates, causing troubles that we didn't think about in advance, or at least that we didn't think through correctly during the planning part.
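The identifier gate at the top of yum_update.sh can be sketched as a small shell function. This is a simplified illustration of the logic described above, not the script's actual code:

```shell
#!/bin/sh
# Sketch: only proceed with the update when an update identifier (normally
# a timestamp) has been set for this node; an empty identifier means the
# deployment runs but the node is skipped.
should_update() {
    [ -n "$1" ]
}

if should_update "$UPDATE_IDENTIFIER"; then
    echo "would run: pacemaker down, yum update, pacemaker up"
else
    echo "skipping: no update identifier set"
fi
```

This is also what made the "evil" bug later in the talk possible: a resource that never receives a changed identifier silently skips the work on updates.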
Then there are what I call subtle bugs, which are problems caused by existing bugs, maybe old bugs, in the actual components that we are trying to update: not in the tooling we use, not in the workflow, but components not behaving as they should when performing the update. And evil bugs: those which we introduced ourselves, the worst ones, the ones we feel responsible for more than the others.

The first one looks like an easy issue. We have a representation of each and every resource of the overcloud in Heat, including the networks. Steve at some point said that we have network isolation: we have multiple networks and multiple subnetworks in the undercloud Neutron, which match the physical networks of the overcloud. And Heat had a very nice feature, introduced after our initial release, which allowed us to refer to networks by their names rather than their IDs, and we decided to take advantage of this feature. So in the initial release, we were referring to the networks by their IDs; then in the next release, we were referring to the same networks by name. We expected Heat to do the mapping correctly, but it wasn't working as expected. So we ended up with the overcloud deleting and then re-creating a network, which caused cascading issues. So we had to go to actual Heat experts, Steve in this case, who not only had to fix the bug in the master version, but also backport it to the stable version which we were trying to update to. So it's twice the work, because it's on master and there's also the backporting piece; we wanted the feature to work in the version that we were updating to, otherwise we couldn't actually take advantage of the feature.
The other bug that I had on the list is in a very different area, but again not really in a tool of ours. We use Pacemaker on the controllers, and Pacemaker was doing its job as we wanted it to. It has a sort of graph representation of the services, so that it has some understanding of which one depends on which. And during the update, as the others were describing, we need to do a controlled shutdown of the services, so that they go down in a clean manner and we can take one node out of the cluster without the others behaving unexpectedly. And the problem is that we had a problem in our constraints, in what represents this graph in Pacemaker, so we were unable to do a clean shutdown of the services. We knew about the problem, and we had updated the constraints in the newer release; but we were running the update, so we needed the shutdown before the update itself. So we were actually going through the graph before it was updated to the working version, which means we were running through the graph in the broken version. And this is why I call it a workflow problem: we had to perform a matching of the pre- and post-update constraints before the actual update, the yum update, could start. This time, the HA people helped. One of the guys, Mikael, was here on the first day, and he helped us figure out the proper graph to put into Pacemaker to make the whole flow work as expected.

Another bug, again one which needed help from other people, this time from Neutron cores. I don't know if there is anybody from Neutron in the room, but: we finally got Heat to behave as we wanted, we got the constraints in place as we needed before the update, and we ran the update.
The software was updated as we wanted, but we also need to preserve the guests' connectivity, and we figured out that something was wrong: guests were not working during the update. This was actually a problem in Neutron, and it took us probably a day to figure out, before we could reach the appropriate people. And it turned out to be a bug which was fixed in Neutron, but not in the version that we were coming from. A very simple bug: we use a feature from Neutron which provides high availability of the L3 router IP, implemented by having multiple copies of keepalived running on all nodes. During what we hoped would be a clean shutdown, Neutron was not correctly killing one of these keepalived instances, so the router IP was not relocated to the others. This was not a problem with Pacemaker; this was actually a problem with how the thing was implemented in Neutron. So again, we had to put in place a workaround, which was distributed to the nodes through Heat before the actual update could take place, because we really needed the IP to relocate before the update so that the guests wouldn't lose connectivity.

And then another area again, the evil area, the area where we are responsible. So: the update is complete, Heat worked as expected, Pacemaker did the shutdown, Neutron did the relocation, the guests have connectivity, everything seems fine. And yet there are people testing this after the update, and they are doing basic stuff like scaling the cloud after the update, and it's not scaling anymore. How is it that it was working and now it's not? This is, I forgot it in the slides, excuse me, actually the update identifier that Marios was talking about, because Heat tries to be clever about when it's necessary to update the resources and when it's not.
We just forgot to distribute the update identifier to the tool which was setting the maintenance mode and so on. So on the first attempt, at the moment when the resource was created, it was doing what we expected; but at the moment when the resource was updated, it wasn't, because the update identifier was not changing. It probably took more time to get to our working implementation than to do the initial implementation, just to give an idea of how much we spent to get it working. But we have a video now, because finally it did work. So let's see the video and how it actually works. Thanks.

So just one last thing on this slide. One of the reasons we had this here was to call out, as Giulio said, that it was evil, it was kind of subtle, because the update was successful, the update was complete, the cloud was working, the tenant VMs were fine. And this was to call out some of the work that people like Marius Cornea up there and the testing guys were doing, because they really tested it. Even after everything was working fine, they put it through its paces; they tried scaling, adding and removing nodes, and things like this showed up.

Sorry about that. So I'm going to attempt to show the video. Unfortunately, the update takes so long that it's really not feasible to do a live demo. Can you hear me okay? Yeah. Even with a very small setup of three controllers and a compute, it takes the best part of at least an hour to get the whole update through. So hold on. Okay, the video is not great, I don't know how well it's going to work, but I'm going to skip through, because we're also almost at time for the half hour, to leave some time for questions. The main thing I want to demonstrate here is the router failover, really.
Well, the main thing I want to demonstrate is how the tenant VMs remain available, and that you only lose about two pings throughout this whole process. Okay, so I've got some times here on my phone which I'm going to skip to. Okay, let's start right at the start. So this is the setup. At the top there we've got the heat stack-list, so you see I've got my overcloud deployed and it's CREATE_COMPLETE. There are four nodes involved here: you see the Ironic nodes, and you see the Nova instances at the bottom corresponding to those nodes. So we've got three controllers in HA, controlled by Pacemaker, and we've got the one compute in this setup. And then I'm going to skip ahead to show you what's happening on controller zero. This is a terminal into controller zero. The output you're seeing here, okay, it's pretty bad, but I'll try and point at things. What this is showing is the interface configuration in the Neutron router namespace, and the main thing I want to show is this line here, which shows that the tenant router IP is currently being hosted by this controller. So it's on controller zero, and we'll see in a moment that when this particular node gets updated, that IP goes away from here and gets failed over to one of the other controllers, one that's not being updated. And that means that you can still get to your tenant VMs, which are being hosted on your compute node.

Okay, so let's go to where the update starts, so you can see what the CLI looks like in practice. So this is what the operator would do on the director node: `openstack overcloud update stack`, and it begins the update. And this will pick one of the machines, based on the breakpoints, and start updating that one.
So let's go to the first breakpoint to show you; yeah, actually, that's here. So you can see, hold on, go back slightly; okay, so that will come up in a moment. You see here that it's hit a breakpoint and it's picked controller two. It's waiting for some input from the operator: do you want to start updating this controller? You say yes, and it starts running the update on controller two. I'm going to skip forward here to where it's updating controller zero, so we can see the router failover. Okay, so the text is very small here; I intentionally made it much smaller, because what matters is not so much what's written, but the fact that this is a growing list of services in Pacemaker. This is the output of `pcs status`, the status of your Pacemaker cluster, and it's showing that on controller zero, because it's being updated now, all the services are coming down. This list grows with services which are coming down on that particular controller. And I'll skip to this point, where you can see now that this is controller zero, which, remember, is the one that was hosting our tenant router at this point. This is controller zero; you see the router IP is still on there, and then it's gone, it's gone from there, and it just appeared on controller one. So it fails over, and what that means is, if you see where we're pinging the overcloud tenant instances, hold on, I need to just find this one point and then I promise I will leave some time for questions. Yeah, so this is what I want to show you here. Throughout this process we're pinging the tenant VMs, pinging, pinging, pinging, and it's fine.
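What the demo inspects on each controller can be reproduced with the network-namespace tooling: Neutron creates a `qrouter-<router-uuid>` namespace on the node hosting the router, and the router IP is visible inside it via `ip netns exec <ns> ip addr`. A small sketch that filters `ip netns list`-style output for those namespaces (the UUIDs below are made up):

```shell
#!/bin/sh
# Sketch: find Neutron qrouter namespaces in `ip netns list` output. The
# controller whose list contains the qrouter namespace with the router IP
# is the one currently hosting the tenant router.
find_qrouter() {
    grep -o 'qrouter-[0-9a-f-]*'
}

# On a real controller you would pipe in the output of: ip netns list
printf 'qdhcp-11111111-aaaa\nqrouter-22222222-bbbb\n' | find_qrouter
# prints: qrouter-22222222-bbbb
```

During the failover shown in the video, this namespace's router IP disappears on controller zero and appears inside the corresponding namespace on controller one.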
Those two unreachables are the time it takes for that router failover to happen. So in practice it's on the order of 10 seconds of, well, not even downtime, just not being able to reach your tenant VMs. Yeah, so that's the demo video, and I think we have a few minutes for questions. Thank you. Any questions?

Yeah, that's a very good question. The question was: do we do any validations on the update? Right now, no. There has been some work; Steve has done some work on checking the Heat resources, is that right? The pre- and post-update Heat resources matching. So right now we do have this simple checking of the Heat resources, but actual validations, doing more comprehensive validating of your environment, that would apply not just to the post-update environment; it would be something you would use in general on the cloud that you've deployed.

Yeah, I wanted to say one thing. If the question is about the configuration, we use Puppet, which is meant to enforce the state. So if Puppet succeeds, then you either have the exact configuration you had before, or you may even have updated the configuration, slightly changed something; and if Puppet is okay, then that's our first guarantee that something actually happened. If the question is more about the versions of the packages which you have, because this was actually running on CentOS in the demo, then no, but I think there are probably other tools which fit that scope better.

Okay, so can you mention any tools for, let's say, deployment validation, update validation, that can tell you it's still valid, it's working, even the scaling up is working? How would you validate it? Oh, something like Tempest, or... I'm thinking about Tempest too, yeah. But maybe in a lightweight version. It's a good idea, though.
Is the idea to use Tempest? Yeah, it's a good idea to use either Tempest or a subset of the existing tests in Tempest. The idea is that there are going to be a couple of different levels. The vision is that we're going to have a pluggable validation implementation, so that you can write your own validations. But just to point out, there are a couple of levels of validation: I think you're talking about validating that your cloud works, but there are also things like validating the inputs and validating the configuration. I'm saying Tempest is more for the developers, right? Yeah. So one of the options would be Tempest again, or, we actually have a couple of people working on an external validation tool, external to Heat. I'm not aware that it's been integrated into RDO yet; I don't think it has, but there's some work on it. And there's another option: Ladislav Smola had a talk before, showing CloudForms integration with the director, and he had checks built in there for the package versions, for example, to verify that you're not vulnerable to Shellshock or bugs like that. So the other option would be to integrate with something like ManageIQ.

Is there any work on rollback? Rollbacks? I don't think there is, and I'm not sure how we would implement it, really. Of course you can do backups and then roll back manually from some backups, but once you do a yum update, a bunch of config changes, and the database migrations... as far as I know, those aren't reversible by themselves, so you would really need to do a complete rollback, I think. And that's not just for upgrades; it applies even to the updates, because although we're not changing versions for the updates, even then we don't currently know. Yeah. Okay. Thank you very much. Do you have a question? Yeah, you can come. Thank you. Actually, I have the same question.
The microphone was a bit low for the recording, but for the people here it was fine. I didn't think it was too low. Well, the face when passing the microphone. Hello, I'm Marius. I recognize your name, but we never met. How are you?

Are people actually okay with upgrading their whole cluster? You know what, man? This is all really new. The updates, we worked on that in November, and the upgrades, we talked about that this week. No, but from this perspective, I would expect people to say, you know, let me upgrade two controllers and two computes, let me see if it works, and in a week's time I'll complete the update and see if the clusters are happy. I mean, this all-or-nothing thing is like... Yeah, scary. Yeah, that's true. I mean, we have a way of controlling which nodes, and we can start the process with a simple CLI, and you can control that during this process only one node will get updated. You can enforce that. I wouldn't even... First of all, you know, just upgrade Cinder. The next week, I'll upgrade Neutron, but I know... But there are also other approaches that we considered which may be safer in that respect. So, for example, instead of updating things in place, you bring up a parallel node and then just swap it over. Yeah, it is. Which is just as complex. It's all new stuff. Yeah, I mean, you could go that way. You could turn it around and say, no, it's bullshit. Can you deploy it already in the field? The updates, yeah, we have some customers. Oh, yes.

You know what? It's reveal.js. It's like HTML. Is it good for you? Yeah, yeah. We will put it up for you. I'll do it over here so we're not disturbing. Yeah, yeah, yeah. The next one. We call it... Okay, so you have 40 minutes for the whole talk, including the Q&A session. It depends on you when you would like to stop. Basically, if you keep talking, we'll notify you when there are 10 minutes left and 5 minutes left, and when the time is up we will stop you, basically. Okay.
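The "only one node gets updated at a time" control discussed above boils down to a rolling update: touch a node, check it, and stop before the rest of the fleet if something looks wrong. A minimal sketch of that idea, with stand-in functions rather than the actual TripleO CLI or tooling:

```python
# Hypothetical rolling-update loop, not the actual TripleO implementation.
# update_one and is_healthy are stand-ins for whatever performs the
# update on a node and verifies it afterwards.

def rolling_update(nodes, update_one, is_healthy):
    """Update nodes strictly in series; return the ones that succeeded.

    Stops at the first unhealthy node so the remaining nodes are left
    untouched for investigation, instead of an all-or-nothing run.
    """
    done = []
    for node in nodes:
        update_one(node)
        if not is_healthy(node):
            break
        done.append(node)
    return done

if __name__ == "__main__":
    touched = []
    result = rolling_update(
        ["controller-0", "controller-1", "controller-2"],
        update_one=touched.append,
        is_healthy=lambda n: n != "controller-1",  # simulate one failure
    )
    print(result)   # only controller-0 completed successfully
    print(touched)  # controller-2 was never touched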
Can you set down the water? Thank you. Let's set up the mic. Could it be somewhere on the buffer? I think that's not a good idea, because it would make weird sounds. Makes sense. Is this better? Yeah, that would work. Maybe a little closer, but... So let's check it. Let's check how it works. One, two, one, two. One, two. Check. Mic check. Right, so it works. Now it's on mute. I'd like to speak with it on mute a bit, by doing this. Right. So it loops a little bit somewhere. Good. Mic check. So I'm going to hook up the mic. Let's connect the laptop. Good. And... We're almost there. Yeah, like... Okay, great. Let me just fix the microphone. Yeah, the other thing, we have the remote control. So if you would like to use the remote... What? The screen remote control. This thing captures the tape, which is also the... Let's try again. So you can use the remote if you want. All right. Okay.

Yeah, we have three scouts. I'll leave them with you. So you can reward the three best questions, the ones you like. Okay. So I have three scouts. Yes, yes, basically. And if you don't like somebody, you can just, you know... Yes. So here, I prepared it for you. Thank you. So you can use it. Here are some questions. Do you want us to introduce you? Yeah, what would you like... I have a slide to introduce myself. So we should... You can just say my name. Okay. I will say my name.

Hello, everybody. I'm pleased to welcome Nikola Dipanov from the OpenStack Nova Compute Engineering Team. He will be giving a talk on high-performance VMs: NUMA, CPU pinning, and large pages. So welcome, Nikola. Thanks. Thank you. Thank you.

So, as my introduction said, I will be talking about high-performance VMs in OpenStack Nova. A little bit about myself.
I, like many other speakers here, work as a software engineer at Red Hat. I've been working on OpenStack Nova since 2012 and have been a core developer since 2013. So, enough about me. This is the overview of what I'm going to be talking about.
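The three features in the talk's title map onto real Nova flavor extra-spec keys: `hw:numa_nodes` for the guest NUMA topology, `hw:cpu_policy=dedicated` for CPU pinning, and `hw:mem_page_size=large` for huge-page-backed memory. Those keys are genuine Nova properties; the helper function below is only an illustrative sketch, not part of Nova or of the talk.

```python
# Illustrative only: the extra-spec keys are real Nova flavor properties
# for NUMA topology, dedicated CPU pinning, and huge pages; the helper
# itself is a sketch, not Nova code.

def high_perf_extra_specs(numa_nodes=1, pin_cpus=True, huge_pages=True):
    """Build a dict of flavor extra specs for a high-performance VM."""
    specs = {"hw:numa_nodes": str(numa_nodes)}
    if pin_cpus:
        # Each guest vCPU gets a dedicated host pCPU.
        specs["hw:cpu_policy"] = "dedicated"
    if huge_pages:
        # Back guest RAM with huge pages ("large" means any huge page size).
        specs["hw:mem_page_size"] = "large"
    return specs

if __name__ == "__main__":
    # These would be applied to a flavor, e.g.:
    #   openstack flavor set m1.perf --property hw:cpu_policy=dedicated
    print(high_perf_extra_specs(numa_nodes=2))
```

Running the example prints all three properties with two NUMA nodes requested; scheduling a flavor like this requires compute hosts configured with pinned CPUs and reserved huge pages.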