So as Ian Cushing said, this is a talk about virtualization. I'm going to talk a little bit about a few things that are a bit broader, things you may not have thought about and that don't necessarily get covered correctly, but are still pretty common. So, really basic facts first, just to make sure we're building on the same foundation. QEMU: it's an emulator, and it started as a user-space emulator. Basically, what libvirt does is launch QEMU processes. I will say the online documentation and the man pages vary a little per version, so oftentimes -help is the way to figure stuff out. And then finally, what makes it really interesting: libvirt allows you to basically create a configuration that is XML. Say what you will about XML, but at least you have a consistent structure, which helps when you're trying to figure out what a VM is actually doing. What do I mean by that? Here's a pretty standard VM definition. Every single one of those elements maps to an argument on a QEMU process command line, and this isn't even a fancy one; this is a pretty basic one. So that's quite a few options, and QEMU has quite a few options. The two elements that I really want to talk about are machine type and CPU model. Machine type defines basically the entire hardware platform: which chipsets go into it, which buses are supported, what firmware and bootloader are used. What machine type gives you is a symbolic name that references a particular hardware profile. To get a list of those, the link there is to the source code; there's not actually a well-documented way to find them.
You can't figure that out from the docs; you've got to go read the source. If you actually want to know what a machine type really is, you've got to read it out of the file, and I'd point you at the QEMU x86 source if you want to see the list of machine types. Once you've found the one you want, you specify the machine type by passing its name. In this example here, which is taken from Red Hat, they have machine types with "rhel" right there in the name, rhel7, because they're based on the RHEL 7 release, and they're defined in Red Hat's source code. So hopefully that's triggering a thought: a machine type called rhel exists in their version of QEMU, but Ubuntu knows nothing about the rhel machine types. Machine types are passed by name, so you start to see that there's a problem here. Upstream QEMU has a set of basic, well-defined machine types, things like pc-i440fx-2.1, and then most distros go ahead and patch that file pretty aggressively to create their own profiles. What this means is that different distros, even though they might be running the same version of QEMU, are oftentimes not compatible with each other: one calls something rhel, and the other calls the same hardware profile something else. You might wish everyone would just accept what the upstream community has done and use those names, but it seems to be pretty common for distros to modify them, and there's a reason. If they want to backport a fix, and this is a good segue, since I was going to talk about this in the live migration section of the talk, a backported fix can change the hardware, which means you've changed the machine type. And since the machine type defines your hardware profile, you can't just go changing what a machine type means; that changes the definition out from under running VMs. You would create an incompatibility by saying, well, between version one and two the machine type is still named foo, but I've totally changed what foo means.
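As a concrete sketch, pinning a machine type in a libvirt domain definition looks something like this; the machine name here is an upstream example used for illustration and may not exist in a distro build:

```xml
<!-- fragment of a libvirt domain definition; "pc-i440fx-2.1"
     is an upstream machine type name used for illustration -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type>
</os>
```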
So what Red Hat does in this case is say: if we backport something that would change the machine type, we're going to create a different name for that machine type, so we're not silently redefining one. My one piece of feedback, and I completely agree with and understand why they did it, is that they also patch out all of the upstream machine types, so you can't use any of the upstream machine types on RHEL. At that I kind of go, eh. But the point stands: the machine type is what that VM's hardware is. It's just like a desktop or a server: you can't really pull the motherboard out, put another one in the box, and go, oh hey, the operating system is still running. The same thing is true with CPU models. This is actually really similar to machine types, so if machine types make sense to you, CPU model is the same idea, but for CPU architectures and flags. There are a lot of different CPU architectures out there, and even CPUs within the same architecture expose different flags. So the CPU model is how you say: do you want me to present a Haswell processor or a Sandy Bridge processor, and which flags does it have? This gets a little challenging. Again, you can ask QEMU to list those: qemu-system-x86_64 with the right option will get you a list of all of them. But libvirt also maintains a partial list of its own CPU models, which in some cases uses the same names but redefines them. So it can be a little challenging to understand exactly which CPU your guest will see. There are also two special modes. host-passthrough basically means: the host is the physical server the VM runs on, so take whatever the host CPU is and hand it to the VM. And then libvirt has host-model, which sounds similar but has a slight difference.
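If you'd rather not start with the source, QEMU can at least enumerate the names a particular binary was built with, which is exactly where the distro differences show up; output varies by version and patches, and the availability of the virsh subcommand depends on your libvirt version:

```shell
# list the machine types this particular QEMU binary knows about
qemu-system-x86_64 -machine help

# list the CPU models it knows about
qemu-system-x86_64 -cpu help

# newer libvirt can also show its own CPU model list for an architecture
virsh cpu-models x86_64
```

The compatibility details behind each name still live in the source, so this only tells you what names exist, not what they mean.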
With host-passthrough, QEMU will literally read what is on the host CPU and present that to the guest. With host-model, libvirt will look at the flags the host CPU exposes and list each one of them explicitly. When you first start the VM, those are functionally the same thing. But over time, if you start migrating, and by migrating I also mean doing suspends and resumes of VMs, those two can actually drift apart. The other thing to watch for is emulated flags: if a capability is not passed through, QEMU actually has to emulate that functionality. It's not passing the capability directly up from the CPU, it's emulating it, and as a result it's a lot slower than just directly asking the CPU to do it. So those two things, machine type and CPU model, are basically how you define a lot of the hardware characteristics of a VM. The reason I said they're important is that they follow you over the life of the VM: not just when you're starting it, but when you're trying to migrate these VMs around. Sorry, I thought you were asking a question. Yeah, so the comment from the audience is about doing this on a desktop. The short answer is nested virtualization: if you turn on nested virtualization, this all works; if you don't, it won't. Storage. I'm going to talk a little bit about storage here. First I'm going to take a little bit of a step back and talk about storage in general, not just the KVM and libvirt specifics: storage in clouds, but really in any application. It's something people size incorrectly all the time. If you're going to build a cloud, the storage backend is what limits what the user is actually capable of. So the very first point is: don't forget about storage, and don't only talk about how much space you need. Obviously in 2016, space is as easy to get as it has ever been; you can buy six-terabyte drives. Space is pretty easy to come by.
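In domain XML, the two CPU modes just described are one-liners; this is a sketch, and the exact expansion depends on your libvirt version:

```xml
<!-- hand the host CPU to the guest as-is -->
<cpu mode='host-passthrough'/>

<!-- have libvirt pick the closest named model and enumerate flags -->
<cpu mode='host-model'/>
```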
More often than not, the problem is that you don't understand how many IOPS you need. So, as you're trying to design your virtualization environment, think about IOPS. Think about how many I/O operations per second you need. How many VMs are going to be on a single server? What is going to be serving those VMs? If you're using a handful of large SATA drives in a RAID group, how many IOPS is that actually capable of? You might discover that IOPS, not capacity, is what limits the number of VMs you can actually put on the server. You could have tons and tons of terabytes free and still fail to run the required VMs because you don't have the IOPS to do it. So consider things like adding more spindles. A lot of smaller drives, especially striped across a RAID set, instead of one wide six-terabyte drive: same storage capacity, yes, it costs you a little bit more, but you also get a lot more IOPS. Certainly consider SSD caches, especially now that there are a lot of ways to put SSD caches in front of spinning storage. There's a layer of slightly different games you can play there; for example, a lot of RAID cards now support the notion of "here are my spinning disks, that's the storage, and here's an SSD to use as a cache." That's a good way of increasing IOPS. Also, be careful with trade-offs. A lot of really high-end, fancy storage backends have a lot of really cool features, but every single one of those features on these high-end storage arrays costs you something, usually CPU or latency. So, again, be careful that you're not turning on compression that costs you more than it buys you. And consider tiering your storage. It's not for everyone, but if you can do it, consider having two different tiers of storage: spinning drives and flash. Disk errors.
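To make the IOPS arithmetic concrete, here's a back-of-the-envelope sketch; the per-spindle and per-guest numbers are illustrative assumptions, not measurements:

```shell
# hypothetical host: 12 x 7.2k-rpm spindles at ~80 IOPS each,
# with guests averaging ~40 IOPS apiece
spindles=12
iops_per_spindle=80
iops_per_guest=40

total_iops=$((spindles * iops_per_spindle))
max_guests=$((total_iops / iops_per_guest))
echo "budget: ${total_iops} IOPS, roughly ${max_guests} guests"
```

Run it with your own measurements; the point is that the supportable guest count falls out of the IOPS budget, not the capacity.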
So, you've got these VMs, they're running on top of a disk image, and the underlying host can hit I/O errors. Here are two values you can pass to control how QEMU handles them: error_policy and rerror_policy. error_policy is basically for writes, and rerror_policy is for read operations. This is how you tell QEMU what you want it to do when an error comes back from the underlying storage. There are a couple of different options. There's report, which is the default in most versions, and it basically says: whatever the error was, just pass it up to the guest. So the VM sees exactly the error that happened. There's stop, which basically tells QEMU: when there's an I/O error, in essence, pause the VM. Hopefully you're monitoring the state of your VMs, so you can see that a VM went into a paused state, send one of your operators over to figure out what's going on, solve the underlying situation, and then, presumably, resume it. There's ignore, which is what it sounds like: QEMU encounters an error and carries on. I'm actually not entirely sure what the use case for ignore is, because I'm not entirely sure how you could ignore, especially, a write error. I mean, you can ignore it, but then the guest tries to read that data back and it won't be there. And there's enospace, which is basically a special case of report. Instead of reporting the error as whatever it was, say a SCSI error, it reports it as out of space. Regardless of why the I/O actually failed, just say there was no more space. That can be useful: some applications handle out-of-space errors much more gracefully than hard SCSI errors. The flip side is, it can be a little confusing for whoever has to debug it, because the guest says it's out of space and you say, there's a terabyte free, what do you mean?
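In the libvirt XML, these land as attributes on the disk's driver element; this fragment is just one combination, assuming a qcow2-backed disk:

```xml
<!-- pause the guest on write errors, report read errors to it -->
<driver name='qemu' type='qcow2'
        error_policy='stop' rerror_policy='report'/>
```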
So you may want to play around with the different options here. If you've got, say, thousands of web servers, and all they're really doing is writing a few log files locally, not your transactional data, and you have a network blip and all your disks go away for a moment, do you really want all of them to suddenly start returning I/O errors to every process trying to read or write data? Or would you rather they all go into a stopped state until the storage comes back? There are different use cases for the different modes, so consider what you're actually running. Next, cache modes. This is kind of similar to the error policies: how do you want QEMU handling the caching of the I/O? There are a bunch of different values. I will not attempt to explain in detail what each of the values does, primarily because there isn't necessarily a super simple answer for each of them. There are interactions between the guest's caches, the host's page cache, and the device's caches, and the different modes turn those different caches on and off, with different trade-offs for different use cases. There's a detailed explanation at the link here, and I'd say that's the thing to read if you want to know the most about how they behave. writethrough is the default, and it works most like you'd think: writes go down to the backing store before they're acknowledged, which is the least surprising behavior for your hardware. Then there's discard. If you have a guest kernel with discard support, what QEMU will do is pass the discards down so the backing store can actually free the space, if your backing store supports that. Things like qcow2 file-backed storage are going to support it. This ties into thin provisioning: if you're doing thin provisioning, a qcow2 disk starts out using only a few megabytes of data structures and grows as the guest writes. That's great, because you're only allocating storage as you go.
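For reference, the cache mode rides on that same driver element; writeback here is only an example, not a recommendation:

```xml
<driver name='qemu' type='qcow2' cache='writeback'/>
<!-- other recognized values include none, writethrough,
     directsync, and unsafe -->
```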
So if you have a qcow2 file-backed disk, it starts out as a few megs and over time grows to, say, a hundred gigs, and normally you can't really shrink it back down. Well, if you combine qcow2 with discard, you can get that space back. You need your guest's kernel to support it, which basically means anything from the last three years. QEMU will punch holes in the file so it takes less space, as long as it's supported on the storage backend. In the case of a raw device or iSCSI, it just passes the discard all the way through to the device. So discard is basically useful if you're, again, trying to thin-provision your storage capacity. Here's an example of what this looks like in the libvirt XML. This is a snippet of a definition for a disk drive. In this case, what we're doing is declaring a disk of type file, so it's a file-backed disk; in the driver element we set a discard policy, and in this case I'm also changing the error policy; then the source says where that disk lives; and the target says how it's presented to the guest. Migrations. So why do migrations matter? Some people say, we treat our servers like cattle, we just kill them and build new ones. That's great; if you're one of those shops, congratulations. Actually, the vast majority of companies in the world still have VMs that are pets and special snowflakes. And so migration, the ability to keep the VM running, or keep it running with very little downtime, while moving it between hardware, is pretty critical. You can get great uptime with it, and I'd encourage everyone to use it whenever they can. In libvirt we just talk about "migration," but there are really a bunch of nuanced, different types of migration. At the most basic level there's cold migration.
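Before getting into migrations, here is a sketch of the disk snippet just described; the file path and device name are hypothetical:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'
          discard='unmap' error_policy='stop'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```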
Just shut the VM off, however many minutes or hours that takes, transfer the XML over, load the XML on the destination, and start the VM there. Then there's live migration, which copies just CPU and RAM state. That implies that you have some type of shared storage, so that the disk devices for your VM are available on both sides. There's also a variant of live migration which says, hey, also copy the disk data as well, and there's actually a pretty good write-up on that model. So what have we learned about migrations? Quite a bit; we've actually spent quite a lot of time on them, and I'm guessing the majority of the things you're interested in are related to migrations. The virsh migrate command is actually a wrapper, if you will, around several different calls in the libvirt API. There's virDomainMigrate, virDomainMigrateToURI, and version 2 and version 3 variants of those; at least four calls, something like that. So there's more than one way to do it in the API. First, what does a plain migrate map to? If you just run virsh migrate with no interesting arguments, what libvirt is actually going to do is create an empty container on your destination, pause the VM, copy all the CPU state, all the RAM, and all the hardware state over there, shut it down on the old host, and resume it on the new one. And it's all inline: you run a single API command and the VM goes from the running state on one host to the running state on the other. But it's not live in the sense of preventing disruption; the guest is paused the whole time. You can add a --live flag to virsh migrate, but even then it's not entirely live. What it does there is say: okay, I'm going to sync all this memory over while the VM keeps running, and then, at the very end, pause the VM, sync the last little bit of memory along with the CPU state, and resume it on the destination.
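In virsh terms, the variants above look something like the following; the VM name and destination URI are placeholders:

```shell
# plain migration: the guest is paused while state is copied
virsh migrate myvm qemu+ssh://dest-host/system

# live migration: memory is synced iteratively while the guest runs
virsh migrate --live myvm qemu+ssh://dest-host/system

# live migration that also copies the disk contents
virsh migrate --live --copy-storage-all myvm qemu+ssh://dest-host/system
```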
This is important to note, because if you have an application whose memory churns rapidly, think Java garbage-collection cycles, you may actually discover that with the default settings you can never complete a live migrate of that VM. The reason is that the memory state changes so rapidly that when it goes to pause the VM for that final sync, the amount of dirty memory left takes longer to sync than it's configured to allow, so it lets the VM run some more, pauses it again, and tries to do that final copy again. You end up with a migrate that just keeps trying. So there are a couple of knobs you can use to control that. Before running a migrate, there's a command, virsh migrate-setmaxdowntime. What that does is instruct libvirt: when I perform a live migrate on this VM, here is the number of milliseconds you're allowed to keep it in that final pause to finish the sync. So if you happen to have VMs that you want to migrate and you know they have a really high churn rate, and I've discovered a lot of Java-based applications are in that camp, you're going to want to set a higher downtime. The other thing you can use to control this is a --timeout argument that you pass along with --live, and what that basically says is: look, if the whole thing takes longer than this, forget staying live; just pause the VM and finish the migration. The other thing to know is that disk paths need to be the same on the destination, because what's happening is that virsh migrate, or the API, takes the XML across, and the XML says, okay, here's the disk, and that's a path, and the destination has to have that same path. If you need to change the path, you can pass extra XML that describes the same disks with a different backing structure. It's sort of an interesting thing: you think, oh yeah, that's fine, and in most cases you just copy the XML over as-is.
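The two knobs just mentioned, sketched as commands; the numeric values are arbitrary examples:

```shell
# allow up to 2000 ms for the final pause-and-sync of the live migrate
virsh migrate-setmaxdowntime myvm 2000

# stop trying to stay live after 120 seconds and finish suspended
virsh migrate --live --timeout 120 myvm qemu+ssh://dest-host/system
```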
The other thing to know about live block migrations is that libvirt is not smart about the target, or rather, it is smart, but not about the target. If you tell it to do a block migration and the same file is visible on both sides, say you're sitting on shared storage and you've got the same path mounted on both the source and the destination, it's going to happily copy that file onto itself. Things like OpenStack Nova wrap a lot of extra checks around these operations to make sure that, if you asked for a block migration, the source and destination storage really are different. That's also why it's really important, if you want to do live migrations, to make sure that your source and destination are compatible. They don't have to be identical, but they do have to have the same machine types: if I say machine type foo, the destination needs a machine type foo. Which is why, if you're trying to migrate between a distro build and an upstream build and one of them doesn't have that machine type, it will simply say, no, I can't do the migration. And there's something else to note. Most of the time, you can just upgrade QEMU in place and migrate from the older version to the newer one, but we've seen problems, we've seen real breakage. One of the more recent times that happened, migration compatibility was just flat out broken across the board; you could start VMs, but you couldn't migrate them forward. So there are times when we run into issues with that, so test it. You can also consider having multiple versions of QEMU installed side by side, or something like that. Then there's the question of whether you actually want to specify a machine type or not. The trade-off is, if you don't specify one, you get the latest machine type, and there are benefits to that. But if you have an operating system or application that is highly sensitive to hardware changes, think about that now. There's also the fact that this is just the way QEMU is developed: it always assumes that when you're migrating across an upgrade, the source is the old version. So always make the source the old one.
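One way to compare two hosts ahead of time is to ask each one what it actually supports; the domcapabilities command's availability depends on your libvirt version:

```shell
# machine types and CPU models this host's QEMU/libvirt combination offers
virsh domcapabilities

# the host's own capabilities and CPU definition, useful for diffing hosts
virsh capabilities
```

Run both on the source and the destination and compare before attempting a live migration.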
libvirt does do some checking around source and destination. But there was at least one distro that shipped a config file with the host UUID already set, the same value for everyone. What that means is that when the two libvirts talk to each other, they think they're the same host, migrating to themselves. So you want to make sure that, if your libvirt config file specifies a host UUID, it's actually unique per host. Ideally, you'll also specify an SMBIOS section. There are a couple of different sections here; there's one where you can pass through some of the host's data. This has to do with the way SMBIOS works: it's the information the firmware stack passes through to the guest, the manufacturer, the product, serial numbers, that sort of thing. It matters particularly for applications that try to fingerprint the hardware for licensing management. Some of those applications will run and say, hey, you have no serial number, I refuse to issue you a license. Or all of a sudden one says, wait, your UUID is the same as this other box over here that's already using a license, so no, you can't have more than one. So the question from the audience was: if you're using virsh and you do a live migrate, is there anything on the remote side that updates your switching infrastructure, so that your network knows your VM is over here now? Offhand, I don't think libvirt does it. I believe QEMU will attempt to do it, announcing the VM when it resumes, but it's been known to not always work; I know that because it's been the root cause of problems for that very reason, so it's worth verifying. And the next question was about Ceph. With regard to Ceph: yes, you can give your VMs a backing store based on Ceph; at least one cloud out there does exactly that, and there are several drivers that support it. Now again, that abstracts a lot of this away.
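A sketch of the SMBIOS plumbing described above; the values are made up, the sysinfo element sits at the top level of the domain definition, the smbios line goes inside the existing os element, and mode='host' would instead pass the host's own data through:

```xml
<sysinfo type='smbios'>
  <system>
    <entry name='manufacturer'>Example Corp</entry>
    <entry name='product'>Example VM</entry>
    <entry name='serial'>VM-000123</entry>
  </system>
</sysinfo>
<os>
  <smbios mode='sysinfo'/>
</os>
```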