Okay, my name is Steven Rostedt. I work at VMware; I'm one of the open source developers there. I'm the original developer and maintainer of something called ftrace, which is the official tracer of the Linux kernel. I also work on the real-time patch — actually, that's where ftrace came from. That's where we're working to make Linux into a real-time kernel, which hopefully will go into mainline this year. Back in 2009, I created a tool to visualize the trace data, because I was debugging the real-time scheduler and its migration code, and I needed a good way to visualize things. My first introduction to open source development was back in 1996, when I sent a patch to GTK. So I liked GTK, but I hadn't touched it in over a decade. So I used this as both: hey, I'll make a visualization tool, and I'll relearn GTK. I worked on it as one of my hobby tasks. When I went to VMware, the vice president of our department asked me if there was anything I'd like to work on that he'd be able to help out with. And I said: yes, KernelShark. I hadn't really been able to work on it — like I said, it was my idle task. So we hired Yordan — that's his name, but he keeps fixing my pronunciation, and I've got to get it right. Anyway, Yordan came in. He was actually a theoretical physicist at CERN, but he was doing more programming than physics, and he finally decided he liked programming more — that's why he left CERN to become a programmer. He did a lot of data analysis development and things like that, and he did a lot of work with Qt. And I said: perfect. Because when I wrote KernelShark, I had painted myself into a corner with the design decisions — like I said, I was only working on it when I had some free time, and I didn't really put that much thought into its development.
I made several bad core design decisions, and that's why KernelShark never made it to 1.0 — I wanted to rewrite it. Now I had someone with experience, so I said: great, you're going to rewrite it. I helped with the architecture to make sure he didn't fall into the same traps I fell into. He rewrote it from scratch in Qt and C++, still using the trace-cmd code for the trace management. And that's how we got KernelShark 1.0. And — for those of you with little kids, I have to do it — Kernel Shark do-do-do-do-do-do-do, Kernel Shark do-do-do-do-do-do-do, Kernel Shark do-do-do-do-do-do-do, Kernel Shark. It goes on, too. One last thing: I did it in my last session, so I have to do it now. I always take selfies of the audience. This is called a camera — I don't make phone calls with this. Smile. So let's continue. So KernelShark, what is it? It's a graphical user interface front end for trace-cmd. What is trace-cmd? Well, it's a command-line front end for ftrace. And what is ftrace? Hands up — who here does not know what ftrace is? Wow. Usually when I ask, there are a lot of people who say they don't know what ftrace is. Well, I just gave you a definition, so technically everyone who was here for my introduction knows what ftrace is. Where is it? It's on git.kernel.org — that's where it's located. It currently lives in the same git repository as trace-cmd. But what we're working on right now is turning trace-cmd into a library — a bunch of libraries — so that we can move KernelShark into its own repo, and Yordan will become the main maintainer of it once that happens. So, what does it look like? You start up KernelShark for the very first time — no big deal. By the way, from here on it's mostly visuals.
You click on Tools, you click on Record. The old version did this too, but this one is a little better — a little color scheme thing; it's more legible. You can do this as a normal user: it asks to become root and executes a smaller recorder program. That makes it a little easier to audit, because when you write things that run as the superuser, you want to keep them condensed, so that no one can take advantage of a security hole or something and hack into your system. So we try to keep the recorder much smaller. And this is what it looks like. You can actually run the recorder by itself, but you need to be root to do anything, because it reads files inside the Linux kernel's tracefs that are protected so that only root can read them. That's how it gets all the events that ftrace produces. So here I can go and select certain trace events. And by the way, later today I am giving a tutorial on ftrace — so if you don't know what ftrace is, I'm giving a tutorial later today. Here I selected the scheduler events. So I clicked on a few things here, and then down below — you can see it actually needs time to parse it all. Right here you can put in where you want the output to go, and you can put in a command for it to run. If you hit Apply first, you can see that it actually executes trace-cmd record with the event systems I chose — I didn't pick any individual events. Then I hit Capture, and it recorded those things. And then what happens is it pops up with the data. So you get all this. What you see here is per-CPU data in the visualization: on the CPU plots, each different color happens to be a different task, so you can see how things are scheduled.
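The record dialog is driving trace-cmd underneath, so the same capture can be sketched from a terminal. A rough equivalent of what the GUI does here (the "sched" event system and the traced command are just examples) would be:

```shell
# Record all events in the "sched" event system while running a
# command (requires root); the data is written to trace.dat.
trace-cmd record -e sched ls

# Dump the recorded events as text, or open them in the GUI.
trace-cmd report | head
kernelshark trace.dat
```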
If a plot is blank, that means the system is idle. If you just see a black line with nothing in it, that's the idle task — because when the system is idle, the idle task itself can still produce events: interrupts can come in when there's no actual task running, and that shows up as a little black line. But here you can see the different events on each line. Down below is basically the output of trace-cmd report — it shows you all the output as text, and you can search within it. So we want to zoom in, because right now we can't really see anything. You just press your mouse button and slide it across — you get this little box — and when you let go of the mouse, it zooms right in. So if I go back: this part from here to here becomes that window. What you see there is what was inside that little box. So you can get down a little bit deeper. If you put the mouse over an event — you see the mouse over here, I put it over an event — you can read up top that this was my Chromium task on CPU zero. These little magic characters here are the same as the latency format fields. Now, that's something I probably should document better — I don't currently describe it in my documentation. The first one means interrupts are disabled — or a dot if they're not. The next one would be a capital 'N', which means the NEED_RESCHED flag is set. If you're inside the kernel, an interrupt can come in and say: look, we need to schedule — there's another task that wants to run that's higher priority than the task running right now. So we set the flag saying we need to schedule, so that when we re-enable interrupts, we know we have to call the scheduler. Then you'll see an 'H' there when it's running a hard interrupt, as opposed to an 's' for a soft interrupt, doing different things. And the last one, I believe, is the preempt count.
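Those four characters appear in every line of the latency format. As an annotated sketch — the sample line is invented, but the field meanings follow what was just described:

```shell
# Annotated latency-format field from a made-up trace line:
#
#   <idle>-0  [000]  dNh1  49471.553762: irq_handler_entry: ...
#                    ||||
#                    |||+- preempt-count depth (here 1)
#                    ||+-- 'H'/'h': in a hard interrupt ('s': soft interrupt)
#                    |+--- 'N': the NEED_RESCHED flag is set
#                    +---- 'd': interrupts are disabled ('.': they are not)
```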
I'd have to look it up, because different kernels have different information in that field. So when I click on an event, you'll see a little dot here. This is one of the enhancements in KernelShark 1.0 that the old one didn't have: this little dot appears on the selected event, and if I move across, the dot bounces to whichever event it corresponds to. Before, you just saw a line and you didn't really know which CPU it was on — you had to look down here to see which CPU it was. So with marker A — I click marker A here — it gives me the timestamp, the same timestamp up there, and it gives me a little spot in the graph. If I hit this double-plus button, I zoom all the way in, as far as it can go, so I can see everything there. Then I hit marker B, and it gives me a second marker. So I want to see from over here to over here: this guy, which happened to be a sched_waking event. It was waking this "compositor", whatever that was — I think it's one of the threads of the Chromium task. And out of curiosity, I wanted to see how long it took to wake up. So right there I said: okay, this is where it woke up — that's the sched_waking event. The sched_switch event here shows you that the compositor was actually scheduled onto the CPU, and you notice it started right there. Now, here's something interesting. I like showing bugs, because as I write these presentations I usually find bugs, and I list them up — this way I can tell Yordan: here, I found some bugs, and I'll report them. Although this one is really a design decision that Yordan and I are still debating whether we like. The sched_switch event is an interesting event because it represents two tasks.
Right here, it's the idle task that is scheduling in the compositor task — that's where the compositor thread starts. But because I'm here, it's really the idle task that is actually the current task at this point. Now, the reason I say I think this is a bug: I click down here — this is where the bug was, because the sched events kind of screwed things up. I do a right click to get the menu, and I want to plot the task. But right here it's kind of funny, because it says "idle" — at least it exists, right? You can see it in there. And this is where the bug is: it's saying it's idle with process ID 13482. I can guarantee you that the idle task does not have a process ID of 13482. The idle task has a process ID of zero. So why is that? It's actually showing the process ID of the compositor task, which you can see over here — 13482 — but the idle task really has a process ID of zero. It got mixed up, and that can be confusing. So, just to let you know, I will probably fix this; the question is which one we should show right there. When we add it, you notice that I added the idle task with process ID zero. Now, what's interesting about this: the idle task is a little confusing, because there's really more than one idle task. The idle task is kind of magic in the kernel, because it runs on every single CPU. Where everything else is a single entity, there's a magic idle task for every single CPU, and they all have to share the same process ID. That does sometimes confuse KernelShark, because it expects every process ID to map to a single thread. So here's what's interesting when you plot a task: remember I told you that on a CPU plot, each color is a different task.
On a task plot, it's the other way around: each color is a different CPU. So you can watch things migrate. So I click on the screen and scroll over — let's see, right here — because now I'm more interested in seeing the events of the actual compositor task. I click there, and now I actually get the real name followed by the real PID, and I add that guy. So now I have this task plotted, and I can see something more interesting. Here's something that's really kind of a preview of what KernelShark can do — it's actually a plugin; I wasn't going to talk about it here, but I can show it in the demo. The plugin we added is a scheduling plugin that modifies how scheduling events are drawn — that's probably also why I got one of those bugs, because it's the plugin that did it. You'll notice this green box here, right? On a task plot, this green box shows me the span from where the task was woken up to where it was actually scheduled. So from here I can click marker A, click up here where the wakeup is on this guy — okay, sched_waking — then go over to marker B and click the sched_switch. And between the two, you get the wakeup latency: that's 48, so between the time it was woken up and the time it was scheduled was 48 microseconds. Now, there's another enhancement Yordan made that I thought was really cool, because I never thought about it: the timestamps now have spaces between the digit groups, so you can see which are the milliseconds, which are the microseconds, and which are the nanoseconds. That's really nice to see. So, let's go back and do something a little bit more interesting. Um — I don't know what this slide was. I clicked on the capture — oh yes, that's what it was: the mouse pointer. I didn't include the mouse pointer in the screenshot.
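The same wakeup-latency measurement can be sketched from the command line. A minimal version, assuming you already know the name of the task you're chasing ("compositor" here, as in the talk):

```shell
# Record only the two scheduler events involved in a wakeup
# (requires root):
trace-cmd record -e sched_waking -e sched_switch sleep 1

# Then subtract the sched_waking timestamp from the timestamp of the
# matching sched_switch for the task of interest:
trace-cmd report | grep -E 'sched_(waking|switch)' | grep compositor
```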
You can import and export settings. When I did a lot of these screenshots, you have to remember whether to include the mouse pointer or not. Usually I include the mouse pointer to show you what I'm clicking on; I forgot to do that in this one, so there's also an arrow pointing at Export Settings. So you come up with this dialog and say: okay, I want to save my settings. In other words, if you've got some really complex settings, you can save them all there. Then I go to Import Settings, and this time I'm going to run something different — I can actually see what I did: scheduler events, and so on. I called this guy "migrate". This is the actual test code I wrote back in 2009 to debug the real-time scheduler. What it did was create a bunch of tasks — more than the number of CPUs — each with a different priority, and then run them. The highest-priority task would run in a very quick interval: run a little bit, go off, run a little bit, go off, run a little bit, go off. The next task would have a slightly lower priority and would run a little bit longer, over a longer period, and so on and so forth. Since there are more tasks than CPUs, obviously there's going to be bouncing around, and I wanted to make sure the highest-priority tasks migrated the least. And it wasn't happening — things were going crazy. So I wrote this to record it, and then I got to watch what was happening. When I record this, this is what you get: this whole huge graph of all these tasks — a very pretty picture. Now I only want to see — is this on "show events"? No, I actually clicked on the wrong thing; it's supposed to be "show tasks". I don't care about the events here — it's supposed to be "show tasks". That's one of those screenshot mistakes, where you have five seconds to pick the right thing.
Although I thought I was on "show tasks", I didn't verify it when the screenshot happened — so going back here, that's a mistake, it should be on "show tasks". So I clicked here on all the tasks and picked the tasks I want: all the migrate tasks. One, two, three, four, five, six, seven, eight, nine — nine tasks, running for, I guess, eight seconds. So now you get this. You see these big gaps here — this is where they weren't running. Interesting. So I asked: why are these gaps here? These are all real-time tasks — running, running, running. Why are there huge gaps here? And what was interesting: I put marker A here, clicked, marker B here, clicked, looked at it, and it was pretty much one second — exactly one-second intervals. Then I checked this little tiny bit here, and it was about a twentieth of a second. Anyone know why? Right — you got it: RT throttling is enabled. There are two files in /proc/sys/kernel that control this. The values are in microseconds — that's the little "us" there. And, you know, one million microseconds is one second, so the period is one second. And this says real-time tasks are allowed to run 950 milliseconds out of every second. So if you go back: that's the 950 milliseconds out of every second that you can actually see in the visualization — because the migrate test basically pounds on the system, and RT throttling kicks in. In fact, when I looked at dmesg, you get this little "RT throttling activated" message. Okay, so let's try to be a little more real-time. So I took a more recent kernel — although we're at 5.2 now, I found out that 5.2 crashed on me when I ran this test, so I had to go back to 5.0. And I enabled CONFIG_NO_HZ_FULL. Does anyone know what CONFIG_NO_HZ_FULL is? Does anyone here know what CONFIG_NO_HZ is? Okay, okay, I want to explain this.
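The two files in question are sched_rt_period_us and sched_rt_runtime_us. A quick look at the defaults, plus the knob for turning throttling off entirely (be careful with that one), might look like this:

```shell
# Both values are in microseconds.
cat /proc/sys/kernel/sched_rt_period_us    # typically 1000000 (1 s)
cat /proc/sys/kernel/sched_rt_runtime_us   # typically  950000 (950 ms)

# So RT tasks may consume 950 ms out of every 1 s period, which
# produces the ~50 ms gaps seen in the trace.  Disabling throttling
# (as root) lets a runaway RT task monopolize a CPU -- use with care:
echo -1 > /proc/sys/kernel/sched_rt_runtime_us
```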
This is a really interesting thing. You should be running all your machines with CONFIG_NO_HZ — that's definite. Basically, what NO_HZ means is this. If you're familiar with HZ, you have a timer tick every jiffy — and if you know anything about the Linux kernel, just about everything is defined in terms of the jiffy. A jiffy is defined at compile time: it might be 1,000 hertz, so a jiffy is every millisecond. It's what the scheduler uses to figure out how much time each task should get. Some kernels use 250 hertz — I can't do the calculation in my head right now for whatever period that is. So 250 times a second, you have this little tick going off. The tick also defines your usage accounting: if you ever look at the runtime of a process, or use the time command, it shows you user time, system time, all that — that's defined by the tick. So it isn't really calculating the actual time; it's an approximation. The higher your HZ, the more precise those values will be. The problem with the tick is that it interrupts the kernel. You're going to get this interrupt at that rate: if you have 1,000 hertz, then 1,000 times a second your computer is going to be firing off the tick to do some time calculation. Well, what happens when you want your machine to go to sleep? If your machine is idle and the tick still goes off, your CPU will never get into a deep sleep state. So years ago — it's actually been in the kernel for some time — we got this thing called NO_HZ, CONFIG_NO_HZ, which means that when the system goes idle, it turns the tick off. And then when a real interrupt comes in — something besides the timer actually happens — the machine wakes up, looks at the time difference, and injects however many jiffies were supposed to have passed.
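The calculation skipped above is simple enough: the tick period is just one second divided by HZ. A tiny sketch:

```shell
# Tick period in microseconds for common CONFIG_HZ values.
echo $(( 1000000 / 1000 ))   # HZ=1000 -> 1000 us = 1 ms per jiffy
echo $(( 1000000 /  250 ))   # HZ=250  -> 4000 us = 4 ms per jiffy
echo $(( 1000000 /  100 ))   # HZ=100  -> 10000 us = 10 ms per jiffy
```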
There's a bit of work in there, because if you have a timer that's based on jiffies, you have to make sure you still set an actual hardware timer — you can't just turn off the tick. If something's expecting to go off in 100 jiffies — you said, go off in 100 jiffies and do something — you'd better keep that slot in there. That's NO_HZ. So you do want to be running NO_HZ, and I think every distribution compiles it in. Because with NO_HZ enabled, your system can get into a nice deep sleep, and your battery lasts a lot longer on your laptop. If you go and compile your own kernel and turn off NO_HZ, you'll notice your battery lasts a lot shorter. NO_HZ_FULL is something else. This is a real-time thing. NO_HZ_FULL says: the tick that remains is mostly there for scheduling — so if you have exactly one task running and no other task on that CPU, why do you have a tick at all? Sometimes real-time people want a task that polls on device driver information. Say I'm watching the network card and I want to see when packets come in, and I need to respond to packets on the network card immediately — within a microsecond. Now, if you wait for the interrupt to come in — and by the way, if you have NO_HZ enabled and you're waiting for the interrupt and your machine goes to sleep — some CPUs can take up to a millisecond to wake up. That's a huge latency. So sometimes they have a user process just spin, not letting the CPU go to sleep. Yes, it burns electricity, you're killing the climate, and all that. But what you have is a task spinning on some memory location. And the thing is: say you need microsecond latency. Well, this tick that goes off can take two or three microseconds to do its work.
So when the packet comes in while you're in the middle of a tick, you can get two or three microseconds of latency, and you've just missed your deadline. There are real-time systems that can't spare that; they need to be able to spin and not be interrupted by the kernel at all. So what we're working on — it's still not completely finished — is CONFIG_NO_HZ_FULL, which says: when we have a single task running on a CPU, we just turn off the tick and act like it's idle. It's a little more work, because that guy is running: if it does system calls, system calls will trigger ticks. So it has to be running in user space — if you have something that runs fully in user space, that's when you want this. So what I did was boot up my test box with this on the kernel command line. I enabled nohz_full — you have to say which CPUs; there are ways to apply it to all CPUs, but that adds a bit of overhead too, so you don't want this on general distributions. This is for when you have a real use case for it, where you really need to turn off ticks; on a general-purpose operating system it's mostly going to be left off. So you add nohz_full. I also cheated and used isolcpus — you can use cpusets instead, but if you're trying to duplicate this, it's just easier to say isolcpus. So I gave it these two, CPUs two and three. What this tells the kernel is: CPUs two and three are not going to run anything. When the kernel boots up, it says: okay, these two CPUs, I'm not going to schedule anything on — except for the per-CPU kernel threads that have to be there; those still get scheduled. But no ordinary user task will ever land there: the affinity mask of every task will have these two CPUs removed.
The rcu_nocbs option talks to RCU, which is kind of like the garbage collector of the Linux kernel; it has callbacks that kick off threads and such. This says: don't run the callbacks on CPUs two or three. So even if CPU two or three needs this garbage collection, RCU will say: okay, if I need a thread or something, put it on one of the CPUs other than two or three to handle it. Then, after I booted this, I said: trace-cmd record -e all — give me all events for one second. That's all I did. And this is what I got. Up here, CPU zero is doing a lot of something — what the heck is that? CPU one has some stuff going on. CPU two did a little bit of stuff, and that shouldn't have happened. CPU three did nothing, which was good — but two and three should both have been completely empty. Anyway, I said: okay, let's just zoom in here and see what's going on. So I zoomed in, and CPU zero is doing a lot of little things. It's not spinning — you see it's doing lots of small things. I measured what's going on on CPU zero and, look at that: it's firing at regular tick intervals. So it looks like CPU zero is not tickless — it's actually running the tick constantly; CPU zero is never going to sleep. I need to debug why that's happening — I'm not sure. Sometimes tickless operation doesn't work on all your CPUs; the other CPUs obviously weren't doing it. So, let's put in a spinner. Let's add something else. I have this user-space spin program: all it does is spin for however many seconds you pass as a parameter — 30 seconds here. It does nothing but spin for 30 seconds. And I ran it with taskset. So basically, here's what I recorded — look at that. taskset just says: okay, I want this guy to spin on CPU two — set its affinity to CPU two.
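Putting the three options together, the boot setup and the first recording step look roughly like this (CPU numbers as in the talk; the options are on the kernel command line, so this part is a config fragment rather than a script):

```shell
# Kernel command line additions -- keep the tick, the scheduler, and
# RCU callbacks away from CPUs 2 and 3:
#   nohz_full=2,3 isolcpus=2,3 rcu_nocbs=2,3

# After rebooting, record every event for one second (as root):
trace-cmd record -e all sleep 1
```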
Even though the kernel will not schedule anything there on its own, you're still allowed to say: run this on that CPU. So I say: okay, run on that CPU. And I'm going to record something: the function graph tracer, with a max depth of three. What that means is it will only trace three functions deep; it won't trace further down. I only care about the first few functions that are there — I don't care about anything else. It's like: if I go into the kernel, I just want to see why I went into the kernel. At first I usually do a max depth of one, but the NO_HZ_FULL code adds a wrapper around things, so all I got was the wrapper. And then under that, there's another wrapper, and I was like: crap. So to see what was actually happening, I had to go to max depth three to get down to why it actually went into the kernel. And this is what I got. A lot more — a lot busier. I mean, this is basically an idle system here, but this guy is running like crazy and has a lot going on. What I found out was that this guy was pushing the RCU callbacks — remember rcu_nocbs: CPU two's activity calls into RCU, which triggers a bunch more RCU work going on over here. So I said: okay, first of all, I clicked on CPUs up there — let's see what the CPUs are doing. I plotted only the CPUs I wanted — I only wanted to see these three CPUs, just to clear it up. Then I said: okay, let's apply filters to the graph. I don't care about the rest; I just want these guys. So I filtered this guy here, and I filtered the CPUs on this plot. Now I'm only going to see CPUs one, two, and three. I zoomed into this little spot because it seemed interesting. And yeah, there's a lot of stuff. I didn't really look deeply into this part, but then I looked at what the heck is going on here when it's only a user-space spinner. So I went here, clicked, and: look, it's doing some vtime — it's doing some virtual time accounting.
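The spinner-plus-tracer step can be sketched like this. "spin" is the speaker's own test program, so its name and argument are assumptions; --max-graph-depth is trace-cmd's option for limiting the function_graph depth:

```shell
# Pin the user-space spinner to the isolated CPU 2 (30 s runtime):
taskset -c 2 ./spin 30 &

# Trace with the function_graph tracer, at most three calls deep,
# so only the entry points into the kernel show up (requires root):
trace-cmd record -p function_graph --max-graph-depth 3 sleep 10
```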
It's updating a virtual time accounting function — but why is vtime accounting happening on CPU two? Then I come down here and: wait, IRQ 24 triggered here. Why is interrupt 24 triggering here? Guess what: isolcpus does not touch the interrupts. Over here we've got a bunch of interrupts going off on CPU two and CPU three. This one is the local timer interrupt, which will go off regardless — but for the rest, okay, I have to do something about that. So, let's retry this: ls all the /proc/irq/*/smp_affinity files, and for each one, echo f3 into it. It takes a CPU mask, in hex. It's usually ff — that's eight CPUs, zero through seven. Knocking off bits two and three drops those two bits, so that second f turns into a three: f3. Echo that into each file, run it, and then I did the same thing again. I got much better results. But what's interesting here is that I still see: tick work, tick work, tick work, tick work — very, very regular. So I looked at it, and interestingly enough, it's four seconds apart. Every one happens at four seconds: four seconds, four seconds, four seconds. They have the same user-time usage, and if you read what it's doing up here — user time, runtime — oh, it's doing accounting. It's still doing accounting. Remember I said NO_HZ_FULL is still a work in progress? This is why — there are actually things that still need to be done. But we are able to say: hey, we won't fire the tick all the time; we'll fire once every four seconds. So yes, it still happens, but the chance of a collision now is much lower. There are patches out there to eliminate all of this; we're still working on it — yes, we need to fix that. So: KernelShark can be used to visualize ftrace. There's a lot of filtering of events, and there'll be a lot more coming, too.
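The mask arithmetic and the retry can be sketched like so (IRQ numbers vary per machine, and some per-CPU IRQs will refuse the write, which is fine):

```shell
# 0xff covers CPUs 0-7; clearing bits 2 and 3 (0x04 | 0x08 = 0x0c)
# turns it into 0xf3, steering interrupts away from CPUs 2 and 3:
printf '%x\n' $(( 0xff & ~0x0c ))

# Apply the new mask to every IRQ (as root); ignore the IRQs that
# refuse to move:
for f in /proc/irq/*/smp_affinity; do
    echo f3 > "$f" 2>/dev/null
done
```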
The thing is, once KernelShark 1.0 got out — this is when we were able to split out the libraries; there's actually libkshark now — we could start working on KernelShark 2.0. This is why we were so excited to get 1.0 out: because 2.0 is in progress. We have a lot of prototypes working; we're playing with it, trying different ideas. We're going to make it much more pluggable, so you can visualize more — we have Python plugins. Yordan, from his CERN work, is familiar with NumPy — how many people are familiar with NumPy? Wow, okay. He's making a NumPy interface to crunch the numbers in the trace data. And we're working with people like Wolfgang Mauerer, who is doing statistical analysis of real-time tasks, and we're going to be working with him to test theories — to take these gigabyte traces and crunch the numbers to try to show statistically what the worst-case execution time is. Linux is so complex, and a lot of real-time computers are so complex, that you really can't do it the old way anymore. So we're working on that. And one more thing — oh, and better recording features, and tracing virtual machines along with their hosts: that's coming, and that's what I'll show you on this slide. Beyond that, I'm working on something called the Unified Tracing Platform, where I want to get all the tracers — BabelTrace, LTTng, perf, ftrace — to have libraries, so that any tool — whether you're writing a tool, or a Python plugin, or Perl (yes, I guess that's still out there; I like Perl), or Go, Java, whatever — has a good interface. If you need some sort of tracing, and this tool or that library has the feature, I want you to be able to share everyone's tracing utilities and use them anywhere. This is a community effort.
Why have competing tracing implementations, when you could just say: hey, this guy does this part better, but I need this other thing too, and it's done better over here — let's work together and merge them. And that's what KernelShark is trying to do with tracing virtual machines. This is actually a screenshot of one of the prototypes. I didn't quite get it working for today, because I couldn't get the timestamp synchronization right in the end — my machine is a little different from the machine we ran it on. Ideally, you put an agent on each guest, and then from the host you can say: record this guest, record this guest, record this guest, and record the host. Then it merges the data, and it can say: okay, this thread here on the host actually represents virtual CPU 4 of that guest, and it attaches them and shows this to you. So this actually shows you the events that happened on the guest. There's a host thread that represents that virtual CPU for the guest, and you should never see two events at the same time with the guest running while the host thread is doing host work. It's just like a context: you have kernel mode and user mode — well, now you have kernel mode, user mode, and guest mode, and guest mode itself has its own kernel mode and user mode. This is inside the kernel of the guest, and from here you can see it actually exited the guest, went to the host, the host handled the event, and then went back to the guest. Here's a longer one, so you can actually measure the times between these things. Say the host is blocking something and you want to see why the guest is slow: you can use this as a visualization and see — ah, the problem's in the host, not the guest. Or you can say: ah, the problem's in the guest; don't blame the hosting provider. Well — I actually have one minute, so I guess I'm not doing a demo, but.