OK, let's do it. So today, we're going to make a small adjustment to our process model that's going to allow us to talk about the rest of the process life cycle. So we've been talking about processes, which are one of the core operating system abstractions. And we're going to continue to talk about processes today. Hopefully, we'll get through this by the end of the week, because by next week, I want to be talking about synchronization, which dovetails a little bit with the second half of Assignment 1. I'm going to start doing this quickly in class. I'm trying to move announcements onto the forum, so I'm not going to do announcements in class anymore, just to create a little bit more time for us to talk about non-boring stuff. I have been sending out emails with links to the announcements that are on the forum. You are responsible for those announcements, so you need to read them and process that information. I'm not going to talk about it in class. However, I do want to start doing assignment checkpoints, so you guys have some sense of whether or not you are in the right place with respect to completing the assignments. These are hard to do for everything, but we'll give them a try. So for Assignment 1: if you're not started, you're behind. You're in trouble at this point, so that's not good. If you don't understand semaphores, you're behind. That's kind of step one of trying to figure out how to do this assignment. If you have working locks and that's it, you're a little bit behind. I would prefer that you were a little bit farther. If you have finished your condition variables and locks, then you're OK at this point. I see some wide eyes, I see some happy looks. And if you don't understand anything on this slide, you are way behind. Assignment 1 is one of the only exceptions to our rule about the assignments in this class being cumulative.
There are a couple of parts of Assignment 1 that you are not going to use in the future. You'll probably not reuse your reader-writer locks, and the synchronization problems that we ask you to solve, the stoplight problem and the whale mating problem or whatever it is, those don't matter. Those get compiled out once you start working on Assignment 2. So the place to devote your time and energy is locks and condition variables. Those need to be rock solid. And what I've told people who have completed that part of the assignment, and there are people who have completed that part, is that it's a good idea to come to office hours, sit down with a course staff member, and just have them look at your locks and CVs to make sure they are perfect. Because the tests are pretty good at this point. If you're running the Assignment 1 tests through test161, that suite is pretty comprehensive, but I'm not going to promise that passing it means your code is correct. And locks and CVs are both a fairly small amount of code. So if you put that in front of someone who's seen a million of these, they will be able to tell you immediately whether it's correct. Well, not immediately, maybe. It might take them a couple of minutes. But that's a good thing to do before you move forward. Because if you have problems with these primitives, which you will use in later assignments, you're going to be very, very frustrated. I mean, you'll probably never get to Assignment 3. You'll just give up and dissolve into a puddle by that point. But you'll have a problem with Assignment 2, because you need these to work. OK, any questions about this week's announcements? Stuff from yesterday or today? Again, I'm not going to highlight stuff from this. When will submission be done? Soon. Scott is finishing up some changes to test161. That said, you can still use test161 locally. How many people have used test161 locally? Yeah, just start doing that.
You don't have to submit anything. You can run all the tests. And it will print a score for the assignment and everything. It's pretty cool. For Assignment 1, it's a little different than what you're going to get on the server, because you guys have to write the reader-writer lock tests that we have on the server. But yeah, this was in the announcement for today. So I would suggest that you start using the test161 submission testing tool as part of your local workflow. Again, you don't have to submit anything. You can run the same tests locally and examine the output and have a pretty strong idea of the score you're going to get on the assignment before you ever submit. That's one of the nice things about it. So I would start using that. Other questions about announcements? Just one note. The public forum is something new this year that we haven't tried. So if you guys notice anybody on the forum who's doing random stuff, like posting stuff that's clearly incorrect, or calling people names, or posting huge chunks of solution code or whatever, please bring that to the attention of one of the course staff. We're on the forum. We're going to be keeping an eye on it. But at this point, anybody can create an account and sign on to the forum and start posting and reading posts. So just be aware of that. OK. So last time, we were about halfway through the process of inspecting how a particular program or programs use system resources, using a set of fairly cool command line utilities that you guys can use on your own Ubuntu installation. The last thing we did was use pmap, which shows us the memory mappings of a process. And this gives us some idea of what processes use memory for. And if you remember, we looked at bash. What are some of the things that bash has in memory that this command reveals? One thing, anything. Yeah, constants that came out of its executable file.
So remember, we saw a case right here where there's a small chunk, only 4K, and that's a significant number; we'll talk about that later. The memory that it's using is marked read-only. So those aren't parts of its code, which would be in this section that's readable and executable. Those are constants or other things that were loaded out of its binary file. What else? So bash has constants. What else has bash loaded into memory? Global variables: we have a section here that's read-write. These are probably statically defined C global variables that get declared at the top of a file and have global scope. What else? Heap: we saw a fairly large one. Well, I don't know, 2 megs. Not that big; it's bash, right? What could it be doing? A heap, what else? The executable code itself: this is the first part of its address space. This 876K, if you looked at it, these are instructions. This is the part of the address space that's used to store the code for bash itself. So as bash is running, the instructions that it's executing are being pulled from this section. And then we saw that it was using some libraries, which got loaded up here. What else? There's one last little significant thing. Yeah, Ed, what's that? Yeah, it's got the C library, right? So it's loaded the C library in here. And the C library is interesting, right? Because the C library, like bash itself, has a part that's readable and executable; this is the actual code for the C library. It has some of its own constants, and then it has some global variables it uses as well. So the structure of that part of the address space, the memory that bash is using for the C library, mirrors the part that bash uses for its own code. Last thing here, maybe it's at the bottom of the slide; I guess you can't see it. Stack. So bash has one thread, and you can see that there's one thread here that has a stack.
And it looks like that thread stack is using about 100K of memory. All right, questions about this before we go on? Yeah. So each thread gets its own stack space? Yeah, each thread is allocated its own stack. So one of the things that you should probably be doing: how many people are kind of unfamiliar with C code? OK, what I would do is make sure that as you're writing C code, you understand where the variables that you're allocating and the code that you're writing end up. How do I get something into this section? How do I get something in here? Again, this is not a C thing. Every program that you use works this way. If I did this for the Python interpreter, you'd see the same thing. If I did this for Node.js, you'd see the same thing. If I did this for Java, you'd see the same thing. Those are just programs that interpret programs that are written, and sometimes compiled, in other languages. But they all have the same structure internally. So I have dynamically allocated variables that get stored on my heap, which can be allocated and deallocated. I have local variables that go on my stack; those end up down here. I have static global variables that are never allocated and deallocated; those end up here. And I have the code itself. Yeah, question over here. What's that? Native methods? This is all native. Yeah, I mean, that's a great question, though. When you're talking about other languages and native methods, what is a native method in Python or JavaScript or Java? What does that mean? Does anyone know? What's a native method? What does it sound like? Implemented in hardware? OK, so we went too far down. Let's come back a couple levels. Yeah. Let's say built-in. It's a little different. Built-in functions don't have to be native, although they can be. What does it mean? Native. Native to what? The machine, meaning what? Mumble, mumble. Anyone? What's that? No?
Well, I mean, that is the environment to which it's native. But how does this work? So again, think about writing Java code. Well, let's say you're writing Python. The Python code that you write, does it get compiled? Do you guys have to run a compiler before you run Python code? No, it's never compiled. It's interpreted. So there's a program that takes your Python code and interprets it, parses the syntax, and does the things that the Python code says to do. So most of your Python code is written in Python. If you use a native library in Python, what does that mean? It doesn't have to be built into the system or pre-installed. It has to do with... yeah, OK, we're getting warmer. It usually is. Why? It can be or cannot be. I can write libraries that provide native methods. That's an interesting answer, but I think it's not true. Yeah, we're getting warmer. Native... yeah, you guys are warm. I think you guys know this; you just don't know how to say it properly. It's compiled code. It's code that's been compiled for that machine architecture. So as opposed to your Python code, which is never compiled and is interpreted, there are parts of the Python libraries, parts of the Python interpreter, and parts of external third-party libraries that are actually compiled for that architecture. So when you make a call into that library, the code that runs is not written in Python. The code was written in C, and it's been compiled. And the library that's loaded and run is, quote unquote, written in assembly; it's represented on the machine as machine code. So it's actually using lower-level instructions that are accessing the hardware directly. Why would you do that? Why would I ship something that has native methods in it? Yeah, it can be a lot faster. Yeah, absolutely. So I have a library. The process of executing Python code, with all the nice features that Python gives you, can cause a performance degradation.
Whereas the whole idea of compiling is that I give the compiler a chance to optimize parts of the code and to represent it in a way that the machine can run as fast as possible. It's not always true, it's not categorically true, but it is true a lot. OK, sorry about that little digression. All right, so here's the picture of the process that we have so far. So remember, this bash had one thread. We can verify that; it only has one stack, but we were also able to see that using ps. We have its process ID, which is the name that the system uses to think about it, to refer to it. And now we have this thing called an address space, which is a memory-based abstraction that we will talk about in detail later in the semester. But you can think of the address space as mapping how the process looks at memory to the contents of that memory. So we know some of the things that are in bash: its code, its static data, a heap where it dynamically allocates memory, probably using something like malloc, and then a stack for the thread. All right, so let's keep going. And these are, again, all commands that you guys can run. Not all of these tools may be installed on your machine, but they're very easy to install using apt-get. So you can apt-get install lsof, and you'll get this command if it's not already there. lsof: what do you think this does? List open files. So this command is designed to allow you to peek into the files that a particular process has opened at any given point in time. So let me ask you a question. Do you think you'd be able to log into Timberlake and run lsof on, like, the department web server's processes? Would that be something that you could or could not do? Yeah, these open a lot of security holes. So typically, you can only use these tools to inspect your own processes. If you have sudo access or some sort of superuser access, you'd probably be able to run these on other people's processes.
But certainly, looking at the memory mappings of a process can expose information about what it's doing, and looking at the files that it has open can certainly expose information about what it's doing. So this is bash again, 5257; that's our friendly PID. What files does bash have open? Does anyone know what these are? /dev/pts/0: it has that open three times. Anyone know what that is? Take a guess. It's bash, OK? So down here, this is its configuration file, right? And maybe it's using that to load in its configuration as it opens. What are these other three files? I want someone other than David, because I think David knows what these are. Yeah, how do you get data in and out of bash? You guys use the terminal. Well, I don't know. Maybe you don't. Maybe you have an assistant that types for you. Maybe you have a fancy voice thing that happens. But most of us use our fingers. So what bash is doing is reading data from the terminal. Where does bash write data? Like, when you type ls, think about what happens. You had to input data. You hit return. Then what happens? Yeah, bash writes information to standard out. And then there's a third one here. Does anyone know what that's for? Yeah, there's this extra file descriptor bash keeps open for something called standard error. Where is standard error usually directed? I mean, these are all pointing at the same place. So bash is directing standard error where? To your eyeballs, to the console. Same place. So it just gets mixed in with standard out. But you can change that when you run programs. You can tell them to direct those things to other places. This is a common Unix convention, where it's just convenient, in many cases, to distinguish between normal output from a program and output that is produced if there's an error. Now, the conventions for doing that are entirely up to you.
If you want to write your program so it generates all of its normal output to standard error and all of its error output to standard out, you can do that. And then people will hate you and not like using your program. I think lsof works by actually accessing the file table for the process, so it should show you everything that's in it. Yeah, and we'll come back to this later. All right, and then the fourth thing that's open here, as we pointed out, is .bashrc. So, unfortunately, when I built this slide, I had a really hard time finding something that had an interesting file open, so I just lied. I can do that; I guess they're my slides. So actually, great question, going back to... what's your name? Alex. Yeah, so you can actually see that this is the file table, right? This is a file descriptor number, right? So that's the index in the file table. We'll come back to this. Anyway, just in full disclosure, this wasn't open, so I just modified the output and fooled you guys. I mean, who cares? I'm the professor; everything I have to say has to be true. So yeah, just think of it as an alternative fact. All right, so just pretend that I caught bash when it started up and it was reading. At some point, bash has to open this file, right? It has to parse your .bashrc file and load in the settings that you use. So this has to be open at some point. The reason it's not open later is that once bash loads its configuration files, it closes that file, because it doesn't need it anymore. That file's only used during startup. OK, so here's our process model at this point. One thread, an address space with the variety of different components we've discussed, a PID, and then a file handle.
And what we're gonna do next, because it's important to be able to talk about the way that we create processes, is spend a little bit of time talking about how this is modified slightly to enable some common design patterns that allow processes to share data with each other. So this is close to the way things actually work, but it's not completely correct. All right, I'm gonna skip this aside... actually, I can't skip the aside, because it's too interesting. So you might ask, and somebody asked about top before: top, ps, all of these tools like pmap, where do they get this information from? They get this information from a file system called /proc. Has anyone ever poked around in /proc before? You can do this in your Ubuntu virtual machine: you can go into /proc, and you can start looking around. So what do you think is inside this directory, /proc/7615? What is that number? It's a task ID on Linux. So it could be a thread ID, it could be a process ID. Remember, tasks are sort of interchangeable. In this case, I think this is a process ID. And there's all sorts of cool stuff in here, right? So I can see the executable that it loaded. I can see information about its memory usage. I can see information about its scheduling, and the current working directory, so what directory is it inside. So this is pretty cool. Does anyone know what's kind of special about /proc? There's something magical and special about the /proc directory and the /proc file system. So I'm inside here. I'm looking around. Yeah. Is it not actually files? Yeah, it's fake. There are no files here. There is no place on disk where there's a file called /proc/7615. It's totally fiction. And this is cool, right? You can go in there and you can poke around with cd and you can do ls and you can do tab completion, and everything works.
But there are no files on disk anywhere that correspond to the /proc file system. This is a common way for the kernel to export information about the system. So every time a process is created, the kernel creates a new fake entry in this process file table, and it's able to fool you into thinking that this is an actual file. And this is actually super useful, right? But again, this is what's called a pseudo file system. When we come back and talk about file systems, keep this in mind, because this is kind of a cool lesson in the power of interfaces. All the kernel has to do is make sure that it respects the file system interface that other file systems use. And it can fool you into thinking that there are all these files and directories here that have no physical reality. They don't exist anywhere on disk. All right, cool, /proc. Yeah, yeah, that's how top works, right? So top and ps, all they do is read stuff out of this. If you look at top, it basically monitors /proc: it looks for new entries, and every second or so it reads the entire /proc file system, compiles all the information, and redraws the display. So this is cool. I could get this information in other ways, but this is kind of a neat way for the kernel to export it, because it allows all these other tools to work, right? So again, you can cd into /proc and you can start poking around in there, and you'll find all sorts of cool stuff. But there are no actual disk blocks that store the information for /proc. All right, I'm just gonna skip through the review unless people have questions. Okay, any questions about processes? So this is sort of where we've come to so far. We have some idea of what's inside a process. We remember that the kernel is responsible for isolating processes from each other and making sure that processes can't crash the system or interfere with the operation of another process.
But there are ways for processes to communicate data to each other, obviously. Otherwise, they wouldn't be very useful. Any questions at this point before we go on? Yeah. Basically, the information that it gets, it's not stored on the disk, so where does the information come from? Oh, that's a great question. So how does /proc get populated? Where does all that information come from? Who knows all that stuff? Yeah. Well, fork modifies this information, but who maintains it? Who knows the list of every process on the system, and all of the memory that they're using, and all the files they have open? Who knows all this stuff? The kernel, yeah. So the kernel is the ultimate gatekeeper for all this information, and the kernel is involved with creating /proc, right? So as you're poking around in /proc, the kernel is kind of on the fly being like, oh, okay, he asked me to list the /proc directory; let me give a list of every PID on the system. Yeah, so the kernel maintains all this information. It's what knows, right? This is an operating system abstraction, an illusion. Yeah. Yeah, absolutely, right? What if I modified one of those files? Most of those files you can't write data to, right? Some of them... well, okay, let me make sure. I suspect that /proc is usually marked read-only, right? /proc is a file system that's used to export data to user space. So it's the kernel saying, I wanna share information with you about what's going on in the system. And if you have the right permissions, you can actually access that information. Now, there is something, does anyone know this? This is sort of deep-Linux territory; someone's got to raise their hand. There is kind of an equivalent of /proc that you can write into, and it does magical things. Does anybody know what it is? /proc is like the kernel saying, here's a file system that allows me to export information.
There's another pseudo file system that's mounted at boot where the kernel allows you to provide information and to control the configuration of the system. Does anyone know what it's called? If you run mount in your Ubuntu virtual machine, you will see both of these file systems mounted in kind of special ways. So one's called /proc; the other is called /sys. If you go and start looking around in /sys, there are all of these interesting things. So for example, let's say that I want to change how the kernel adjusts the CPU frequency at runtime. I can do that by writing data into sysfs, and then the kernel will change its policies. I think on some systems you can actually put the system to sleep by writing a value somewhere into sysfs. Assuming you have the right permissions, right? I mean, you wouldn't want anybody to be able to do that. Yeah, so /sys is kind of the equivalent where, let's say I want to give user space processes the ability to turn the Wi-Fi on and off or something like that. I can do that by creating a file in /sys, and then to turn it on, I write a one into the file; to turn it off, I write a zero. Something like that. All right, yeah. I mean, that's one way to think about it, but they don't really need to be stored, right? The pseudo file systems, particularly /proc... I mean, the kernel can create /proc on demand, right? All the information that's in /proc is stored in kernel memory, but it's not stored as a file system. So for example, if you say, okay, show me the file descriptors that a particular process has opened, like lsof does, I think what would happen is when you read out of that entry in /proc, the kernel is like, oh, I know what I'm supposed to do.
It finds the task structure that it maintains for the process, which is something you guys will work on for Assignment 2, and it just enumerates the process's file table, writes the data out in a structured way, and that's what you get. So the answer is yes and no, right? I mean, all that data has to be in memory somewhere, or on disk, or in some way where the kernel can fetch it, but it's not stored in that format. There's not a file somewhere that has the data in that format; the kernel can just produce it on demand. Does that make sense? Okay, good questions, all right. So let's update our process model a little bit. In the current process model we have, processes have some references to open files, and we'll talk about exactly how these references work in a minute. The actual way that this works is a little bit more complicated, but I wanna point out something here, which is maybe one of the first cool system design principles that we're gonna learn in this class that comes out of the way operating systems are designed. So what happened here? In this model, the process has some sort of reference to a file, and in the actual model, the process has a reference to something called a file handle that itself has a reference to a file. Anyone know what this general design pattern is called? So I took something that had one link, and now, in order to get to the file object that has information about the file, I have to follow two links. Anyone know what this is called? It's sometimes referred to as adding a level of indirection. So again, in order for the process to deal with a particular file, I now have to follow two pointers. The first pointer gets me to this thing called a file handle. The second pointer gets me to the actual file object itself. These aren't really symbolic links in the typical way you would think about links to a file.
These are really references. Has anyone programmed using read, write, open, close before? Yeah? Well, actually, let's put it this way. You call sys_open, or you call open in the C library, which wraps the system call. What does open return? Okay, what is it? Have you ever printed it? What is it? You get something back, and yeah, they made up a type for it. It's called a file descriptor or whatever, who cares? Like, what is inside that thing? Yeah, it's an integer. Anyone know that? That's all it is, it's just a number. Yeah, you thought, oh, it's this magic structure, like they taught us when we did Java, and it's got methods... no, it's a number. It's C; we don't do objects. So, when you call open, you get back a number. Then later, if you wanna read from the file or write to the file, you pass the number back to the kernel, and the kernel says, oh okay, I remember what file you wanna use. So when you call open, that associates a number with that particular file, and then in later operations, you use that number again. Why do I get returned a number? How is that being used internally by the kernel? There's a huge hint up on the slide. Yeah. It's an index into an array. This is one of the reasons why systems like numbers like this: because it's an index into a static array. So, when you say write, you pass the file descriptor, and the kernel says, okay, they want to write to file three. It takes the number three and looks it up in your file table, which is an array. It translates that into the information it needs to finish the operation. It doesn't store the name of the file, but it stores something that's functionally equivalent to the name: a lower-level file object that allows it to figure out exactly what file you wanna use. All right. But the actual information that you need to complete the rest of the operation is split between these two things.
One is a file handle and one is a file. So, I don't know, it's like I wrote these slides or something; I find this all the time. I should just hit this button more. So, the file descriptor that you get back from open is an int. That points into an array. That array gets you to something called a file handle. There are three different things we're talking about here that are easy to get confused. The file descriptor is an int pointing into an array. The array contains a pointer to a file handle, which is a separate object. That file handle itself contains a pointer to a file object, which is a second object. So there are two structures in here and one int. And then the file system is responsible for taking that file object, the second object that I was able to reach using this chain of indirection, and actually finding the file on disk or over the network or whatever. The file system is responsible for making this work at the end. So I have multiple levels of indirection. A file descriptor maps to a file handle. A file handle maps to a file object. And the file object really doesn't even have to map to blocks on disk. It just has to map to something that acts like a file. Remember, we just talked about /proc. /proc has no blocks on disk. All /proc does is obey the file system interface; it acts like any other file system. Okay, so why would I do this? Don't add levels of indirection just to make your code more complicated. No one is going to be impressed. It's like, well, I had one hash table that was storing references, but I could create three and just chain them together. No. I mean, there is an actual reason to do this. And the reason is that file handles are one of the ways that processes can communicate with each other after a fork. And this is something that allows me to set up pipes and all sorts of other things. So it's important to talk about. All right.
So the actual file descriptors, those numbers, are private to every process on the system. When I open, let's say I want to open foo.txt, the kernel hands me back a number. That number is meaningless to any other process. So let's say the kernel hands me back five, and that's my reference to foo.txt. If some other process says, kernel, I want to read from five, the kernel will either say you don't have a file five open, or your five points to bar.txt or something like that. So the file descriptor is entirely private to the process. It's meaningless outside the process context. The file handles are usually private to the process but can be shared in certain cases. And we're gonna talk about the canonical case in which they're shared in a minute, which is after fork. So normally, when I open a file, I get a private file handle. But after fork, which is how processes are created, which we'll start talking about today or early on Friday, the file tables are adjusted so that the parent and child are initially sharing file handles. So the parent and child have the same files open at the same locations with the same file descriptors, but they're also sharing this file handle object. And this is really important, because it allows me to set up pipelines and other things. And then the file objects themselves, the lower-level file objects, are typically created only one per actual file, and those are maintained by the file system itself. These you don't even really need to worry about. You will encounter them when you get to Assignment 2. There will be a part of the structures that you have to create that corresponds to this file object. I'm sorry, it's not called a file object; it's actually called a vnode. That's not my name for it, but that's what it is in this particular case. And it's not something you really have to worry about, other than you have to know how to create one and how to manage those pointers properly. All right.
And this is one of our design principles, right? Adding a level of indirection allows us to share state and facilitate control. And we'll see this here when we do it with file handles. Probably the most powerful place that adding a level of indirection comes into operating systems is when we talk about virtual memory, which will be in about a month. Okay, so let me pause, and we're gonna start talking about fork, and then we'll come back to this file handle issue. Yeah. Yeah, so my file table is private to each process, right? Remember that the file descriptor is only meaningful to the process that receives it from open. So once I get it back, it's only meaningful to me. Now, think about the system call interface. Linux is like: here's how I can help you with these things. I can help you open files. I can help you add memory to your process. I can help you create new processes. I can help you communicate with another process, whatever. Windows is like: I'm also here to help you out, but I have a completely different list of things I can do. Now, the underlying capabilities are the same. I still need to be able to create new processes, create threads, access files, but the names are all different, the semantics are different, the calling conventions are different, blah, blah, blah. And so now, obviously, and it's cool I can say this today, and I'm very proud of the Windows people for doing this, because it's gonna cause more people to stop using Windows once they realize how cool it is to do some of this stuff using Unix-like commands: Windows 10 now includes an environment in which Windows can essentially run Unix executables natively. So that's how much similarity there is.
It took a while, but as of Windows 10, there's now a way for Windows to take Linux system calls: essentially, there's this new component, I think it's called the Windows Subsystem for Linux, that translates Linux system calls into Windows ones and accomplishes the same things internally. So there's a lot of overlap, but the interface that we're gonna talk about and the operating system that we're gonna talk about is Linux. Again, though, the concepts here are very, very similar. All right, I will see you guys on Friday. I'll answer your question if you come up here.