All right, there we go. Sorry about that. All right, course recap. So we learned a lot over the last, what, three or four months, so we get to recap our journey and begin our review process. Here is one of the slides from the very beginning of the class saying what an operating system does: it manages resources. You have applications that you knew from your intro course. I guess this one's ECE-specific, not EngSize-specific course numbers, and I have to go complain to the EngSize people because you didn't have anything really in second year. That is one of the things I will do in the summer; I will try to remedy that, or see if anything can be done about it. But this is how it looks: you start off writing C, that's your application, and it's using the operating system even though you might not quite know it yet. Then maybe you had a hardware course that taught you a bit about hardware, and the operating system sits in between. We know the two major parts of our operating system are the kernel and libraries, and we all like the kernel stuff, because that's what we spent most of our time discussing. There are three main concepts in this course. Virtualization: we share one resource by mimicking it, so each user thinks it has multiple independent copies. We did this for literally everything. We had virtual registers to mimic having multiple CPUs, we had virtual memory that mimics every process having access to all of the memory, and then we wrapped it up with a virtual machine: we virtualized the whole machine, so we can have multiple operating systems running. The next topic, which I'm sure was everyone's favorite, was concurrency. That's handling multiple things happening at the same time. The main concern there is data races. Data races are concurrent accesses to the same memory location, with at least one of the accesses being a write. If I know that off the top of my head, so should you.
And to go ahead and fix that, we had to have some type of synchronization. The main one was mutexes, to make sure that only one thread is running that code at any given time, so we don't have any concurrency problems. We also saw ordering with semaphores, condition variables, that fun stuff. And then we noticed that if we had multiple mutexes, we could potentially deadlock our code, which means it can't make progress anymore. So, concurrency: lots of fun. Then we ended up at persistence, talking about file systems, disks, RAID, all of that fun stuff. Hopefully you're mostly done with lab six; that's a file system, which is just how you organize files, or really the data, on disk. So, going all the way back: this was one of the first slides in the course, and now we can actually understand all of the CPU modes, because we visited them all in the course except for machine mode. Machine mode is the most privileged mode; it doesn't have virtual memory or anything, and it's mostly used when you're booting your computer. Then there is the next most privileged mode, called hypervisor mode, and that manages virtual machines. Less privileged than that, but still able to interact with hardware, is kernel mode, which might also be called supervisor mode. It can access hardware directly; it is the thing managing virtual memory and deciding which process to run when. And then user mode is where all your applications and libraries run. It can't interact directly with the hardware; it has to ask the kernel to do things for it through system calls. So, there are different ways to architect your kernel, because we said: hey, what's part of the kernel? It's whatever code is running in kernel mode. There are different things you can put in kernel mode, and that depends on your kernel architecture.
If you have a monolithic kernel, you're putting most of the operating system services directly into kernel mode. That would include things like virtual memory management, process scheduling, inter-process communication, file systems, and device drivers; they all run in that privileged mode, and you have to do system calls to interact with any of them. This is what the Linux kernel is, and, hand-wavy, most other mainstream ones. The other extreme is called a microkernel, which tries to run the minimum amount of services in kernel mode. The minimum the kernel has to do is deal with virtual memory, process scheduling, and some basic form of IPC; then you can implement your file systems, device drivers, and other services in user space, but everything would have to go through the kernel's basic IPC mechanism. So it's a bit slower, because everything goes through that system call interface, which is slower than just calling functions, and this architecture is not really used that much. macOS will put things like device drivers in user space, because the macOS kernel is sacred: you can't touch it or modify it or do anything. So you can have architectures that are in between; monolithic and microkernel are just the two extremes. Then we talked about libraries briefly. The kernel is definitely part of the operating system, but what else do you consider part of the operating system? That kind of depends on what applications you want to run and what libraries they might need. Something like NetworkManager, which manages your wifi and which network you want to connect to, might use different libraries than, say, Firefox, which draws a user interface for you and uses the network, blah, blah, blah, and all that stuff.
All those libraries may be considered part of the operating system, depending on what you want your applications to do. Your Android phone runs a Linux kernel, but you'd probably say it's a different operating system than your Debian virtual machine, because, well, it runs Android applications and Debian doesn't really run Android applications, so you might consider them different operating systems for that reason. But if you just ran Hello World, they could both run it and it would behave exactly the same, so maybe you don't consider them different operating systems if you only care about command line things (although that's kind of a pain to use on your phone). So, then we talked about processes. The operating system, more specifically the kernel, has to maintain processes, with all the information stored in a process control block. That includes all of its state: all of its current registers, all the virtual memory, and, as we now know, the open file table; everything like that is stored in the process control block. And we have to create new processes. Does anyone remember the system call to create a new process, off the top of your head? Fork, yeah. It's the only way to create a new process, and it creates a new process that is an exact copy or clone of whoever called it, and they're independent from that point on. On Linux and the Unix-like systems, that's the only way to actually create a new process. If we want to run a different program, well, the kernel is going to have to re-initialize that process control block, and load into memory all the details about that program, and that's done with the execve system call. So that's how you can run a different program.
Then we had the cool terminology where you could have a parent with orphans and zombies, and you can kill your children, and all that fun stuff. So we realized we're responsible for managing processes: in Unix we have a strict parent-child relationship, where the newly created process is the child and the parent has to manage it. You should be able to identify and prevent zombie processes. A zombie process is when the child has died and you have not acknowledged it yet, so it's still consuming some resources, at least its process ID and its exit status. Then we had orphan processes: that's where the parent terminates before the child, so the child has to get re-parented, likely to init, or to a subreaper if there is one, as in what you did in lab two. And then, question: we know that user and supervisor mode communicate via system calls; how do the other modes communicate, aka supervisor to hypervisor, and hypervisor to machine? They communicate exactly the same way as you communicate from user mode to kernel mode, but instead of a system call there's just another special instruction, so it'd be called something like a hypervisor call or a machine-mode call. There are different interfaces defined for those, which we didn't go into, because we didn't go into how they're actually implemented, but the idea is the same. All right, then next we did IPC. IPC was lots of fun: read and write through file descriptors, and file descriptors can represent anything, including a regular file, which we now kind of know is represented through an inode. Then we could redirect file descriptors for communication, like making one process's standard out go to another's standard in through a pipe; that was one way we could communicate between processes. We saw signals, so you could send a signal to a process to tell it it should stop. They're technically IPC, because you can send signals between any
processes you would like. And there is a special signal to make sure we terminate a process: SIGKILL, aka kill -9. That was a fun one, and signals are basically like interrupts. Then we did a bunch of examples using pipes, and we even had a brief detour on sockets and all that fun stuff, but we took a detour before we started threads to talk about scheduling. Fairly boring, fairly straightforward. The first algorithms we looked at were first come first served, which is just FIFO, aka going to McDonald's or something like that: you're just in a line. Then we did shortest job first, which tries to reduce the average waiting time. Then we added preemption and did shortest remaining time first, so if a new job came in that was shorter than all the other ones, we'd immediately switch to it, and that optimizes for the lowest average waiting time. Then we did round robin, to try and optimize fairness and response time. With all the scheduling algorithms, we tried to balance the average waiting time, the response time (how long it took from when a process arrived to when it first ran), and fairness, aka how often each process actually gets to run. One thing we wanted to prevent was starvation, aka a process not getting to run at all. Then we added more complexity to our scheduling: we introduced priorities, and that also introduced priority inversion, which was another issue. If a high priority process depends on a low priority process, then you should probably treat that low priority process, at least temporarily, as a high priority process, so it can clear that dependency, and then you can revert it back. We solved that problem by doing priority inheritance. And then we discussed that some processes need good interactivity and some others not so much; maybe you only care about response time for programs you are interacting with, and then you have background processes you don't really care about, you just want them to finish
quickly. Up to that point we were only talking about single CPUs, so we started introducing multiprocessors and said, okay, the dumb thing we could do is just schedule processes onto CPUs as CPUs become available. But, especially after dealing with concurrency, you can imagine how that would actually be hard to implement: you'd have locks around the whole thing, so it would be slow. So one thing they did was have a scheduling algorithm that only runs on a specific CPU core, and then just occasionally move processes between CPUs. We briefly talked about real time (briefly, for class time): that requires some predictability. Linux is too complicated to actually do hard real time, so it does soft real time, but if you do embedded systems, you might have to start counting cycles, because you need a reaction within a certain time period. And then we briefly looked at Linux's current scheduler, called the completely fair scheduler, which tries to model the idea of ideal fairness, where every process gets approximately an equal share of the CPU time. All right, then we switched to virtual memory, everyone's favorite topic, or one of the top two at least. How we did virtual memory was: we use pages for memory translation, aka just big blocks of memory. We divided memory into blocks called pages, and then we did our translation at the page level, and to do that we use page tables. Our first idea was just to have one big table of all the mappings, so it was just an array of page table entries. What a page table entry tells you is the physical page number that the virtual page number maps to, plus all the flags: whether or not it's valid to access, read, write, execute, and then some other ones we experimented with, like the custom bits you can use for whatever you want; you used those in lab three to do copy-on-write. We saw the reference bit when we talked about the clock algorithm, so that's also one of the
bits in the page table entries, populated by the MMU. We discussed at this point that if we have one single large page table, it's always huge, so we discussed multi-level page tables, and this is where the fun really started, right? How we did that is we took our virtual address and split it into different indexes, and the number of bits we used for each index was determined by how many entries we could fit in a smaller page table; our idea was that each page table fits exactly on a page. This was a lot of power-of-two math, but hopefully it wasn't too bad. Here our page size was four kilobytes, which is pretty much always the page size for most systems. That's 2^12 bytes, so we need 12 bits to reference an individual byte on our page, and that's why our offset was 12 bits. Our page table entry in this case was eight bytes, aka 2^3, so we could fit 2^9 of them on a single page, and that's why we had nine bits of index for each level of page table. The reason we have three levels of page tables is that we wanted to support, in this case, a, what was it? 39-bit virtual address, right? 39, yeah, a 39-bit virtual address. Why 39 bits? Well, that's half a terabyte, or 512 gigabytes, of virtual address space, and that's good enough for most current processes. If we determined that that was woefully inadequate, we would just add another level, and then we could support a 48-bit virtual address, and that would probably keep us good for a while. But the reason we don't automatically just go to four levels of page table is that this is really slow when you have to translate an address, right?
To translate a single address in this case, I have to access an entry in the L2 page table, then look up the entry I need in the L1, then the entry I need in the L0, and then I finally get the address I was looking for. So I turn one memory access into four, which is really, really slow, and if I added another level, that's just another memory access: now it's five. All right, any quick questions about this? Our favorite thing. So then, after we did this and realized it was slow, we briefly talked about how we don't have to write the translation: if we're the kernel, we just have to manage the page tables. The MMU, or memory management unit, is hardware on your CPU that actually walks the page tables and does the translation for you, and in order to speed it up, it uses something called a TLB, or translation lookaside buffer, which is basically just a cache: it saves any recent translations you've done, so we speed up our memory accesses. Remember we had that little program that accessed something on every page and was really slow, and then when it accessed everything on the same page it was really, really fast? That was mostly because of the TLB and not having to retranslate everything over and over again. That is also why they told you to make sure all your memory is contiguous so it's really, really fast. Okay, that was a lie: they told you to make it contiguous so that it's all on the same page, not because it has to be contiguous. If it goes across a page boundary, that's more we have to translate, and that's basically why it's slow; otherwise, if you're just using physical RAM, accessing memory is fast. It's called random access memory for a reason. Then we had a brief detour, because I thought you needed a break from that hardship. Then we explored a question, yeah: so the TLB would basically just store the page table entry associated with the translation. Yeah, it wouldn't store the offset, because
the offset, well, whenever we do the translation, we never have to touch the offset, right? So it would just store, yeah, the PPN; it would probably store the whole page table entry, so it has all the permissions and everything, but it would definitely store at least the PPN. And then we had other stuff with the TLB: if we context switch between different processes, we have to flush the TLB, because remember, your virtual addresses are only valid for your process, so you probably shouldn't use another process's old entries in your process. So we had to flush the TLB, which is just another word for clearing it, clearing everything so that you don't use values from another process, and all that fun stuff. All right, anything else with virtual memory? All right, mostly okay. Then we had a brief detour; we had the question about priority scheduling, yeah. And then, if we did mmap: that's not going to be covered on the exam, but that's basically a way that we can manage our own virtual memory. We can ask the kernel: hey, just map this file to memory so I can access the file through memory, instead of doing read and write system calls all the time; just do all the mappings and handle it for me. So that was fun. Then we went to threads, and then things went downhill again, because then we had concurrency. So we had threads, and we related them to processes: they are lighter weight, and they share memory by default. The only things that are independent in a thread are its registers and its stack; otherwise, threads live within a process, so they share all the same virtual memory and the same open file table. That's all shared within a process, and each process can have multiple threads; originally you just have one to start. And that's where we had some concurrency issues, but both processes and kernel threads at least enable parallelization. There are different ways we could implement our threading library; most implementations map one
user thread, aka one of your pthreads, to a kernel thread, so the kernel actually manages all the scheduling and things like that, and if we have multiple CPUs in our system, it can run them all at the same time, aka in parallel. We also had some complications with what happens when we fork when we have multiple threads, or get signals when we have multiple threads. If we fork, the new process that gets created only has a single thread, and that single thread is a copy of whichever thread called fork. For signals, if you have multiple threads and you send a signal to the process, a random thread gets to deal with the signal. So that was fun, and the pthread implementation was one-to-one. In lab four you also did user threads: you just had one kernel thread, and then you implemented all of your user threads 100% in user space. You managed them through ucontext: you created their own stacks, and then you managed switching between them by yielding, and all that fun stuff that you implemented. So after we have threads, we have synchronization issues, the biggest one being... oh wait, then we took another detour, because I thought you needed a break. So we had another detour on sockets, and I showed you how to use ucontext. You don't need to know sockets for the exam; aside from the setup, which you would probably just look up anyway whenever you have to use them, you just use read and write system calls. They won't act any different; they're just IPC, but possibly between processes on different machines. ucontext would be fair game for the exam: it's basically the state of the user registers. I probably wouldn't ask anything too complicated, but it's fair game, because you used it in lab four to implement your threads. Again, ucontext is basically just all of your user CPU registers: your stack pointer, your program counter, and any other values in the registers. All right, then we had data
races, and yeah, things went off the rails again. You definitely, 100%, a million percent need to know what data races are and how to prevent them; that's probably most of your exam questions. So we had mutexes. (One moment, my microphone broke... there we go, I fixed it; I should keep it out of my pocket.) Okay, so any questions about this stuff? Lots of fun stuff, questions like: hey, is there a data race in this code? Which, I mean, if I ask you that on the exam, the answer is probably yes, and then you fix it. So that was data races. We also talked about how to ensure ordering, because before we only had mutual exclusion. The main way to ensure order between threads, making sure one thing happens before another thing, was semaphores. They basically work like an atomic integer: they have some initial value you can set, and then they have two fundamental operations. You can post, which basically increments it by one, and there's a decrement called wait. It decrements atomically, but the special thing it does is that if the current value is zero, it will block and wait for another thread to increment it. The rule is it will never decrement below zero, so it can never go negative; it will block instead. So you can force a thread to wait for another thread to do some operation: the other thread posts, and then the waiting one gets unblocked and can continue. We still have data races, we still need to prevent data races. We saw some more advanced locking: we went over condition variables for more complex condition signaling, and we briefly talked about locking granularity. You had some practice with that in lab five: I told you to use one lock, you discovered it was slow (hopefully), and then I told you you can do whatever you want. What you likely did is you made a lock for each entry in the hash table, and then boom, it was a lot faster. Did anyone try making a mutex for each individual node in the
linked list? So you didn't get it running, okay. You could have tried, I guess; it was also hard to implement. Having a mutex for each node is kind of a pain to implement, as someone found out for you, and if you actually implemented it, you would probably find that it's really slow too, because you just have way too many locks at that point. So there's a trade-off there: how many locks you have, how big each protected region is, and so on and so forth. Well, we used multiple locks to speed things up, and then we have the situation where you have to prevent things like deadlocks. You likely didn't really have to deal with that in your lab, but it's another concern: you want to make sure that you don't try to acquire a lock while another thread holds that lock and is also waiting for you. So we needed, what, circular wait: you need to have a cycle of locks. And another condition is hold and wait: while you hold a lock, you try to acquire another lock. If you have those conditions, you can deadlock, and we prevented deadlock by breaking one of them. We broke hold and wait with trylock: remember, we tried to acquire the lock, and if we didn't get it, we unlocked the one we had, went to sleep for a little bit, and then tried again. The other strategy we had was to always acquire locks in the same order; that way we did not have any circular dependencies, so we were good. All right, and then we were done with the hard part of the course, joy. Then we talked about disks and enabling persistence: we talked about SSDs and RAID. SSDs are kind of like RAM, except they're accessed in pages and then blocks: there are a bunch of pages in a block, and they had weird rules where you can only erase whole blocks at a time, and you can only write to a page if it was freshly erased. So, some complications there: the hardware has to work with the kernel to get the best performance, aka TRIM, so you just tell
it that you are not using a block anymore, and you kind of have to know which pages are in which blocks, and some minor details like that. The major part was using RAID, aka multiple disks, to tolerate failures and improve performance. We went over a bunch of different modes, the first one being RAID 0, which was striping, aka what I'll call Sonic mode, the gotta-go-fast mode: you just stripe your data across all the drives and you can use all of them, but if you lose a drive, you lose data. So, not really good if you care about your data, but if you want things to go fast, well, it was n times faster, where n is the number of disks. The other extreme was RAID 1, aka mirroring: all your disks are exact copies of each other, so as long as you have one left, you're good; you don't lose any data. Then we talked about RAID 4, which introduced the idea of parity and had a dedicated parity disk, and we said RAID 4 was stupid, because everything was concentrated on that parity disk: it always had to be updated, no matter which physical disk actually held the data. So then we had RAID 5, which is the same idea but distributes the parity across all the disks. Then we had RAID 6, which has two disks' worth of parity, so you can tolerate up to two disks dying without data loss. And then we briefly talked about RAID 10, or RAID 1+0 or whatever you want to call it, which stripes across mirrored pairs; the number of disk failures you can tolerate is kind of up to luck, so it might be only one, because as soon as you lose both disks of one pair, you're screwed. Then we had file systems, to describe how files are actually stored on disks. API-wise, which we had looked at before: you can open files, and change the position you read and write at through the lseek system call, which you were using in lab six because you're managing one big one-megabyte file. Each process has its local open file table, and a file descriptor is basically an index into that table, and then there's a
global open file table that the kernel manages for you, and that contains a pointer to a vnode, which represents something you can read and write to. If it's a real file, that vnode is an inode, which we now know a bit more about, and it also has the position and the permissions, so we know where in the file we would read or write if we did a system call. In order to store a file on disk, we went over a few different allocation strategies. Contiguous, aka just like an array: you just pick the block it starts at and how big it is; we said that was a bit silly to do. Then we went over linked allocation, which is basically a linked list of blocks; we said that would be kind of slow, because the pointers are stored on the blocks themselves. Then we did FAT, or the file allocation table, which is the same idea, but we just use a big array of pointers instead of having them on the blocks, so it's a bit faster. Then we did indexed allocation, which is basically a big array of block pointers, and said that doesn't really work that well, because we can't support big enough files. So then we talked about inodes, which are more of a hybrid: they more or less use the indexed strategy, except they borrow the idea from page tables and have multiple levels. But unlike page tables, where you always have three levels, or always four levels if you have a 48-bit virtual address, an inode has varying levels. In an inode, the first 12 blocks are direct blocks, stored right in the inode; then there's one single indirect block, which points to a block of pointers, and those pointers point to actual data blocks; then there's a double indirect block, which points to a block of pointers, which point to blocks of pointers, which point to data blocks; and then it goes up to triple. We figured that was good enough: that's big enough to support all the files we currently need. And on Unix, everything's a file. Names in a directory can either be hard links or soft links, most things, and a hard link
is basically just a name attached to an inode. You can think of a soft link as just a name pointing to another name, though technically it's also a name to an inode, and that inode has the target name stored in it. As you might have discovered from lab six, soft links, yeah, technically they're stored the same way, but you can think of them as a name to a name, and a hard link as a name to an inode. I think that was that, so any questions about this fun stuff, or I guess lab six is valid too? All right, so then we talked about page replacement. It was fairly boring, fairly straightforward. We saw some algorithms. There's optimal, the one we used for comparison: we look into the future and figure out what page to replace by which one is not going to be used for the longest time. Then we did random; we didn't have an example of that, but it actually works surprisingly well, because it's better than FIFO. FIFO sucks, but it's easy to implement; it had that weird anomaly where we had less memory and got fewer page faults, which suggests that if we want our computer to go faster we should just get less memory, which obviously doesn't make sense, so it's not actually used. We talked about least recently used, which looks into the past to try to approximate optimal: you replace the page that hasn't been used for the longest, but it's expensive to implement. What we can actually implement is an approximation of least recently used, called the clock algorithm, which we did not that long ago. What was that, Friday?
I think that was Friday, Saturday, last Wednesday, something like that; it wasn't that long ago, so I probably don't need to go over all the details. There will probably be a question on this, because sure, why not. Then we briefly, oh wait, no, that was longer ago, then we briefly talked about memory allocation. The kernel doesn't have malloc or anything like that, so it has to implement its own. If you want to implement your own memory allocator, aka any dynamic memory allocation, you have to be concerned with fragmentation: fragmentation between allocations or blocks is called external fragmentation, and fragmentation within a block is called internal. There were three general allocation strategies for allocations of any size, and they all use linked lists: best fit, worst fit, and first fit. Then we talked about other memory allocation strategies. We talked about the buddy allocator, which keeps things in powers of two, with a linked list for each power of two, so within a list everything's the same size. The main idea behind that was to make merging blocks back together really, really fast: every block of memory has a buddy, since you split each block into two, so if you and your buddy are both free, you know you can merge back together, and since computers like powers of two, you can implement that very efficiently. Then we talked about a slab allocator, which basically allocates as if from an array, and then we can figure out what is allocated by using a bitmap that keeps track of whether or not each index is allocated. Then we wrapped up and virtualized a whole machine, which allows multiple operating systems to share the same hardware, the same idea as with processes: to provide isolation between different operating systems. The hypervisor is the thing that allocates resources and actually controls the hardware. Type two hypervisors are implemented in user mode, and they're slow; type one hypervisors need that hypervisor-mode hardware
support. There are some complications if they overcommit on resources, but you can move virtual machines to different actual physical machines, because it's basically like a context switch. And then we briefly touched on containers, which give you the benefits of virtual machines without the overhead. And that's the course, that was pretty much the entire course, so I'll leave the few extra minutes for other questions, or let me know on Discord what we want to do in the last lecture. I assume questions, probably on virtual memory and concurrency and stuff like that, but let me know. I can also try drawing, so here's me trying to summarize the whole course in one little picture. I don't know how well it works, but bear with me. So we had our registers and stack, and those together made up a thread, right? And then in a process there would also be a heap, so your stack and your heap are all in virtual memory, and that's managed by the kernel. Then another thing in your process is the open file table, and one of its entries could reference an inode, which is how you access the hardware storage through the kernel. All of those are in a process, so this big thing is the process. And then to figure out which process to run, that's what scheduling is, so that's this: your kernel also has to deal with scheduling, which is deciding which process should be executing on the CPU. Everything in red there is actual hardware, so your compute would likely just be your CPU, and then we manage physical memory here, so this would be physical memory. That's managed by the kernel, because each process uses virtual memory, and managing that mapping is done through page tables, which are also handled by the kernel. And then inodes, you know, they're on storage, accessed through a file system. And then the other part that's missing is when we had multiple threads. So when we had multiple threads, you
know what color should I use? So when we had multiple threads, that's when we had concurrency, whoops, concurrency. That's where we had concurrency issues: when we had multiple threads, because they're all accessing the same memory, which would be the same physical memory. And that's why you didn't have concurrency issues before, because you didn't know about threads and everything was independent. Data races did not exist before, because you only had a single thread. Anything else that should be added to that, or anywhere else something fits? And then, I mean, I guess I could draw a box around the whole thing and be like, virtual machines, but that's that. That drawing-ish kind of makes sense, kind of. That's kind of the whole course, right? You can point at pretty much any topic in the course and it's somewhat there, I think, right? All right, any other questions in our last few minutes? I have not started writing the exam. Yeah, so, how I write the exams: we'll have the review, well, we can do a review session on Friday, and I'll think about the exam more, but generally how I do the exams is, the midterm was mostly the first half of the course, so the final exam is everything, but since I already tested you on that stuff, it's going to be weighted less heavily towards the midterm material. It's still there, yeah; the final exam is the entire course, so everything will be there. Generally how I do it is, if there were parts of the midterm I didn't like, or things you did badly on, then I would include them in the final, so probably expect a virtual memory question. Yeah, yeah, so like the midterm, I'll give you all the functions and everything like that, so you have them. All right. Will we program stuff from scratch? Probably not. At most, I haven't really done one, but at most if I did want a "design this program" question, like, how would you do this. I don't care about you writing C code on an exam, unless you want me to. Yeah, yeah, probably, I think. I forget. Most of the time there's a
process question, a question on threads. If you look at the past exams, I put topic headers on all of them, and then I'll do something about ordering, something about locking, something about deadlocks, something about files, something about, and then short-answer questions fill in the rest. A clock question? Yeah, clock, page replacement. I very likely do not need to write a scheduling question. Yeah, I think that's pretty much it, unless you want, like, a "write me a C program" where I will type it out, and if you have a syntax error it doesn't compile and you get zero. No? Yeah, they're kind of silly. Yeah, if you want, you can ask me. Spoiler alert: if you have a good idea for exam questions, I'd probably steal it and put it on the exam. I mean, I've written some down already, I just have to sit down and make the exam, but some questions are based off literally what I get asked in class, if you didn't notice that on the midterm. So, yeah, I guess we're out of time, so I'll be around again, just let me know on Discord. Look at the past exams, so 344 and this course; those past final exams, look at them. Any questions you want me to definitely go over on Friday, just let me know. So, just remember, pulling for you, we're all in this together.