To talk more about parts of the system call interface, I want to give you an introduction to what the kernel is, what it does, and the role it plays in helping processes. But because assignment one is out, and we want you guys to be prepared for it at least somewhat, although I know it's been out for a week, we're going to move this material up a little bit sooner. And it does sort of make sense in the context of fork, which we've been talking about recently, because fork is what creates synchronization problems for your operating system, and of course on modern systems fork and its variants, clone and things like that, are what create synchronization problems for applications as well. So we're going to talk about this for probably the rest of the week. Okay, at this point, if you have not started assignment one, you're in trouble, so please start the assignment. There will be no late credit given for assignment one. It's only 50 points, not the end of the world, but we're not going to drag this out. Plus there's actually a part of assignment two that's due a week from Friday, so we're just going to push onwards. So get your points for assignment one. Or you can implement a lot of the same things that you could have gotten points for on assignment one as part of assignment two anyway, because you have to have working locks and condition variables to do the rest of the assignment. So those are your choices at this point. Probably where you should be at this point is working on the second of the three other parts of the assignment past locks and CVs, which are reader-writer locks, whale mating, and the stoplight problem. So hopefully you've solved one of those three problems and you're moving on to the second one, and that's what you guys can work on for the rest of the week. Okay, any questions about this?
So as Scott announced, the test161 submission server is open. We've started to see people submitting. I would suggest the following. Regardless of where you are on the assignment, please submit. Just go ahead and submit something, even if you have zero points. Scott, I think in his post, reminds you there's a verification step that at least partly makes sure that things are set up properly so you can submit later in the week. The reason to do this now is that you don't want to be trying to solve these problems at 4:30 on Friday afternoon. So get to the point where you have one submission up and you've gone through the process. There's some setup that you have to do that requires setting up some new things on GitHub, creating an account on test161, adding some things to your test161 configuration, et cetera. So get that done. We will certainly be able to help you with it in office hours. Scott's tool does a really nice job now of checking for a lot of things before you submit, so there are a lot of problems that we can catch before you actually submit anything to the server, which is good. There is one little gotcha for this assignment that Scott has talked about in his post. Even if you don't implement reader-writer locks, you need to define the reader-writer lock primitives somewhere. My suggestion is to just have them return false or something, or have them be regular locks, which won't get you any points, but will at least allow the test161 tests to complete. That's the one little gotcha here; it's something that we can't really test before you push things to the server. Any questions at this point? As far as we are concerned, we are ready to rock. So we're expecting to see assignment ones come in this week.
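To make the "have them be regular locks" suggestion concrete, here is a minimal sketch of what stubbed-out reader-writer lock primitives could look like. This is an assumption-heavy user-space analogue, not the real assignment code: the function names mimic an OS/161-style rwlock interface, and a pthread mutex stands in for the regular lock you would actually use inside the kernel. Every reader and writer serializes, so it earns no rwlock points, but the tests can at least run to completion.

```c
#include <stdlib.h>
#include <pthread.h>

/* Hypothetical user-space analogue of an OS/161-style rwlock interface.
 * A pthread mutex stands in for a regular lock, so readers and writers
 * all serialize -- correct but not a real reader-writer lock. */
struct rwlock {
    pthread_mutex_t rw_lock;
};

struct rwlock *rwlock_create(const char *name) {
    (void)name;  /* name kept only to mirror the expected signature */
    struct rwlock *rw = malloc(sizeof(*rw));
    if (rw == NULL) {
        return NULL;
    }
    pthread_mutex_init(&rw->rw_lock, NULL);
    return rw;
}

void rwlock_acquire_read(struct rwlock *rw)  { pthread_mutex_lock(&rw->rw_lock); }
void rwlock_release_read(struct rwlock *rw)  { pthread_mutex_unlock(&rw->rw_lock); }
void rwlock_acquire_write(struct rwlock *rw) { pthread_mutex_lock(&rw->rw_lock); }
void rwlock_release_write(struct rwlock *rw) { pthread_mutex_unlock(&rw->rw_lock); }

void rwlock_destroy(struct rwlock *rw) {
    pthread_mutex_destroy(&rw->rw_lock);
    free(rw);
}
```

The point of a stub like this is only to satisfy the linker and let the test suite proceed; the real implementation has to allow multiple concurrent readers.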
And like I said, 5 p.m. Friday, that's the deadline. Get your 50 points before that point; at that point, they're gone. We will take your best assignment one submission before 5 p.m. and then move on. Yeah. It's the submission time that matters, right? So when you submit through test161, if there is high load on the server, you'll get put in a queue and your assignment may be run and scored later. Assignment one's pretty fast. This becomes more of a problem once you get to assignment three, although at the same time with assignment three, we see fewer and fewer people submitting. It's like the end of the Oregon Trail: everybody's died along the way. So it's not as big of a problem. But yeah, for assignment one, the testing is pretty fast and doesn't consume a lot of resources, and it's the submission time that matters. So if you submit something at 4:59, we'll grade it. I hope that it works, right? Somebody else asked this: we will give you your best score before the deadline. So feel free to submit as many times as you want. One thing I would ask you guys not to do, and this is not as much of an issue for assignments two and three: please don't repeatedly bang on the submission site. It's cool, like we'll handle it. And I think there's logic there so that you can't submit something if something that you've submitted is already being graded, which is good. So if there's a long queue and you're in it, I think we defend against that and prevent you from submitting again. The reason why people do this on this assignment is they don't write tests for reader-writer locks. So write the tests for your reader-writer locks; that's the solution to that problem. It's not that hard. I'll be happy to describe to you how our test works and you can implement it yourself, but please don't whack against the server to try to get things to work.
The submission site is not a really great debugging tool. For assignments two and three, we give you all the tests. I've seen people asking questions like, what does this test do? You have the code for the test, so go read the code. That's the easiest way to answer that question. I know that the tests aren't necessarily super readable, sorry about that, but they are there, so there's no mystery here. For assignments two and three, we don't hold anything back. You guys have the test suite that we use, and we run it as you have it. All right. So, let's talk about synchronization. Yeah. No, we don't grade your tests. At some point, we will. At some point in the future, we'll create broken reader-writer lock implementations and make sure that your tests catch them, but we don't do that yet. I wish we did. All right. Any other questions? I want to move on to talk about synchronization. Okay. Yeah. You should. That's our goal. Our goal is that the score that you get before you submit is the score that you get after you submit. Now, there are some subtle differences between the testing environments that can cause your score to vary, but our goal is that the two are the same. And actually, we have data from last year showing that they are quite close. Again, for assignment one, the only difference is that this is the one time that we are going to run a test that we don't give you the code for. So there, your score can differ depending on whether your reader-writer locks actually work. If you don't have a test for them and you're just submitting blindly and hoping, or maybe your test just returns true, then you get credit locally, but the server test exposes problems in your implementation that you don't check for. But yeah, that's the goal. That's the whole goal of this approach. We don't want you guys to have to submit to the server to get a grade. We want you to be able to iterate locally and then submit when you feel good about the score that you have.
Test161 is a testing tool that should be part of your development workflow. It's not just a grading program. Yeah. Okay. Any other questions? All right, so let's talk about synchronization. How many people have dealt with synchronization in some way, shape, or form before? Okay, this is good. Every year, that number keeps creeping upwards. So here's what you need to know. Here's sort of the primer, and then we'll come back and explain some of this later. Even on a uniprocessor system, one of the ways that the operating system makes better use of system resources is by creating the illusion of concurrency. Now, today, even your smartphone has two or four cores, so parallelism is a reality. Your phone can be doing two things at the same time. There can be two or maybe four or eight separate threads of execution that are running at the same time. In the past, that wasn't true. Processors have, for a long time, been the most expensive part of a system, and so there were many decades for which computers came with one processor. And actually, the reason why we've chosen to go to multi-core is pretty interesting, and we'll come back and talk about that later in the semester. It has to do with some interesting hardware realities that have started to take hold over the past couple of decades. So in the past, you had one CPU, but the operating system created an illusion of concurrency by quickly stopping one thing and starting something else. So even on a single-core system, you could be listening to MP3s, watching YouTube, loading your ops class assignments in another browser, typing email, whatever. So there's this illusion that the operating system creates that multiple things are happening at the same time. And the way the operating system does that is it exploits your limitations as a human being.
So you guys can't see the fact that things are stopping and starting; maybe if you were like a dog with really, really good eyesight or some sort of animal that could see this happening, you could see things stopping and starting all the time. But we have limitations, the computer exploits them, and to us everything blurs together. It looks like things are happening simultaneously, despite the fact that what's actually happening is things are switching back and forth extremely quickly. The abstraction that we use to multiplex the CPU is something called a thread. Threads exist within processes. There are also kernel threads, which the kernel uses; you can think of these as being attached to the kernel process. So threads are the core abstraction when we multiplex the CPU. All right. This illusion of concurrency is fundamental to modern interactive computer systems. The reason that we swap the CPU back and forth so often is not just to make it look like multiple things are happening at the same time. It also allows us to make use of all the resources on the system more efficiently. So for example, if there's a program that's making heavy use of the disk, if I allow that program to run a little bit, it may get to the point where it's going to have to do something with the disk again, and then it can sleep and something else can run. So swapping the CPU back and forth between multiple threads also allows us to put all of the resources on your system to work more efficiently. And that's what you want, right? I mean, you paid a lot of money for the computer. You want all of it to be working at peak form. And the idea of concurrency, thinking about multi-threaded programming, in a lot of cases also has a reasonable mapping down to the ways that we want to structure our applications. So the user clicks a button.
That button causes some code to run. That code does something. Maybe it makes some sort of web API request or updates the screen or loads content from a file on disk or performs some sort of database operation or whatever. But this idea of tying a sequential set of operations to some sort of event is pretty useful. How many people have done something like this? How many people have written an event handler in JavaScript? Or I guess Java has UI frameworks, although I'd be frightened to use them. Or Python, Go, whatever. I mean, you've written an event handler for something like a button click, the user clicks a button, on Android or something like that. So you guys have some idea of how this works. Like I said before, the illusion of concurrency is also useful to hide the latencies caused by slow hardware devices. The CPU is by far the fastest part of your system. Everything else is way slower than the CPU. That includes memory. Why do you think you have three levels of cache in between the CPU and main memory? It's because main memory is actually dog slow from the CPU's perspective, and modern processors play a lot of cool games to try to hide the latencies associated with memory. You thought memory was fast? Then start talking about the disk, and we're into a whole new universe of slowness. And so a lot of what the operating system tries to do is make sure that even while we are waiting for some terribly slow device, like a flash drive that you paid a thousand dollars for, the CPU, which is tearing along at an incredible clip, can continue to do useful stuff. So that's another reason that we do this. However, concurrency forces us to reexamine how we build computer programs. Many of you are probably used to programming sequentially, and honestly there's really no reason anymore for us to continue to teach people how to program only sequentially. It really doesn't make a lot of sense, right?
I mean, I know it's sort of a comforting mental model, but really you guys should see concurrency a lot earlier because it's fundamental to the systems that you're running on. Your smartphone has four cores. This is just how the world works now. In order to make sure that things can happen in parallel and safely, we need to do some extra work. So the idea of being able to use multiple CPUs simultaneously, or to break our application into multiple threads that can run simultaneously, is really useful, but it requires us to do some work to make sure that things happen safely, particularly to coordinate access to shared state. So imagine I have a data structure and I have ten threads that are accessing that data structure in parallel, at the same time. I have to do some extra work to make sure that those threads can access that data structure safely and that the data structure stays consistent as those multiple threads access it together. So: coordination, how do we enable multiple threads to work together? And correctness, how do we make sure that certain invariants still hold? How do we make sure that that data structure has certain properties that cause it to remain usable over time, even when it's being accessed at the same time by a bunch of different threads? What we start out focusing on, normally, when we start talking about concurrency is correctness. And that's what you care about, particularly if you're trying to do the assignments in this class. You care about the fact that even if a bunch of programs are making system calls at the same time, or a bunch of programs are accessing memory at the same time, there are certain invariants that hold across the system that allow the core data structures you guys are used to not to become corrupt. So this is important. Okay.
Now, the operating system itself was, you know, I don't want to get into too much trouble here, but you guys don't know anything about computing history, so I'm safe here, right? You guys were born like five years ago. The operating system is historically one of the more interesting parallel or concurrent systems to write. Today, every modern operating system supports user-level concurrency, so it's quite easy and natural for user-level programs to use multiple threads to do things at the same time. How many people have written a concurrent user-level program? Okay. So this is something that's doable. Why? Why is this so important now? Let's say someone was like, you know what, I've got this fantastic operating system for you. There's only one limitation: your process can only have one thread. Like, you know, that's cool, that's not a big problem. Why is that a non-starter today? Yeah. No man, I'm just building some sort of party finder app, right? So I can dominate the college party finder app marketplace or whatever. No, it's not big data, right? I mean, maybe that's big data. I don't know how many parties are going on around here. Dan? Yeah, but whatever. It doesn't matter. I mean, take Facebook and Gmail, it's cool if they both have one thread, because I can switch between them back and forth, right? Yeah, but so what? I can just use some sort of user-level threading library that makes it look like that one kernel thread is actually multiple user threads, right? Why is this a problem? Threading is not usually used for correctness; usually if one thread does something wrong, everyone's going to die, right? I just can't keep those things that isolated. Yeah, but I can just trick you. That's not the problem. We started off talking about something: what is the reality of modern hardware?
Yeah. Yeah, but I can do that. At some point, here's the problem. If your application can only use one kernel thread, it can only run on one core. So even if nothing else is going on and all the user wants to do is use your application, they bought four cores. They paid good money for those four cores, and three of them are going to be sitting there doing you know what, right? And that's not what you want. So as multi-core became a reality, it became increasingly important for user programs and for the system itself to be able to run things concurrently, and that's also why you see a lot of interest today in languages and tools that allow you to write concurrent code more easily. Because I've got four cores on my phone, on my computer; on the machine under my desk, I might have 30 cores, right? So it's just not acceptable anymore to have user programs that only have one thread, unless it's like bash, right? Now, even going back into the dawn of time, when processes sometimes only had one thread, the operating system still was fundamentally a multi-threaded process, right? Because as soon as I have two processes, the kernel has to deal with this sort of problem. So the only way the kernel could get away with not thinking about concurrency and synchronization is if the kernel only ever ran one program, and that's not a very interesting system, right? Imagine you boot up your computer and it's like, which application would you like to use? And you click one and that's all you get to use. You have to reboot, and then, okay, I want to listen to some music, okay, reboot, listen to some music for five minutes, okay, reboot. That's actually kind of how it was. Did anyone have an old Apple II, Apple IIe? Has anyone ever used those computers? That's sort of how they were; they had a disk. Does anyone know anything about this? Do you guys understand how computers used to work? At all?
I know, okay, just hold on, grandpa's just going to lecture you for a minute, right? So a computer used to have a disk drive, right? I know those are weird, okay? Like an actual disk drive, and there was a disk, it was like that big, okay? It held like 32 bytes of information, all right? That's what fit on this disk. Actually, I missed out on the real thing; there used to be these floppy disks that were like eight and a half inches. I never experienced those. Those were awesome. It was like a record, you know, except it probably held less information than the record. So you stuck that in, you turned the computer on, and then whatever was on that disk was what you did with the computer. If what was on that disk was some sort of number game that has monsters in it, then that's what you did for half an hour. When you were done, you turned the computer off, you took out the disk, and then it was like, oh, now I want to play Carmen Sandiego. So you put in a different disk, turned on the computer again, and then you would play Carmen Sandiego for a little while. This was how computers were when I was growing up, right? And I'm not that old. So it's kind of awesome how much progress we've made in this area, right? I mean, you guys have no idea that this ever happened. Yeah. No, because if you have a floppy disk in your life today, something has gone wrong, right? I saw one around the department the other day and I was like, what is that? I had this burning desire to just throw it in the trash. I was like, this should not be sitting out here. Is this like a computer history museum? People are going to come and be like, I'm not going to study computer science here, there's a disk, right? That's evidence that this place is backwards. There are also books about computers lying around. Those are almost as offensive, right?
Anyway, like huge books about the .NET framework, like who is going to read that, right? Anyway, okay, back to multiplexing. The kernel is also actively involved in these attempts to hide latency. Fundamentally, remember, one of the kernel's responsibilities is multiplexing system resources, making efficient use of all that hardware that you paid good money for. And one of the core ways that the kernel does this is by using concurrency, right? Stopping one thing, allowing something else to run, starting that thing back up again when something needs the disk or needs to wait for a lock or whatever, putting one thing to sleep, starting up something else. Okay. And the kernel ends up in this position where it has lots of shared state, because the kernel is also gating access to all those underlying system resources, and lots of threads. And so the kernel has this difficult synchronization problem to solve. The other problem, of course, with operating systems is that if the operating system gets it wrong, everybody suffers. If your application has a synchronization bug, who cares? From time to time it crashes, and then you have to restart it. My lovely wife uses wonderful computer programs that are very effective except they crash all the time, right? I have no idea why that is. Maybe it's some sort of synchronization problem. But who knows. So this still happens. But it doesn't crash the whole computer, whereas when the operating system gets these sorts of things wrong, bad stuff happens. Look at that, it's like I wrote these slides. Okay, let's keep going. I've been using these terms a little bit loosely today, but if you want to help yourself become a modern computer programmer: has anyone ever seen this talk by Rob Pike on concurrency versus parallelism? Anybody? That's awesome. It's really, really good. So: concurrency, parallelism.
So the point he's making here is that these are distinct terms. Concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of possibly related computations. Parallelism requires that two things can actually happen at the same time. But you don't need parallelism to write concurrent code. Even before we had multiple cores and multiple processors, people were writing concurrent programs. Concurrency means you expose ways that different parts of the program can act independently. That doesn't mean that they actually run at the same time. Parallelism means things are actually running at the same time. So this is an important distinction, and it's a great talk, by the way; it's worth watching. One of the things you have to do once you enter this brave new world of writing concurrent code is think carefully about how code can interleave. Because when you start writing system calls, when you start writing your locks and your CVs and things like this, you have to start thinking about different things that can happen, different ways that one thread might stop and others might start, which might cause data structures or other things to become corrupt. So here are the ground rules, unless you do something about it. Once you start applying concurrency and synchronization primitives and other types of approaches, you're bringing some order to this chaos; if you use synchronization primitives effectively, you can avoid some of these things. But if you don't, here's what you have to keep in mind. The threads that you run can run in any order. There are no guarantees that one thread is going to run first and another thread is going to run second. Who knows. They can be stopped and restarted at any time. They could be in the middle of adding two numbers together.
They could be in the middle of a line of code that you wrote that looks to you like it should be processed by the computer as some sort of atomic action. No. False. They can just be stopped there, and then a bunch of other stuff might happen, and then they're going to run again. Okay. And once they're stopped, they can remain stopped for arbitrary lengths of time. So these are the three things to keep in mind once you start looking at how concurrent code interleaves. You can go line by line, you can think instruction by instruction: okay, what happens if the thread stops here and another thread runs and does this thing? Is the world still a safe place if that happens? Now, in general, remember, these are good things. What this means is that the operating system is creating opportunities for multiple threads to make progress together. The reason why your thread is being stopped is so something else can run. The reason why your thread is waiting is because other threads are working. The operating system is not just trying to mess with you. This is good. And we'll come back when we talk about thread scheduling and talk about all the different ways that this can be good and the ways that this allows the system to work more efficiently. But when you're accessing shared data structures, these sorts of patterns can cause problems. All right. So this is the classic synchronization example, up in the pantheon somewhere, the shining example that everybody has to talk about. So that's the one we're going to talk about because, you know, why go against tradition? Okay. Bank accounts. Maybe the reason why this example is so popular is because it has to do with money, which everybody is happy to be concerned about correctness over. If this had to do with your grades, you'd be like, let's just fix it so I get an A every time.
Right. Let's create a race condition so that it always comes out to an A. But this is money. Okay. So here's a piece of code. What does this do? This is sort of C-like pseudocode. What does this function do? Yeah. So it modifies an account balance. It retrieves the balance using a function called get_balance, it performs a local modification to the balance, and then it writes the balance back. Pretty simple, right? So now let's talk about the ways that this can go wrong when a function like this starts to be executed concurrently. Let's say at the start of the example, I have a thousand dollars. Two of you guys are trying to make deposits at the same time. One of you is giving me a thousand dollars; that's like a B-level bribe. And the other one is giving me two thousand dollars; that's like an A-minus-level bribe. Right. I really should index these to inflation. It's probably more like 2,500. Okay. So here's the best-case scenario. The A-minus student runs, modifies the balance, and writes it back. After that happens, I have three thousand dollars. The B student runs, modifies the balance, and writes it back. After that, I have four thousand dollars. This is the correct answer. If you take a thousand and you add one thousand and two thousand to it, what you should end up with is four thousand. So we can all agree on that. Okay. So what can go wrong here? There are two different ways that this code can fail. Okay. Yeah. So here's a less happy scenario. The A-minus student runs. Remember our assumptions about threads. The A-minus student runs, retrieves the balance, modifies it locally. But before that thread can modify the shared state and update it with the new value, it gets stopped. Remember, threads can be stopped at any point in their execution. The operating system, again, hasn't been told anything about what's going on, so it doesn't know any better. It stops the A-minus student's thread and starts the B student's thread.
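The slide's code isn't reproduced in the transcript, but based on the description (read the balance, modify a local copy, write it back), a minimal C reconstruction might look like the following. The names get_balance, put_balance, and deposit come from the lecture's description; the rest is an assumption on my part.

```c
/* Hypothetical reconstruction of the slide's C-like pseudocode.
 * The shared balance sits behind get_balance/put_balance; the
 * modification happens on a thread-local copy in between. */
static int account_balance = 1000;   /* shared state: the account */

int get_balance(void) {
    return account_balance;
}

void put_balance(int balance) {
    account_balance = balance;
}

void deposit(int amount) {
    int balance = get_balance();   /* read shared state into a local */
    balance = balance + amount;    /* modify the private copy */
    put_balance(balance);          /* write it back -- the thread can be
                                      stopped anywhere in this window */
}
```

Run sequentially, two deposits of $1,000 and $2,000 against a $1,000 balance give the expected $4,000; the trouble only starts when two threads interleave inside deposit.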
So that thread gets to the same point. It has made local modifications to the balance, but it has not updated the global balance. All right. It runs put_balance. Okay. So what's the global value at this point? $2,000. And then what happens? The A-minus student is going to run. Now what's the balance? $3,000. So this is not good. This is wrong. And there's one other way that this can go wrong. In that example, the only difference is that the A-minus student writes the balance back first. And by the time I'm finished, in that case, I've added $1,000 and $2,000 to a $1,000 balance and gotten $2,000. Right. So this is a classic example of something that's called a race condition. Race condition was one of those terms that didn't make a lot of sense to me the first time I heard it and didn't really start making sense to me for about 10 years. But hopefully I can help it make sense to you, because the term is actually pretty indicative of what happens. A race condition is when the output of something unexpectedly depends on the order in which things run. Think about running a race. Sometimes one person wins, sometimes the other person wins. And if two threads are racing through the system to a result, that's not what you want, because you don't want the result to depend on who wins. And you can kind of see that this is what happens. So in this case, the B student, quote unquote, wins; it writes the last balance, and so its deposit takes effect and the A-minus student's doesn't. In the previous example, the A-minus student wins; it gets the last update to the shared state. Now again, a race condition is entirely dependent on what you expected to happen. In this case, we expected that the final balance was $4,000. That's what we thought. That's what we agreed was correct. And so either one of those other results was wrong.
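You can replay the first lost-update interleaving by hand, with each statement standing in for one step of one thread in the order the scheduler happened to run them. This is a sketch of the lecture's scenario, not real threaded code; the dollar amounts are the ones from the example.

```c
/* Hand-simulation of the bad interleaving: each statement is one step
 * of one thread, in the order the scheduler happened to run them. */
int lost_update_demo(void) {
    int shared_balance = 1000;    /* starting balance */

    int a_copy = shared_balance;  /* A- student's thread reads $1,000... */
    a_copy += 2000;               /* ...computes $3,000 locally, then is stopped */

    int b_copy = shared_balance;  /* B student's thread reads the stale $1,000 */
    b_copy += 1000;
    shared_balance = b_copy;      /* B writes back: balance is now $2,000 */

    shared_balance = a_copy;      /* A- resumes and writes $3,000:
                                     B's $1,000 deposit is lost */
    return shared_balance;
}
```

The function deterministically reproduces the $3,000 outcome; swapping which thread writes last gives the other wrong answer, $2,000.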
There are some cases where the fact that a particular value depends on what thread ran last doesn't matter, because no one is expecting it to be a particular thing. So this comes back to correctness. There was only one correct answer here. And the fact that I can get three different results out if I don't properly synchronize things is a bug, not a feature. But again, there are certain cases where I have a bunch of threads going and the last one that finishes updates some shared value and it doesn't matter, right? Because I wasn't expecting anything in particular. All right. Okay, any questions to this point? This example makes sense. I just want to point out, for people here who have been working on locks and CVs and stuff like that, this is true with your locks and CVs as well. So that lock_acquire and lock_release code that you guys have been writing, particularly things like lock_acquire and cv_wait, you have to think about 10 threads running that code at the same time, using the same lock structure, where 10 threads are trying to acquire the same lock. What should happen? What's the correct answer there? A lock is not held, 10 threads try to acquire it. What's the correct answer? One thread makes progress, and what happens to the other nine threads? They end up on the wait channel. Anything else is wrong. If they all end up sleeping, wrong. If two of them end up going ahead, wrong. If one goes ahead, eight end up on the wait channel, and one ends up doing something else, I don't know, crashing, peacing out, going and running some other function, vanishing: also wrong. So that's another example where there is one correct answer. This is why we look at your locks with you, right? Making sure that every line of code is correct. Okay. So one of the ways that we fight problems with concurrency, this illusion that multiple things are happening at once, is through another illusion, which is something called atomicity.
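As a sketch of those lock semantics, here is a user-space analogue. This is not the OS/161 kernel code, which would use a spinlock and a wait channel; here a pthread mutex guards the lock's state and a condition variable stands in for the wait channel. The point it illustrates is the invariant from the lecture: of N threads calling acquire on a free lock, exactly one proceeds and the rest sleep until release.

```c
#include <pthread.h>

/* User-space sketch of lock semantics: 'guard' plays the role of the
 * internal spinlock, 'wait' plays the role of the wait channel. */
struct lock {
    pthread_mutex_t guard;  /* protects 'held' */
    pthread_cond_t  wait;   /* where losing threads sleep */
    int held;
};

void lock_init(struct lock *lk) {
    pthread_mutex_init(&lk->guard, NULL);
    pthread_cond_init(&lk->wait, NULL);
    lk->held = 0;
}

void lock_acquire(struct lock *lk) {
    pthread_mutex_lock(&lk->guard);
    while (lk->held) {                     /* loop, don't 'if': a woken
                                              thread must re-check */
        pthread_cond_wait(&lk->wait, &lk->guard);
    }
    lk->held = 1;                          /* exactly one thread gets here
                                              per release */
    pthread_mutex_unlock(&lk->guard);
}

void lock_release(struct lock *lk) {
    pthread_mutex_lock(&lk->guard);
    lk->held = 0;
    pthread_cond_signal(&lk->wait);        /* wake one sleeper off the
                                              "channel" */
    pthread_mutex_unlock(&lk->guard);
}
```

The while loop around the wait is the detail people get wrong: a thread that wakes up has to re-check the condition, because another thread may have grabbed the lock in between.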
Atomicity is the illusion that something which actually requires multiple operations happens all at once. And in order to achieve atomicity, we have to, again, manipulate the order in which threads run. This requires either stopping certain threads at particular times or not starting threads at other times, and it gives us a little bit more control over the scheduling discipline than we had before. The ways that we solve these sorts of synchronization problems always involve using synchronization primitives to build a little bit more control into the situation than we had when we started. If we can't ensure correctness when multiple threads can run at any point in time and be stopped and swapped back and forth, then we have to start applying synchronization primitives to the problem. That's what we do. That's how you solve these problems. Okay, let's see, we're on time. Critical sections. A critical section is a piece of code that is supposed to look atomic: a part of the code that, once one thread starts executing it, other threads cannot enter. So we think about a critical section as a portion of your code where you can guarantee that there will be only one thread executing that code at any given time. From the perspective of the other threads, that critical section looks atomic. It's kind of like, if threads are watching what each other are doing, as soon as one of them gets through the critical section, all the other threads see is: oh, it's done, and all this stuff about the world changed. There's no way for me to observe some sort of intermediate state. All right, so a lot of times when it comes to these sorts of examples that have to do with access to shared state, the correct solution is to use a critical section to make multiple operations that have to happen atomically actually happen atomically.
So I look at this example, and here are some questions to ask. What's the local state that's private to the thread in this particular example? One way to start thinking about synchronization problems: you always want to be able to identify what shared state is being accessed. So what's not shared? What is the local private variable that this thread has? That's my local copy of the balance. Typically, unless you are doing something very weird, you do not need to synchronize access to local variables that the thread puts on its own stack, because those are almost never shared between multiple threads. You can share them, but don't. You can do wild and crazy things that way, but usually it doesn't make any sense. Okay. What's the shared state that's being accessed here? Yeah, so there's some balance somewhere. In this example, it's not being accessed by modifying a shared variable directly; it's being accessed through these getters and setters. But it could be; you can imagine that this is just a global variable being accessed by these threads. In this case, you assume that get balance and put balance have access to some state, and because both threads are using those functions, that state is now shared. Okay. Now, in this particular example, what series of operations that are not happening atomically have to happen atomically in order to make the example correct? Right, it's essentially the get and the put. I guess my line numbers aren't working here anymore, but it's this section right here. Once I start accessing the shared state, I can't stop until I've finished my update, because if I stop, like we saw, the shared state is in this sort of in-between place where I'm not finished accessing it but somebody else can start accessing it.
And so, in order to make this example work, I have to make sure that once I get a copy of the balance, no other thread can run until I've written my copy of the balance back. That will make this example work. All right, any questions about this? I feel like maybe for the first time I'm actually talking to an audience of people that understand this. Either that or you guys are just bored and tired; I can't tell. And it's Monday, I know. Any other questions about this piece of code? Yeah, right. So that's a great point. There is always this tension between synchronization and concurrency, because as you pointed out, in this case and in many other cases, synchronization requires taking pieces of your code and serializing them, essentially. So imagine I've got all these threads that are busy doing all sorts of stuff, and I'm making great use of the machine, and then I get to a critical section. All of a sudden, everybody has to get in line and go through one after another. And that's a good point, and it leads to this observation: to the greatest degree possible, I want to limit the size of my critical sections. If I make my critical sections too large, I really start to reduce the amount of concurrency that's available on the system. If they're too small, they won't protect the shared state. Actually, that's a great lead-in to a paper that we'll look at months from now, where a group of researchers at MIT observed that Linux doesn't scale very well once you start to run it on 32 or 64 cores. There are workloads that slow down; you don't get the amount of speedup that you would expect. And what they found is that there were a lot of cases where there was just bad locking going on, and the locking was slowing down the system. And what they were able to do is redesign parts of the code to avoid that effect.
This is pretty cool. We'll come back to that in April or May. That's a cool observation. But yes, there is this trade-off. With full concurrency, I get no guarantees about stuff like this, but I get a lot of concurrency. Once I start to serialize and synchronize things, I'm forcing threads into a particular schedule and order and limiting the amount of concurrency that can go on, but I do that to ensure safety. Yeah, good question. All right, any other questions? I think we have about one more minute, so let me just go through the requirements for a critical section before we stop; this is a good segue into Wednesday. The defining property of a critical section is that only one thread is inside it at a time. Once a thread enters the critical section, I have to decide what to do with other threads. The other requirement here is progress: eventually, if a bunch of threads are queued up waiting to enter the critical section, they should all be able to make it through, even if they have to go one at a time. And as we just talked about, I want to keep my critical sections as small as possible, because I want as much as possible to be able to run things concurrently, and I don't want to over-serialize the system. There are two main ways that I can achieve the property I want for a critical section, which is that there's one thread in at a time. One is that I don't stop: once I'm inside the critical section, I just keep running. That design pattern is essentially completely 100% broken on modern systems, but we'll talk about it anyway just for fun. The main way that I implement this is by preventing other threads from getting inside the critical section. So once I'm inside a critical section and another thread tries to get in, there are two things I can do. And you guys have actually seen both of these. What are the two things that I can do?
If one thread is inside a critical section and another thread wants to get in, what does that other thread have to do? It has to wait, and there are two ways it can wait. What's one way? It can spin: it can repeatedly be like, let me in there! And what's the other way? It can be like, when you're done, let me know. Okay, we'll come back to this on Wednesday. See you guys then. Try to submit today. Make that your goal: in the next couple hours, submit something for assignment one.