So welcome back, everyone, also on YouTube and on Moodle. For today, we will have the answers to the exercises we just did. But first I want to start with a little bit of a demotivational speech, or a motivational speech, depending on how you interpret it. So, a word about practicing. In the early 1990s, a team of psychologists in Berlin studied violin students: their practicing habits in childhood, adolescence, and adulthood. It was a long-term follow-up study. All of the people that played the violin had begun at around five years of age, and at around age eight their practice times began to diverge. Because when you're still young, your parents force you to go to violin lessons, but when you turn eight or nine, you start becoming rowdy and you don't want to do what your parents tell you anymore. By age 20, they found that the elite performers, the people who were really good, had around 10,000 hours of practice, while the less able performers had only around 4,000 hours of practice. So this study gives a clear insight, and what they actually concluded is that there are no naturally gifted performers: you can only learn to play the violin by practicing. Because, and this is a quote from their paper, if natural talent had played a role, one would have expected that some of these naturally gifted people would have reached the elite level with fewer practice hours than everyone else. And they did not find that. And this is the same thing people always say about mathematics or programming: that you have to have a born ability, an innate ability, to do mathematics or to do programming. And that's not true. There are no shortcuts. There is no natural giftedness. There is no innate ability. There is no "I am a born performer" or anything like that. The only thing that matters in the end is practice.
Without practice, you will always be at the lower end of the spectrum. So if you want to learn how to program, you just have to put in the hours. All right, next slide. So there really is no natural giftedness. And that's something you hear a lot, especially in the programming field, where people talk about things like 10x programmers, programmers who were supposedly born with an innate view into programming. And that's just not true. If you want to be good at something, you just have to practice. And I think for a lot of people this is kind of an eye opener: the hours that you put in will pay themselves back in the end. That's just what you have to do. If you really want to be a good programmer, you have to invest probably around 10,000 hours, which is a lot of time, but after investing those 10,000 hours, you will be an elite programmer. Was there not even a trend of some people learning quicker than others? Yes, there was a little trend, but only at the beginning. It's only when you first pick up an instrument that other abilities play a role, things like fine motor skills, and those played a very small role at the start. They found some covariates in their analysis: the people who improved much quicker from a young age to adolescence, so up to around age 12, showed some correlation with other things, and this was mostly sports. People who did a lot of sports besides playing the violin had a quicker upward trend. But in the end, going from eight to 20, so after 12 years, there was no correlation with these things anymore. The only really strong correlation they found was with practice hours. So you can have a little bit of a boost at the start, and the same thing holds for programming.
If you've always been interested in mathematics and mathematics was your favorite subject in school, then of course, when you start programming, you will have a little bit of a head start, because you know the Pythagorean theorem or you know how logarithms work, so you don't have to invest time to understand that part of programming anymore. But in the end, there are no shortcuts. So if you really want to become a good programmer and you want to work for Google, you just have to put in the 10,000 hours. And 10,000 hours is a very good rule of thumb: if you want to become an expert in anything, you have to put in around 10,000 hours. That's the rule of thumb for many, many different things. All right. So, because we have seen that you can't rely on a natural gift for things like programming, and because you want to practice, since everyone wants to become a good programmer, I have put extra assignments online. Besides the Moodle assignments, I also made a PDF, which I had already made before, and you can download it from my website. The R introduction PDF is based on this course: everything you learn here eventually comes from this PDF. I made this PDF during my PhD, which I did in a bioinformatics group, because people with no programming experience often came to us, from biology or from other areas within biology, for a course or a project of a few months. And to get those people up to speed very quickly, within a week or so, I made this R introduction PDF. It works just like the course: there is a little bit of text that you read through and then there are assignments that you do. So, very similar to what we are doing now. Besides that, what may make the next assignments easier is the cheat sheet.
The cheat sheet includes things that are very standard and that you can always use. It has a small example of what a for loop looks like, what a function looks like, how you select something from a matrix, how you select from a list. The cheat sheet is a single page: you can print it out and put it down next to you while you are programming. So when you have to write a while loop, you look at your cheat sheet and there you have the standard structure of a while loop ready to use. All right, so quickly summarized in English: to practice, I've uploaded additional assignments; they've already been there for some time. So do the assignments and also look at the additional assignments. Like I told you guys: practice, practice, practice, that's the only way to become a good programmer. And there are some PDFs available that I used at the University of Groningen, where we ran a bioinformatics course. That bioinformatics course was a three-month project for people that came from biology or other more or less natural-science fields. They would come to us, a purely bioinformatics group, and they would work through this R introduction PDF. It's about 30 to 35 pages, with a little bit of text and then some assignments to practice. So very similar to what we're doing now, but some people just learn better by reading instead of watching me on the live stream. Besides that, there's the cheat sheet, which I would advise everyone to download and print out, because there are a lot of little things on the cheat sheet which you can directly use the next time you're programming. So if you want to know what a while loop looks like, you look at the cheat sheet and there you see the default structure.
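For illustration, the kinds of "default structures" such a cheat sheet lists look roughly like this in R. This is my own sketch of typical cheat-sheet staples, not the actual cheat sheet from the course:

```r
# A for loop:
for (x in 1:3) {
  print(x)
}

# A while loop:
i <- 1
while (i <= 3) {
  i <- i + 1
}

# A function definition:
add_one <- function(x) {
  x + 1
}

# Selecting from a matrix (row 1, column 2) and from a list:
m <- matrix(1:6, nrow = 2)
m[1, 2]
l <- list(a = 1, b = "two")
l$a
```

The idea is exactly as described above: copy the skeleton, then fill in the parts specific to your problem.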
So you don't have to think about it: you just copy it from the cheat sheet into the text editor where you're working and then fill in the missing parts. It has things like a for loop, a while loop, what a function looks like, but also some other things which I found really useful because they would come up over and over again when I was teaching this programming course, or the bioinformatics course at the University of Groningen. All right, so that's it for this slide: practice, practice, practice. So for today, like I told you guys, we will be doing a lot of theory. There will be a whole bunch of slides as well showing you how you can do it in R, but since the exam is coming up, I want to add some theory so that I can ask you some questions about it. This is a definition that I think I got from Wikipedia or some other page that defines what an algorithm is, and I will just read the whole text for you guys because it's a long piece of text, but the highlighted parts are the things which are important. So: an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. It starts from an initial state and an initial input, which could be empty. The instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing an output and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic: some algorithms, known as randomized algorithms, incorporate random input. So in essence, an algorithm is like a cooking recipe. You start off with a finite list of well-described instructions, like ten steps to cook an egg. You start off with an initial state: I have all my eggs in the carton, the cookbook is open, I have a pan and I have butter, these kinds of things.
And then you go through a finite number of well-defined successive states: I break the eggs, I put them in a mug, I take the butter, I put it in the pan, and so on; these are all well-defined steps. In the end, after following the whole list of instructions, you end up with an output, a fried egg or a cooked egg, and that is the terminal state. So these are the very basic concepts of an algorithm: you start off with an input, then you have a finite number of steps, and then you end up with an output. That is what an algorithm is. So to summarize it, because on the exam you don't need to be able to write down this entire definition exactly as it is, this is my summary: an algorithm is a list of a discrete number of steps that, when followed, will complete a certain task. And completing a certain task also includes failure. All right, so how do we then describe algorithms? When we describe algorithms, we can visualize them, and we visualize algorithms using something called a state diagram. A state diagram is a diagram used to describe the behavior of a system. It is a visualization of an algorithm and it is very similar to a flow chart. However, a state diagram responds to activities or events, so actions and events, while a flow chart does not need explicit events. A flow chart is just: we go from A to B to C to D, and there might be branches and these kinds of things, but in a state diagram there is always an action to go from one state to the other.
So if we look at these two things here, then A and B are both more or less state diagrams, because here you go from your starting point, which is the initial state, to state one; then you have a certain action, and this action brings you to state two; then you have another action which brings you to state three; and then you have another action which brings you to the final state. In these diagrams there are a couple of things which are important. There is an initial state, and the initial state is denoted by a closed circle. The initial state always has an arrow towards your first well-defined state in the system. For example, if I'm looking at a car engine, then the initial state is that I have a car engine and the car engine should be idle. So you end up in the idle state. States are drawn as boxes with rounded edges. The actions are the transitions from one state to the other; in this case, it is turning the key. That is an action that you perform, and it brings you from the idle state into the running state. From the running state you could go back to the idle state using an action, but there is also a possibility to end up in the final state. So here, turning on an engine starts in the initial state, which means that the engine is idle; you then turn the key, which makes the engine run; and then you are in the final state, because that is where you wanted to go. Of course, this is a little bit nonsensical when you only have two states, but state diagrams can become very complex. They're very useful because they allow you to subdivide the thing that you're going to do. An ATM, an automated teller machine, is a very complex machine.
The automated teller machine is a very complex machine, and to make sure that it always works, you have to think beforehand about what the system you are designing is supposed to do. Then you write down everything that you know in one of these state diagrams, and you go through the state diagram and see if every possible eventuality is covered. So a bank ATM more or less looks like this. We begin at the initial state: we have an ATM machine and the thing is off. What happens when the first action occurs? The first action, of course, is turning it on. Turning it on brings you to another state called self-test, because the first thing that you need to make sure of is that everything is okay. We can't just turn on an ATM and then start handing out money; we need to do a self-test first. So the self-test runs, and there are two possible transitions. There is a failure condition, something is not working correctly, and then it goes to an out-of-service state. You can get out of the out-of-service state in two ways: you can shut down, which brings you back to the off state, but there is also the possibility to go from the out-of-service state into maintenance mode. A mechanic working on the ATM will insert a special card, and this card puts the machine into maintenance mode. If the maintenance succeeds, it goes back to the self-test and the self-test runs again. If the self-test fails, you go back to the out-of-service state and then you can service it again in maintenance mode. So you see that there's a whole bunch of loops which can continue over and over again. Of course, if the self-test succeeds, you go to an idle state, and from the idle state you could also service the machine.
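The ATM diagram just described can be sketched as a small state machine in R. The state names and the transition table below are my own assumptions made for illustration, following the states mentioned above:

```r
# Minimal state-machine sketch of the ATM example.
# Each state maps an event name to the next state.
transitions <- list(
  off            = list(turn_on = "self_test"),
  self_test      = list(success = "idle", failure = "out_of_service"),
  out_of_service = list(shut_down = "off", service = "maintenance"),
  maintenance    = list(done = "self_test"),
  idle           = list(card_inserted = "serving_customer",
                        service = "maintenance")
)

# Perform one transition; unknown events are an error.
step <- function(state, event) {
  next_state <- transitions[[state]][[event]]
  if (is.null(next_state)) stop("no transition for this event")
  next_state
}

# Follow one path through the diagram: off -> self_test -> idle
state <- step("off", "turn_on")
state <- step(state, "success")
```

Writing the table out like this is exactly the exercise the diagram forces on you: every possible event in every state has to be accounted for, or `step` fails.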
So there's a possibility that nothing went wrong with the machine, but you still want to service it. These kinds of diagrams force you to reason very explicitly about what you want your machine to do, what kind of software you should write, and how the machine more or less should work. And when you are in the idle state, a card being inserted is an event, so an action, which brings you to the serving-customer sub-structure, where there are all kinds of steps that you go through, and which again has a starting point and an ending state. So this is a separate algorithm within the ATM algorithm: when we are designing an ATM, we have to be aware that we have two different algorithms, one algorithm which runs the machine and one algorithm which serves the customer. So this state machine allows us to reason beforehand about what we should do and what will happen with the machine. All right, so a very basic for each loop can be represented in one of these state diagrams as well. If we have for x in A, and then we call a certain function on x, then we have a starting state. We have A; if A is not empty, we take the first element of A, call it x, call the function on x, and then go to the next element of A. If A is not empty, we continue going through this loop over and over again, until we end up in the state where A is empty, and then we are in the final state: we are done. Right, so if we look at a very specific example like for x in 1 to 10: what am I doing? I'm taking column x of my matrix, adding one to it, and then storing it back in column x. So the initial state is: my matrix is filled with numbers. There is a finite number of successive states, in this case ten.
Each successive state means that the values in column x of my matrix are increased by one, and the final state is: columns one to ten contain values increased by one. So this is how an algorithm works. It forces you to be really, really explicit about what's going on, but this will increase your understanding of what your code is doing a lot, and it allows you to more or less subdivide code into different parts. Going back to the ATM machine: you could hire someone who writes the state machine for the ATM, and you could hire someone else who does the serving-customer part. So if you have a big software project, this allows you to subdivide the project into logically independent units, which can then be assigned to different groups, which might not even know each other or might never meet. So that is the nice advantage of a state diagram. And the nice thing is that in R, the for each loop is modeled very closely on natural language; that's one of the advantages of R, because "for x in 1 to the number of columns of m" is very logical, and "for x in a vector a, b, and c", these things are very, very close to natural English. So if you're an English speaker, R is a really understandable language, because the instructions that you write are very similar to how you would say them out loud, so to how you would more or less define your algorithm. R also allows you to take your cookbook and more or less take the instructions directly from the cookbook and put them into R, without having to modify many, many different parts. So for example, we can have a while loop. We say: while A is true, do something. And of course, this is less natural, because with "while something is true, do something" we don't know how often; in this case, we don't know how many state transitions there will be.
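Coming back to the for loop over the matrix columns: that example can be written out directly in R. The matrix here is small made-up data, just to show the initial state, the ten successive states, and the final state:

```r
# Initial state: a 3x10 matrix filled with numbers (illustrative data).
m <- matrix(1:30, nrow = 3, ncol = 10)
before <- m

# Ten successive states: for each column x, increase its values
# by one and store them back in column x.
for (x in 1:ncol(m)) {
  m[, x] <- m[, x] + 1
}

# Final state: columns one to ten contain values increased by one.
```

Each pass through the loop body is exactly one state transition in the diagram, and the loop terminates because the number of columns is finite.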
We know that there probably is going to be a finite list, because otherwise we end up in an infinite loop, and an infinite loop itself, by the definition of an algorithm, is not an algorithm, because there has to be a finite number of transitions from one state to the other. So a while loop can by itself create something which is not an algorithm, which is quite funny in a way. But the while loop, in my mind, is less natural, because we never say "while something is true, do something"; we talk about the things that we need to do and then describe them. So, but here we have the while loop. Again, the state diagram looks very similar to the one that we saw before: we have our beginning state, we have A; if A is true, then we call our function; if A is false, then we end up in the final state. So remember that an algorithm is an effective method: a finite list of well-defined instructions, an initial state and initial input, a finite number of successive states, and then producing an output and terminating at the final ending state. So something which does not terminate is by definition not an algorithm, and a loop which is "while true" does not, by itself, define an algorithm. So when we are defining algorithms, we can classify them in several ways: by implementation or by design paradigm. By implementation means, for example, that we can use recursion or we can use iteration. Iteration means using a for loop, for x in 1 to 10. Recursion is something that we will come back to during the lecture; it means that you have a function which calls itself, and in the end, by calling itself a number of times, it ends up at a well-defined state which gives you an answer, which then bubbles up through the entire chain of function calls. You can also have logical implementations.
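To make the iteration-versus-recursion distinction concrete, here is the same computation, a factorial, written both ways. This is my own sketch, not an example from the slides:

```r
# Iteration: a for loop accumulates the product step by step.
factorial_iter <- function(n) {
  result <- 1
  for (x in 1:n) {
    result <- result * x
  }
  result
}

# Recursion: the function calls itself until it reaches the
# well-defined base case n <= 1, and the answer then bubbles
# back up through the chain of function calls.
factorial_rec <- function(n) {
  if (n <= 1) return(1)
  n * factorial_rec(n - 1)
}
```

Both are algorithms in the sense defined above: a finite number of well-defined steps ending in an output. The base case is what guarantees the recursion terminates.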
So an algorithm can be a logical algorithm, meaning that it is based on Boolean operators. We can define an algorithm as being serial, so things happen one after another, but nowadays we also have parallel algorithms: if you have a multi-core computer, you could have an algorithm which splits up a job so that half of the job is done on CPU one and the other half on CPU two. We also have distributed algorithms. Distributed means that you use, for example, ten computers across Germany; every computer does part of the task and then they communicate back to a server. The difference with parallel is that a parallel algorithm still runs on a single machine but splits up the task over different CPUs, while a distributed algorithm is distributed over different computers, and on each computer the work can be done in parallel again. So you could have a parallel distributed algorithm as well. We also have deterministic versus non-deterministic. Deterministic means that given a certain input, you will always get the same output, but an algorithm can also be non-deterministic, meaning that, for example, random numbers are used inside the algorithm to find a local optimal solution instead of a deterministic solution. Again, this ties in with wanting an exact algorithm or an approximation. And of course, nowadays we also have quantum algorithms, like Shor's algorithm, which run on, or are simulated on, a quantum computer. Design paradigms are another way to classify our algorithms into different groups, and these can be mixed and matched with the implementation classification. So we can have, for example, a serial brute force approach. I think everyone knows what a brute force approach is.
So imagine that I'm trying to log into someone else's Google account. A brute force approach would be to just try all possibilities: I first type in zero, then one, then two for the password, and I just continue until I have exhausted everything. Brute force is also called exhaustive search, although an exhaustive search is slightly different, because there you generally don't try every possibility but every logical possibility, which might mean that you exclude some situations. Why? Because you know, for example, that Google has a minimum password requirement of six letters or numbers, so you are not going to try a password which is only three letters long. So the difference is: brute force just tries everything, without looking at the circumstances surrounding what you are trying to do, while an exhaustive search is more or less the same thing but takes some limitations of the system into account, for example that the password needs to be longer than five and shorter than twelve characters. Brute force won't care; exhaustive search will take that into account. Then we have the divide and conquer paradigm, and this is very commonly used in sorting. What you do is: when I have a thousand numbers that I need to sort, I say, well, no, I don't want to sort a thousand numbers; I'm going to sort 500 numbers, and then I'm going to sort the other 500 numbers as well. So I'm not going to sort a thousand numbers at once; I divide the problem, and every time you subdivide the problem into smaller parts and then solve these smaller parts. One of the examples here is sorting, because sorting algorithms like merge sort work by divide and conquer.
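Merge sort is the classic divide and conquer example: split the numbers in half, sort each half, then merge the sorted halves. A compact sketch in R, written for clarity rather than speed:

```r
# Divide and conquer sorting: halve, sort each half recursively, merge.
merge_sort <- function(v) {
  if (length(v) <= 1) return(v)        # base case: trivially sorted
  mid   <- length(v) %/% 2
  left  <- merge_sort(v[1:mid])        # divide: sort the first half
  right <- merge_sort(v[(mid + 1):length(v)])  # ... and the second half
  # Conquer: merge the two sorted halves into one sorted vector.
  out <- numeric(0)
  while (length(left) > 0 && length(right) > 0) {
    if (left[1] <= right[1]) {
      out <- c(out, left[1]);  left  <- left[-1]
    } else {
      out <- c(out, right[1]); right <- right[-1]
    }
  }
  c(out, left, right)                  # append whatever half is left over
}
```

Each recursion level halves the problem, which is exactly the "half as hard, then one fourth" argument made below.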
And divide and conquer is a very valid strategy because you make the problem smaller: for example, you divide the problem in half and then you have a problem which is half as hard to solve, and then you subdivide again and you have a problem which is only one fourth of the original problem. We can have search and enumeration algorithms; chess is an example. When you play chess against the computer, the computer can do search and enumeration, which means that it looks at all possible states from the current board. We have a certain board layout and it will just try to move every piece: it will enumerate all the possible options and then search through these options to find the one which has the best outcome for the computer. So that's how a chess AI can work. Furthermore, we have randomized algorithms, of which Monte Carlo algorithms are a good example, and we will see how you can perform a Monte Carlo procedure using a washing machine and Lego blocks, which is actually an active field of research. And then we have reduction of complexity, which is very similar to the divide and conquer approach, where we try to reduce the complexity of the problem by making it smaller or by redefining the question. And of course these things can be mixed and matched. For me, I don't need you guys to know exactly what divide and conquer is, but be aware that if you have an algorithm, or have written one, you can classify algorithms by implementation or by the design paradigm behind it. And we can use these things to come up with general cooking recipes. These are called design patterns: design paradigms are ways to do it, and after things have been solved multiple times, we end up with a design pattern.
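As a concrete sketch of a randomized algorithm, here is the standard Monte Carlo estimate of pi: sample random points in the unit square, and the fraction falling inside the quarter circle approximates pi/4. This is my own example, not one from the slides:

```r
set.seed(42)                     # fix the random input for reproducibility
n <- 100000
x <- runif(n)                    # n random points in the unit square
y <- runif(n)
inside <- x^2 + y^2 <= 1         # which points fall inside the quarter circle
pi_estimate <- 4 * mean(inside)  # approximates pi
```

Note the non-determinism: without `set.seed`, every run gives a slightly different answer. The algorithm trades exactness for a cheap approximation that improves as n grows.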
So a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. For example, logging into a website: that is a very common problem, and it is a solved problem in a way, because people have been writing logins for websites for dozens and dozens of years, so you don't have to reinvent the wheel every time. The definition of a design pattern is that it's not a finished design that can be transformed directly into source or machine code, but a template on how to solve a problem that can be used in many different situations. Logging into something is something we've done before, so we know how to implement it; there are more or less design patterns for logging into a system, but there are also design patterns for how to write software so that it is easily extendable. There are books which just collect all of these design patterns. And just remember that a design pattern itself is not code, but a template which can be transformed into code relatively easily. So why do we use design patterns in software engineering? To avoid reinventing the wheel: a design pattern is a best practice that has been proven to work over the last 60 years that we have had computers. So what are some good examples of design patterns? There is, for example, the model view controller pattern; we have the Monte Carlo sampling pattern; we have the facade pattern; and we have, for example, the scheduling pattern. The idea behind having a design pattern is that many, many smart people have thought about some of these problems and come up with solutions, and from all of the solutions that the experts came up with, we have extracted a template. Using this template, you can't really go wrong.
The only thing that you have to do when you pick up a new language is to take the design pattern that you already have and re-implement that pattern in the new language. So we will be looking at these four design patterns: what is a model view controller pattern, what is a Monte Carlo sampling pattern, what is a facade, and how does a scheduler work? One of the most commonly used design patterns, especially in web design and user interface design, is the model view controller pattern. It's used a lot in web development, but also for GUIs: if you use Windows, then your Windows programs also generally follow a model view controller structure. So how does it work? You have a data model, which is called the model, and the data model knows nothing: it just has the data in there. It describes what the data looks like. For example, at Facebook, data might be stored in a single table indexed by the username, and the username is coupled to a column which is the age of the user, and then something which is the occupation of the user. So it's just a big table where every column is more or less described. The model does not know anything about the view, and it does not know about the controller. Then we have the view. The view is the data presentation layer. The view, of course, needs to know what the model looks like, because, for example, a certain view on Facebook might be your own homepage: your own homepage looks a certain way, you have your photo there, then your name, your occupation, and all of these things, and for this the view needs to know the model. But the view doesn't know the controller. The controller is the application logic.
For example, when you look at your own page and you are logged in, you see a different Facebook than when you are logged out, because you are the owner of the page. So the controller determines which view you see. All the logic, if logged in, then show your page with your occupation and your photo and everything; else, if logged in as a friend, show the page as a friend would see it; else, if not logged in, show the page as an anonymous user would see it, all of this logic goes into a third kind of file. So you can think about this as files: you have a model file, which is just a big database or a big table, could be a comma-separated file. Then you have the view, which takes the comma-separated file and presents it in a certain way, for example a certain HTML view for being logged in, or being a user, or the friend view. And then you have the controller, and the controller knows both the model and the view, so it can decide: if you are logged in as a friend, then I take the friend view; if you are logged in but not friends with the person, then I take the anonymous view of this user. And by doing this, you separate these components from each other and make them more or less independent. Because of this, we can change the model, say, add two columns, and nothing in the rest of the system has to change, because the view that you had is still valid even though there's an additional column, since the view didn't display this column. When we drop a column, then of course we do need to update the view and the controller as well. But the idea is to separate things: let the data be just the data, let the presentation layer be just the presentation layer, and keep all the application logic in something called the controller.
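A toy sketch of this model/view/controller split in R. The user data and the logged-in rule are made-up assumptions, just to show how the three components stay independent:

```r
# Model: just the data, knows nothing about presentation or logic.
model <- data.frame(username = "alice", age = 30,
                    occupation = "biologist")

# Views: presentation only, no application logic inside.
view_owner <- function(m) paste(m$username, "-", m$age, "-", m$occupation)
view_anon  <- function(m) paste(m$username)   # anonymous users see less

# Controller: knows both model and views, and holds the logic rule
# deciding which view is shown.
controller <- function(m, logged_in) {
  if (logged_in) view_owner(m) else view_anon(m)
}

controller(model, logged_in = TRUE)
controller(model, logged_in = FALSE)
```

Adding a column to `model` breaks nothing here, because neither view displays it; that is exactly the independence argument made above.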
So when you do web design, this is generally how code is structured. You have the model, which is generally the database containing the data. You have the view, which is the HTML template describing how this data should be presented to the user. And then you have the controller, which can be JavaScript or PHP, which couples these two things together and makes sure that when you are not logged in, you're not allowed to see certain data, or selects which view you are seeing based on your current status. So why do people use the model-view-controller? It separates the data model from the view, so you can add new views very easily, and there's a separation between the view and the controller, which means the user interacts via a view, but the views don't do any computation. This improves security a lot, because you don't have to put all kinds of smart logic into the view; it's all handled by the controller. The controller just has a rule: if you are logged in and a friend of someone, then you are allowed to use the friend view; if you are logged in but you're not friends, then you are using the anonymous view. That improves security because you only have to define these logic rules once instead of having to redefine them over and over again. All right, another design pattern is Monte Carlo sampling. Monte Carlo sampling is used a lot in things like genetics, economics, and agriculture, but also in weather prediction. The idea is to randomly sample from everything that might happen in a complex event and then determine from that which outcomes are most likely to happen. So the way that I always try to explain this is by using a Monte Carlo sampling device.
So this is real scientific research, and this is something that people generally laugh about, because people don't think that scientific research can be done with a washing machine and Lego blocks, but there is actually a very strong connection between genetics, biology, and using a washing machine and Lego blocks. So what do you need to build your own Monte Carlo sampling device? You need Lego and a washing machine. These things randomly sample, and then you have to evaluate the outcome, right? So imagine that I have just a box of Lego. I throw it in the washing machine and I set it to do 30 spin cycles. After it has spun 30 times, I open up the washing machine and I see which kinds of blocks have stuck together. And then I can ask: if I have these blocks and I put them in a washing machine, how often do I get this structure? How often do I get these other structures? The idea here is that you can even write papers about this: "Random Structures from Lego Bricks, and Analog Monte Carlo Procedures", written by Ingo Althöfer, and this is actually a pretty well-cited paper. So by doing this, you can really do research. And the abstract is pretty funny: "Recently we discovered a phenomenon: when filled with many single Lego blocks, a washing machine generates random complexes", right? Because the Lego blocks will randomly stick to each other. "This generation process can be viewed as a parallel analog Monte Carlo procedure. It may be used for discovering new Lego structures and for interactive generative design. This report is preliminary and tentative." But there are a lot of people that do this, and it's a very fun thing to do, because you can calculate the probability of two blocks coming together or not. And of course, this has a direct parallel to things like protein folding, right? Which proteins will interact with each other?
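A digital version of the washing-machine experiment is easy to sketch. The blocks and the "sticking" rule below are invented purely for illustration; the Monte Carlo part is the loop: repeat a random experiment many times and count how often each structure occurs:

```python
import random
from collections import Counter

# Hypothetical blocks (size, colour) and a toy sticking rule:
# two blocks bond only if they have the same stud size.
blocks = [("2x4", "red"), ("2x4", "blue"), ("2x2", "red"), ("1x2", "blue")]

def spin_cycle(rng):
    # one "spin": draw two blocks at random and see if they stick
    a, b = rng.sample(blocks, 2)
    if a[0] == b[0]:                       # same stud size -> they bond
        return tuple(sorted([a, b]))
    return None

# Monte Carlo sampling: repeat the random experiment many times
# and count how often each structure occurs.
rng = random.Random(42)
counts = Counter()
n_trials = 10_000
for _ in range(n_trials):
    structure = spin_cycle(rng)
    if structure is not None:
        counts[structure] += 1

# Relative frequencies estimate the probability of each structure.
for structure, n in counts.most_common():
    print(structure, n / n_trials)
```

With these four blocks, only the two 2x4 bricks can ever bond, and their pairing should show up in roughly 1/6 of the trials, which is exactly what the observed frequency converges to.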
Which proteins will couple to each other and which will not? And you can simulate this by using Lego blocks, or not so much Lego blocks, but blocks which have a very similar structure. So things like docking algorithms, where you have, for example, a virus and you're trying to dock all kinds of antibodies against this virus, work in the same way. It's kind of like random Lego blocks: you dump a whole bunch of viruses in a washing machine, then you dump a whole bunch of antibodies in there, and then you see what sticks. And of course, you do this computationally, but you could do it in the real world with Lego blocks. And this is called Monte Carlo sampling. So you just write down which inputs you have, you write down how they can more or less interact with each other, then you randomly generate structures based on the rules that you have, and you observe which structures occur more often than other structures. So there's a direct parallel between how these algorithms work for drug discovery and putting Lego blocks into a washing machine. All right, so we've been busy for 41 minutes. So I will do the facade pattern as well. The facade pattern is an interesting pattern and it is used a lot. For example, not Twitch, but the other one with a T, Twitter. Twitter uses a facade, right? So Twitter uses a facade to provide a stable call interface. Imagine that we have two clients: for example, me and Daniel. And me and Daniel want to do something. What do we want to do? Well, we want to do the things that Twitter allows us to do. Twitter, for example, has a function which is "do tweet", and this generates a tweet. And it has a function called "add friend", and it has a function "remove friend". And I can write a Twitter client, and Daniel can write a Twitter client as well.
And this client just contacts Twitter and then calls the add friend function. And we don't care how this thing is implemented on Twitter's side, because as long as Twitter offers us the add friend function and we call it, we don't care whether Twitter uses Java, or PHP, or some other programming language; we don't need to know about that. So the facade allows a company to provide a set of tools or functions to a user, and by providing a stable interface towards the user, the company is free to switch its own backend. If Twitter wants to move from Java to PHP, it is free to do so, because for us developers who make things like Twitter bots or tweet decks and these kinds of software, the stable interface is the thing that matters here. Daniel might just join soon because you mentioned him so often. I'm just doing that to make him join, like, next week. So there's just this function that you call: you go to www.twitter.com/add_friend, you provide the name of the friend that you want to add and your own user ID, and then Twitter does the work of adding the friend. But we as developers don't have to care how Twitter does that, because the facade is very simple: they just have a very simple shell which is called twitter.com/add_friend. You summon him if you say his name three times in a row, right? Peter Aarons, Peter Aarons, Peter Aarons. All right, so the facade pattern is a very common pattern. It provides some very high-level functions, like an add friend function. The implementation is unknown to the people using the facade, but the behavior is defined.
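The same idea can be sketched in a few lines. Everything here, the class names, the internal storage, the return values, is invented for illustration and is not Twitter's real API; what matters is that clients only ever see the small, stable surface:

```python
# Messy internal subsystems the outside world should never see.
class _FriendDatabase:
    def __init__(self):
        self._friends = {}   # user_id -> set of friend names

    def insert(self, user_id, name):
        self._friends.setdefault(user_id, set()).add(name)

class _Timeline:
    def __init__(self):
        self._tweets = []

    def append(self, user_id, text):
        self._tweets.append((user_id, text))

# The facade: a small, stable set of high-level calls.
# Clients only ever talk to this class, so everything behind it
# can be rewritten (Java to PHP, one database to another) without
# breaking a single client.
class TwitterFacade:
    def __init__(self):
        self._db = _FriendDatabase()
        self._timeline = _Timeline()

    def add_friend(self, user_id, name):
        self._db.insert(user_id, name)
        return "ok"

    def do_tweet(self, user_id, text):
        self._timeline.append(user_id, text)
        return "ok"

# A client depends only on the facade's interface:
twitter = TwitterFacade()
print(twitter.add_friend("daniel", "peter"))  # -> ok
print(twitter.do_tweet("daniel", "hello"))    # -> ok
```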
So there are advantages: you provide stable interfaces for external tools and applications, like the many Twitter tools, because there are a lot of different apps that use Twitter; you don't have to use the standard Twitter app, you can also use things like tweet decks or online Twitter tools. So there's this stable interface, and it is easy to change the implementation behind a facade. As a company, Twitter is offering a facade so that they are free to choose what they want to do in the back. All right, so the last design pattern before we take a break again is scheduling. And I think many students have their own scheduling algorithms, right? So scheduling is a way to schedule tasks in a certain timeframe. For example, I need to learn for five exams this semester. So those are the tasks. For this, I have two weeks, because in two weeks the exams start, and then you can use different scheduling algorithms. The first one is the most common one, which is FIFO scheduling: first in, first out. You look at your schedule and you see, well, the bioinformatics exam is first, so I'm going to learn first for the bioinformatics exam. Then the second exam that I have is programming in R, so after I've learned for the first exam, I'm going to learn for the second exam. So first in, first out just means the first thing that arrives is the first thing I do. That's FIFO. Earliest deadline first is another way of scheduling. And all of these, of course, are design patterns. You can just pick up a book and read how to implement FIFO, how to implement earliest deadline first, how to implement SRT, shortest remaining time. You don't have to redesign these algorithms from the ground up. No, you just take a book of design patterns, and there will be a chapter called FIFO.
You go to the chapter, and there, in pseudocode, is written down which steps you need to take to implement a FIFO scheduler. With earliest deadline first, whenever an event is scheduled, a queue is searched for the process closest to its deadline, and that process will be the next to be scheduled for execution. And this only works when you know how long things will take. But the idea here is: well, I have three exams, but for exam number two, I have already learned like 90%. So I'm first going to learn the remaining 10% of that one, and then I'm going to switch to number three, because I already read 80% of the book there. And only then am I going to do the first one, because I haven't learned anything for that one yet. Shortest remaining time is the strategy where the scheduler arranges the processes with the least estimated processing time remaining to be next in the queue. So it's very similar to earliest deadline first, but here we just look at the time that we still need to invest, and the things that take the least amount of time are the ones that we're going to do first. And then we have round-robin scheduling, and this is a way of scheduling that says: no, if I'm going to learn for five exams, then every hour I am going to take 12 minutes and learn 12 minutes for the first exam, 12 minutes for the second exam, 12 minutes for the third exam, and so on, until I've finished my hour, and then I start again. So it's just investing an equal amount of time in each of the exams and skipping from one to another. It's not the best way for exams, but for computers scheduling jobs or processes, this is a very common way of scheduling. So just give everything 12 minutes. And of course, what if you finish before these 12 minutes are up, right?
If I'm learning for exam number three and after six minutes I'm done, then I'm going to do nothing for six minutes, because that's the way that I planned my schedule. So I'm just going to assign 12 minutes to every exam, and if I finish before the 12 minutes are up, I'm still going to idle for the rest of the time. All right, so we will stop here. I want to talk a lot more about functions; I have like 15 slides left. So we'll first do a break. It's four o'clock, so I've been talking again for two hours and four minutes. So we're going to do a short break of like 10, 15 minutes, so I will be back at like 4:10, 4:15. I have forgotten which animals are there for break number two. But I will first stop the re...
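The four strategies above can be sketched on the exam example. The exam names, deadlines, and remaining study hours below are made up for illustration; each function only decides the order (or the slicing) in which the tasks get worked on:

```python
# Each (hypothetical) exam has a deadline (days from now) and an
# estimate of how many hours of study remain.
exams = [
    {"name": "bioinformatics", "deadline": 3, "remaining": 10},
    {"name": "programming in R", "deadline": 5, "remaining": 2},
    {"name": "statistics", "deadline": 4, "remaining": 6},
]

# FIFO: handle tasks in the order they arrived.
def fifo(tasks):
    return [t["name"] for t in tasks]

# Earliest deadline first: the task closest to its deadline goes first.
def earliest_deadline_first(tasks):
    return [t["name"] for t in sorted(tasks, key=lambda t: t["deadline"])]

# Shortest remaining time: the task needing the least work goes first.
def shortest_remaining_time(tasks):
    return [t["name"] for t in sorted(tasks, key=lambda t: t["remaining"])]

# Round-robin: give every unfinished task a fixed time slice,
# cycling through them until all work is done.
def round_robin(tasks, slice_hours=2):
    remaining = {t["name"]: t["remaining"] for t in tasks}
    schedule = []
    while any(h > 0 for h in remaining.values()):
        for t in tasks:
            if remaining[t["name"]] > 0:
                schedule.append(t["name"])
                remaining[t["name"]] -= slice_hours
    return schedule

print(fifo(exams))
print(earliest_deadline_first(exams))
print(shortest_remaining_time(exams))
print(round_robin(exams))
```

Note how the same task list produces different orders: EDF picks bioinformatics first (deadline in 3 days), while SRT picks programming in R first (only 2 hours of work left), which mirrors the "I already learned 90%" reasoning from the lecture.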