Yeah, I'm not one of the authors. Kanthi wasn't able to make it this time, so he asked me whether I could present this paper on his behalf, and I'll try to do it as much justice as I can. The paper is about the preemptive version of the classic resource allocation problem, by Kanthi Sarpatwar, Baruch Schieber, and Hadas Shachnai.

Let's get into the basic definitions, what the setup looks like. It's a resource allocation problem, so we have jobs and we have machines, and our task is to schedule these jobs on the machines. What attributes does each job have? Each job comes with a resource requirement, which is the amount of resource it needs per unit of time while it is being processed, and a processing requirement, the time the job needs to be completed. So the total resource a job requires to be completed is the product of its processing time and its resource requirement per unit of time. For the rest of the talk it will be more convenient to look at a job as a rectangle, where the horizontal length is the processing time and the vertical length is the resource requirement per unit of time. Every job also has a release date and a deadline, a window within which it has to be scheduled.

As for the machines, they are all identical, and each machine has a resource capacity per unit of time; you can visualize a machine as this gray slab over here. We will be talking about multiple machines, but for most of the talk this will not make a difference: because all the machines are identical, with exactly the same resource capacity per unit of time, you can stack all the machines up and look at them as one single rectangle.

So let's look at this setup. You have the machine, this big gray slab, and on the left-hand side you have the jobs, the colored rectangles, and these indicate the time window within which each job has to be scheduled. For instance, the yellow job is, in a way, the least urgent one, because it has the largest gap between its release time and its deadline. On the other hand, something like the green job has to be scheduled almost immediately, because the difference between the length of its window and its processing time is zero.

So this is the setup. Now let me show you what a feasible allocation, a feasible schedule, looks like, and once that is defined we can talk about the objective of the problem. The first constraint is that every job has to be scheduled within its time window; for instance, I schedule this green job within its window. You can schedule multiple jobs at the same time, like the yellow, orange, red, purple, and gray ones here, as long as the resource constraints are not violated: at any instant in time, the total amount of resource required by all the jobs running at that instant must be at most the resource available. This gives you more flexibility to schedule more jobs, but you have to respect the resource constraint at every instant.
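To make the rectangle picture concrete, here is a small Python sketch of that feasibility check (my own illustration, not code from the paper; the names `Job` and `is_feasible` and the event-sweep are made up). A schedule assigns each scheduled job a set of time pieces inside its window, and at every instant the total per-unit resource requirement of the running jobs must stay within the pooled capacity of the m identical machines.

```python
from dataclasses import dataclass

@dataclass
class Job:
    p: float          # processing time (horizontal length of the rectangle)
    s: float          # resource requirement per unit of time (vertical length)
    release: float    # release date
    deadline: float   # deadline
    weight: float = 1.0

def is_feasible(schedule, jobs, m, capacity=1.0, eps=1e-9):
    """Check a (possibly preemptive) schedule against the resource constraint.

    schedule[j] is a list of (start, end) pieces during which job j runs;
    the m identical machines are treated as one pooled capacity m * capacity.
    """
    events = []
    for j, pieces in schedule.items():
        job = jobs[j]
        done = 0.0
        for (a, b) in pieces:
            # every piece must lie inside the job's window
            if a < job.release - eps or b > job.deadline + eps:
                return False
            done += b - a
            events.append((a, +job.s))   # job starts consuming s units of resource
            events.append((b, -job.s))   # ... and stops
        if done < job.p - eps:           # a scheduled job must get its full processing time
            return False
    # sweep over time; at shared endpoints, process ends (negative deltas) first
    load = 0.0
    for t, delta in sorted(events, key=lambda e: (e[0], e[1])):
        load += delta
        if load > m * capacity + eps:
            return False
    return True
```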
And since this is the preemptive version, you can preempt a job and migrate it, not just to different time slots but also across different machines. Pictorially this makes no difference, because, as I said, you can stack all the machines together. So for instance this yellow job: I could process it for a while at the beginning and then process the rest of it later. This is what I will refer to as a feasible schedule.

Having said all of this, the paper looks at two variants: the maximum throughput variant and the machine minimization variant; I will define them in a minute. Throughout the talk I will only focus on the first one; for the second I will just state the results and some related open problems. Kanthi only sent me slides for the first one, so we will be focusing on the maximum throughput variant.

So what is the maximum throughput variant? Every job comes with a weight, telling us how significant it is, and the machines are fixed. Our goal is to feasibly schedule a subset of the jobs so as to maximize the total weight of the jobs that get scheduled, so roughly, to schedule the important jobs. This is exactly the preemptive version of the classic resource allocation problem. How does it differ from the classic version? First, you are allowed to preempt jobs. But also, in the classic resource allocation problem, as we will see on the next slide, all the jobs have the same resource requirement, so all of these rectangles have the same vertical length, and as a result it does not really make sense to schedule multiple jobs at the same time. That extra flexibility is what this model allows. There are, however, certain assumptions on the model under which the algorithm works, which we will also see.

The other problem is the machine minimization variant. It has more of a bin packing flavor: you need to schedule all the jobs, and you want to minimize the number of identical machines that you need. The authors also consider a more general variant where the resource requirement of a job is not just a scalar but a d-dimensional vector, and the resource constraint has to be respected in every dimension; this is much like a generalization of what is known as the vector packing problem.

Having said that, let me quickly go through some related work. Most previous work, as I said, considered the special case where every job has an identical resource requirement at any instant in time. For a single machine, it has been known for about thirty years that there is a polynomial-time approximation scheme; around 2001 a 1/6-approximation was given. If you consider the further special case where you can ignore the job windows entirely, so all jobs have the same release date and due date, this can be improved to one third minus epsilon. For the non-preemptive version of the problem there has been a lot of work, and the currently best known approximation is one half minus epsilon. Okay, now for the other variant, the machine minimization problem.
As I said, it is a generalization of what is called the vector packing problem, which is like a bin packing problem where every item has sizes in several dimensions, and you need to minimize the number of bins required to pack everything. For this there are approximations whose ratio depends on the dimension, an O(log d) approximation, and quite recently this has been improved further. Anyway, I won't spend much time on this variant, because we won't be talking about it.

So what is the contribution of this paper? It considers the preemptive version of the resource allocation problem and generalizes it by allowing jobs to have different resource requirements, so that multiple jobs can be scheduled at any point in time. However, it makes one crucial assumption, a slackness assumption, which roughly says that no job requires immediate attention. The slackness lambda compares the processing time of a job with the length of its window. For instance, in our first example, lambda was actually one for the green job: as soon as it is released, it has to be scheduled immediately. The algorithm this paper presents works mostly in the case where lambda is significantly small, so you have a fairly large window in which to schedule each job and you don't need to do it right away.

Having said that, the paper proceeds in two stages. First it considers the special case where the job windows form a laminar family. By a laminar family I mean that if you look at any two intervals and they intersect, then one of them is contained in the other. In particular, given all the intervals, you get a rooted tree structure on them. For this special case they give an approximation factor of 1/2 minus lambda times (1/2 + 1/m), where m is the number of machines. As you can see, if lambda is close to 1, which it could be for general instances, this guarantee is useless; the approximation factor is only meaningful for slackness between 0 and 1 - 2/(m+2).

The authors then remove the laminar assumption, and you lose something both in the approximation factor and in the allowed slackness: earlier you could allow slackness close to 1 if you have many machines, but now the maximum slackness you can allow is about one fourth. You still get an approximation algorithm, though; for very small slackness, for instance, you get roughly a 1/8-approximation for the general case. In the rest of the talk I will focus on the laminar case and how they obtain that approximation.

These are the results they get for the other problem, the machine minimization variant, where they have a logarithmic approximation algorithm. Again there are two crucial assumptions, one on the slackness parameter, as before, and one here on the window sizes; you don't need to know what this T is, it's a bit technical. You can remove the assumption on the windows, but then you get a slight blow-up in the approximation. But anyway, we won't be considering this variant.
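Since the laminar structure of the windows is what drives the first algorithm, here is a tiny sketch of what it means computationally (again just my own illustration, nothing from the paper): given the windows, we can check that every pair is either disjoint or nested, and recover the rooted forest by assigning to each window, as its parent, the shortest window strictly containing it.

```python
def build_laminar_forest(intervals, eps=1e-9):
    """Check that `intervals` (a list of (start, end) windows) form a laminar
    family and return, for each interval, the index of its parent in the
    induced rooted forest (None for roots)."""
    n = len(intervals)
    # 1) laminarity: every pair of intervals is either disjoint or nested
    for i in range(n):
        a, b = intervals[i]
        for j in range(i + 1, n):
            c, d = intervals[j]
            disjoint = b <= c + eps or d <= a + eps
            nested = (c - eps <= a and b <= d + eps) or (a - eps <= c and d <= b + eps)
            if not (disjoint or nested):
                raise ValueError(f"intervals {i} and {j} cross: not a laminar family")
    # 2) parent of i = the shortest interval strictly longer than i that contains it
    parent = [None] * n
    for i in range(n):
        a, b = intervals[i]
        best = None
        for j in range(n):
            c, d = intervals[j]
            if j != i and c - eps <= a and b <= d + eps and (d - c) > (b - a) + eps:
                if best is None or (d - c) < intervals[best][1] - intervals[best][0]:
                    best = j
        parent[i] = best
    return parent
```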
Okay, so what is the high-level approach? Recall that we are in the case where the intervals form a laminar family, and we are looking at the maximum throughput version. The algorithm has two phases at a high level.

The first phase finds a pseudo-allocation. What I mean by a pseudo-allocation is that I only look at the total resource requirement of a job, the area of its rectangle; I don't look separately at the processing time and the per-instant resource requirement. I say that as long as, for every interval, the total area of the jobs assigned within it is at most the total area of resources available in that interval, I call it feasible. Clearly this is not an actual feasible schedule: two jobs could fit inside an interval area-wise and yet be impossible to schedule together. So it is only a pseudo-allocation. Moreover, I only allow the assigned jobs to use up to some omega fraction of the available area, where omega could be something like one half. The hope is that if I can get a pseudo-allocation that uses only, say, half the area, then I can turn it into a feasible allocation that uses the whole area. When I compute the pseudo-allocation I also want a guarantee that I don't lose too much: I am relaxing the feasibility constraint, but I am also shrinking the available area, and I hope the optimum does not suffer too much. So if I can get a pseudo-allocation whose value is within roughly a factor omega of the optimum, then in phase two I turn it into a feasible allocation while maintaining that approximation. These are the two phases of the algorithm.

So let's go to phase one. As usual, I start with a linear program. The linear program maximizes the total weight of the jobs that are scheduled; by the way, there is a small typo in the constraint on the slide here. The constraint is that the area bound has to be respected for every interval: the sum of the areas of the jobs assigned within an interval, where the area a_j is the product of the processing time p_j and the resource requirement s_j, must be at most omega times the total amount of resource available in that interval. And each x_j lies between zero and one. Solving this linear program gives you a fractional pseudo-allocation, and I need to turn this fractional pseudo-allocation into an integral one.

First I need to give you two guarantees. The first is that the fractional pseudo-allocation is a good approximation of the optimum. This is fairly clear: take any optimal schedule; it satisfies these constraints with omega equal to one, because the area bound is a necessary condition for feasibility. So take an integral optimal allocation and scale every x_j down to omega, making every job fractional.
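To make this LP concrete, here is a rough sketch of it with SciPy (my own code, not the paper's exact formulation; it reuses the Job record from the earlier sketch, takes the laminar intervals as given, and omega is the area-shrinking parameter I mentioned).

```python
import numpy as np
from scipy.optimize import linprog

def pseudo_allocation_lp(jobs, intervals, m, omega=0.5, capacity=1.0):
    """Fractional pseudo-allocation LP (a sketch):

        max  sum_j w_j * x_j
        s.t. for every interval I in the laminar family:
                 sum over jobs j whose window lies inside I of (p_j * s_j) * x_j
                     <=  omega * m * capacity * |I|
             0 <= x_j <= 1
    """
    n = len(jobs)
    c = -np.array([job.weight for job in jobs])        # linprog minimizes, so negate
    A_ub, b_ub = [], []
    for (a, b) in intervals:
        row = np.zeros(n)
        for j, job in enumerate(jobs):
            if a <= job.release and job.deadline <= b:  # job's window sits inside I
                row[j] = job.p * job.s                  # area of job j's rectangle
        A_ub.append(row)
        b_ub.append(omega * m * capacity * (b - a))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x                                        # fractional pseudo-allocation
```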
Then you get a fractional allocation that satisfies these constraints and has value omega times OPT. So, first of all, there exists a fractional pseudo-allocation with a good guarantee relative to the optimum. Now I need to transform this fractional pseudo-allocation into an integral pseudo-allocation, so I need to tell you how these variables are rounded. The rounding crucially uses the fact that the intervals form a laminar family, and also the slackness parameter; you will see that right now.

How do I round? I do a bottom-up rounding. Right now I have a fractional allocation. The red jobs here indicate jobs that are fractionally assigned, and the blue jobs indicate jobs that are integrally assigned. Look at the interval of every job; initially I color it gray, and the goal is that as I round, I change the color of these intervals from gray to black; I'll explain what black means in a minute. Gray essentially means not yet processed. I color an interval black when the following property holds. We know the intervals form a laminar family, so look at the tree node corresponding to an interval and take any path of intervals from that node down to a leaf of its subtree. The invariant I want is that there is at most one fractional job along any such path. For the leaves this is trivially true, because the path contains a single interval, and even if its job is fractional the property is satisfied. If you look at J5 or J6 here, the path contains only one fractional allocation, J11 or J12 respectively. But if you look at this one, no: here there is a path with two fractional allocations. The same happens for J4, for exactly the same reason, and that is also why this other interval is still gray.

Now I would like to transform all the gray intervals into black intervals in a bottom-up fashion, and clearly I have to reduce the fractional allocations to do that. Here is how the rounding works. Assume, by induction, that I only process a gray interval once all of its subintervals are already black. Take such a gray interval. If the constraint is violated on some path from it down to a leaf, then there are two fractional jobs on that path, and one of them must be the root itself, because starting from every child of this node I already have the guarantee that any downward path contains at most one fractional job. So if there is a violation, it is because the root job itself is fractional. What do I do then? The obvious thing: I reduce the fractional part of the root job and increase the fractional parts of the fractional jobs below it.
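To make this push-down step concrete, here is a very simplified sketch (my own illustrative code with made-up names, not the paper's procedure; as I explain next, the shift is done so that the total assigned area inside the interval stays the same).

```python
def push_down_fraction(root, below, x, area, eps=1e-9):
    """Area-conserving shift used when processing a gray interval.

    `root` is the fractional job whose window is the gray interval, `below`
    are the fractional jobs strictly inside it (at most one per downward
    path), `x[j]` is the current fractional assignment, and
    `area[j] = p_j * s_j`.  We decrease x[root] and increase x[j] for j in
    `below` so that sum_j area[j] * x[j] is unchanged, stopping when
    x[root] hits 0 or every job below is integral.
    """
    for j in below:
        if x[root] <= eps:
            break
        if x[j] >= 1.0 - eps:
            continue
        # area job j can still absorb vs. area the root can still give up
        moved = min(x[root] * area[root], (1.0 - x[j]) * area[j])
        x[root] -= moved / area[root]
        x[j] += moved / area[j]
    return x
```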
I do this shift in a way that conserves the total area assigned inside the interval, so that the constraint corresponding to this interval is not violated. At the same time, I need to argue that we don't move far from the optimum. This is fairly clear, because these are knapsack-type constraints: if a time interval contains multiple fractional jobs, and my goal is to maximize the total weight of the scheduled jobs, then I can assume all of these fractional jobs have the same weight-to-area ratio. Otherwise I could strictly increase the fractional part of the one with the largest weight-to-area ratio and improve the optimum of the linear program, a contradiction. So they all have the same weight-to-area ratio, and as a result, when I decrease the fractional part of the root and increase the fractional parts of the jobs below it while conserving the area, the objective value is not affected at all. As I keep doing this, I reach the invariant I want: either the fraction of J1 drops to zero, or all of the fractional jobs below it, J2 through J7, become integral. In either case I can color this interval black.

At the end of this process I have a lot of black intervals, but along any path there can still be one fractional job. These remaining fractional jobs I simply round up: I make them integral. There is a bit more subtlety here, but roughly, when you do this the objective value can only increase, because whatever was fractionally assigned is now integrally assigned. But what about feasibility? The area constraint could now be violated. By how much? Not by much, and this is where I use the slackness assumption and the laminarity. Look at any black interval; say inside it I had three fractional jobs that I rounded up. By the invariant I am maintaining, there are no other fractional jobs inside the window of job J1, and the same holds for J2 and J3. Since the windows form a laminar family, these three windows are pairwise disjoint. And by the slackness assumption, the processing time of each of these jobs is significantly smaller than its window. So the total amount by which the assigned area exceeds the bound is at most lambda, the slackness parameter, times the length of the interval, which relative to the total available area is an extra lambda over m. So I am cheating a little, relaxing the constraint: earlier I had omega, now I have omega plus lambda over m. But for sufficiently small lambda I still have enough room to hope that a pseudo-integral allocation can be transformed into a feasible allocation with the same approximation guarantee.

The high-level intuition for that last step follows the classic bin packing routine: in the Next Fit 2-approximation for bin packing, you keep throwing items into the current bin, and the moment an item does not fit, you open a new bin and put the item there.
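Just to recall that routine, here is a minimal Next Fit sketch (standard textbook code, nothing specific to this paper):

```python
def next_fit(sizes, capacity=1.0):
    """Next Fit bin packing: keep one open bin, and open a new one the
    moment the current item does not fit. Uses at most twice the optimal
    number of bins."""
    bins = []                     # list of closed bins (lists of item sizes)
    current, load = [], 0.0
    for s in sizes:
        if load + s > capacity and current:
            bins.append(current)  # close the current bin
            current, load = [], 0.0
        current.append(s)
        load += s
    if current:
        bins.append(current)
    return bins

# e.g. next_fit([0.5, 0.6, 0.3, 0.7]) -> [[0.5], [0.6, 0.3], [0.7]]
```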
Next Fit uses at most twice the number of bins an optimum would use, and similarly, if you choose omega somewhere close to one half here, you can argue that with twice the amount of space you can transform the pseudo-allocation into a feasible allocation. With that, you get the approximation guarantee that was promised for the maximum throughput problem.

I would like to close with some open problems. For both the maximum throughput and the machine minimization variants, clearly the first thing you would want to do is to remove the slackness assumption; for machine minimization there is also the additional assumption on the window sizes to remove. Without these assumptions, what are the best algorithms and the best approximation ratios you can come up with? As a matter of fact, this is still a very special case, because all the machines are assumed to be identical. If you change the resource capacities of the machines, so that every machine can accommodate a different amount of resource at any instant in time, then most likely the second phase no longer works, so you would have to come up with a different algorithm there too. The high-level hope would be to come up with algorithms, or hardness-of-approximation results, for the most general variants. Yeah, that is all I wanted to say.

[Audience] Since you gave the talk on the authors' behalf, it may not be fair to direct questions to you, but: visually, you described a job as a two-dimensional figure. Would it make sense for a job to have more requirements, like fitting into higher-dimensional bins?

[Speaker] Yes, that is exactly this open problem here: can you get at least an approximation algorithm that is logarithmic in the dimension? I don't know; they state it as an open problem.

[Audience] The pseudo-allocation is the analogue, for bin packing, of the fact that the total size of all the items is a lower bound on the number of bins.

[Speaker] Yes, exactly. Thanks.