Hello everyone, I'm Quachin Fung. I'm a graduate student of computer science at National Taiwan Ocean University. Today I'm going to talk about a brief history of the Linux CPU scheduler. I will first introduce the development timeline of CPU schedulers, and then give a brief introduction to each scheduler, along with the problems that led to each being superseded.

This is the development timeline of the CPU schedulers. In 1991, the first-ever Linux process scheduler came along with the initial release of the Linux kernel. Moving forward to 1999, Linux 2.2 introduced scheduling classes, in which both SCHED_RR and SCHED_FIFO appeared. Next, in 2001, Linux 2.4 implemented the O(n) scheduler. Moving forward to 2003, the O(1) scheduler was introduced, which lived in the kernel for several years. Next, in 2007, the Completely Fair Scheduler (CFS) was introduced, which is essentially an O(log n) scheduler. Moving forward to 2014, Linux 3.14 introduced the SCHED_DEADLINE scheduler, which is based on the Earliest Deadline First (EDF) scheduling algorithm and the Constant Bandwidth Server (CBS). Five years later, energy-aware scheduling (EAS) was introduced in Linux 5.0.

So, this is the timeline of the CPU schedulers. We can even say that the evolution of the CPU scheduler is basically the evolution of the kernel itself, because, for example, multi-core processors didn't exist in the early years of the kernel.

Now, the very first CPU scheduler. The scheduler traverses the whole run queue to find the task with the largest remaining time slice to execute. The problem with this scheduler is that all CPUs share the same run queue, which leads to bad scalability, and the number of tasks is limited because they are stored in a fixed-size array. Therefore, the O(n) scheduler was born, mainly to solve the task-count limitation.
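The linear scan described above can be illustrated with a toy sketch (my own simplified model, not the actual kernel code): every pick walks the whole run queue and keeps the task with the largest remaining time slice, which is what makes the cost O(n).

```python
# Toy model of the earliest Linux scheduler's pick: scan every runnable
# task and choose the one with the largest remaining time slice
# ("counter"). Field names here are illustrative, not the kernel's.

def pick_next(run_queue):
    """Return the task with the largest remaining counter via an O(n)
    linear scan, or None if the run queue is empty."""
    best = None
    for task in run_queue:
        if best is None or task["counter"] > best["counter"]:
            best = task
    return best

run_queue = [
    {"pid": 1, "counter": 3},
    {"pid": 2, "counter": 7},
    {"pid": 3, "counter": 5},
]
print(pick_next(run_queue)["pid"])  # prints 2, the task with counter 7
```

With a handful of tasks this is perfectly fine; the cost only becomes visible when the run queue grows, which is exactly the scalability problem mentioned above.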
Still, the scheduler has to traverse the whole run queue to find the most suitable task to execute, making the scheduler a bottleneck when there are many tasks present in the system. Besides, the run queue is still shared across all CPUs.

Coming up next is the O(1) scheduler, introduced by Ingo Molnar. The data structure for a run queue is two sets of a bitmap plus priority queues, also known as the active queue and the expired queue. The scheduler picks the next task from the active queue by using the bitmap to get the index of the highest-priority queue, then picking the first task of that queue. By leveraging the corresponding CPU instruction (find-first-set), the index lookup can be done in constant time. Starting from this scheduler, each CPU has its own run queue. We may think that it might be a perfect scheduler due to its O(1) nature. However, in order to provide better interactivity, many heuristics were added subsequently, which could distribute time slices unevenly.

The next scheduler, the Completely Fair Scheduler (CFS), was inspired by Con Kolivas's Staircase Deadline Scheduler, which is a bit similar to the O(1) scheduler. It also has a so-called active queue and expired queue, but it removes the heuristics added to the O(1) scheduler, and the scheduling algorithm is also revised, making the scheduling latency bounded. The Staircase Deadline Scheduler once almost made its way into the mainline. Unfortunately, shortly after its introduction, Ingo Molnar came out with a completely different scheduler, which was later merged into the mainline. Con Kolivas then left the kernel development scene for quite some time.

The data structure for the run queue of CFS is a red-black tree. The scheduler picks the leftmost node, which corresponds to the most starved task at the time. The pick operation itself can be done in constant time, whereas the enqueue operation has a time complexity of O(log n).
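The O(1) scheduler's bitmap trick described above can be sketched as follows. This is a minimal toy model, assuming a simple list per priority level; the class name and fields are mine, and Python bit arithmetic stands in for the find-first-set CPU instruction the kernel actually uses.

```python
# Toy model of the O(1) scheduler's pick path: a bitmap marks which
# priority levels currently hold runnable tasks, so finding the highest
# priority is a single find-first-set operation. As in the kernel,
# a lower index means a higher priority.

NUM_PRIO = 140  # the O(1) scheduler used 140 priority levels

class ActiveQueue:
    def __init__(self):
        self.bitmap = 0
        self.queues = [[] for _ in range(NUM_PRIO)]

    def enqueue(self, prio, task):
        self.queues[prio].append(task)
        self.bitmap |= 1 << prio

    def pick_next(self):
        if self.bitmap == 0:
            return None  # nothing runnable
        # (x & -x).bit_length() - 1 yields the lowest set bit; the real
        # kernel uses a CPU instruction (e.g. bsf on x86) for this.
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        task = self.queues[prio].pop(0)
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)  # level is now empty
        return task

rq = ActiveQueue()
rq.enqueue(120, "normal task")
rq.enqueue(0, "realtime task")
print(rq.pick_next())  # prints "realtime task": priority 0 wins
```

The point of the bitmap is that neither enqueue nor pick ever scans the task list, so the cost no longer depends on how many tasks are runnable.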
CFS uses a so-called virtual runtime to keep fairness across tasks in the system, which is essentially a weighted total of the CPU time a task has obtained.

Per-entity load tracking (PELT) was introduced by Paul Turner et al. in Linux 3.8. Prior to the introduction of PELT, Linux tracked the load on a per-run-queue basis. The deficiency with this is that the scheduler cannot know exactly which task is contributing how much load to the run queue, which may in turn lead to bad decisions, for example, migrating a lightly loaded task to another CPU instead of the heavily loaded one. PELT is now used by one of the CPU frequency governors, schedutil, which can adjust the CPU frequency more sensibly because of the detailed information. Another use case is the load balancer of CFS: with PELT, it can migrate tasks based on per-task load instead of choosing one blindly.

The next scheduler is the SCHED_DEADLINE scheduler, introduced by Juri Lelli et al. in Linux 3.14. The main purpose of this scheduler is to serve time-sensitive tasks. In the days without this scheduler, tasks with precise timing requirements had to resort to the SCHED_RR or SCHED_FIFO policies. However, both of them schedule tasks by fixed priority, so these tasks may still be delayed by higher-priority ones. The SCHED_DEADLINE scheduler, on the other hand, guarantees the required timing by performing a schedulability (admission) test prior to actually scheduling the task. In other words, if the timing requirement cannot be met, the scheduler simply refuses to admit the task.

Coming up next is energy-aware scheduling (EAS). It was introduced mainly by Arm and Linaro in Linux 5.0. Its advantage is reduced energy consumption, especially on mobile devices. Without EAS, the scheduler is not aware of the capacity of each processor core, which varies a lot, at least on big.LITTLE architectures. The scheduler may then choose an inappropriate core for a task, for example, giving an I/O-bound task a performance core.
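The kind of placement decision EAS makes can be sketched with a toy model (entirely my own, far simpler than the kernel's energy model): among the cores with enough spare capacity for a task's load, pick the one with the lowest estimated energy cost. The load figures stand in for what per-entity load tracking would report.

```python
# Toy model of an energy-aware placement decision on a big.LITTLE-style
# system. 'capacity', 'load', and 'energy_per_unit' are illustrative
# fields, not the kernel's actual energy-model representation.

def pick_core(cores, task_load):
    """Return the index of the cheapest core that can still absorb
    task_load, or None if no core has enough spare capacity."""
    best, best_cost = None, None
    for i, core in enumerate(cores):
        if core["load"] + task_load > core["capacity"]:
            continue  # this core cannot fit the task
        cost = (core["load"] + task_load) * core["energy_per_unit"]
        if best_cost is None or cost < best_cost:
            best, best_cost = i, cost
    return best

cores = [
    {"capacity": 1024, "load": 100, "energy_per_unit": 3.0},  # big core
    {"capacity": 512,  "load": 200, "energy_per_unit": 1.0},  # LITTLE core
]
# A light task fits on the LITTLE core, where it costs less energy.
print(pick_core(cores, 100))  # prints 1 (the LITTLE core)
```

A heavier task that exceeds the LITTLE core's remaining capacity would fall back to the big core, which is exactly the performance-versus-energy trade-off mentioned above.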
It is worth noting that EAS is coupled with PELT. That is, EAS uses the information provided by PELT to evaluate performance-versus-energy trade-offs when making task placement decisions.

We are writing a book on Linux CPU scheduling, which can be considered the companion material of this talk. So far, it covers the necessary background on operating system concepts and the evolution of CPU scheduling in the Linux kernel, along with the relevant implementations. Please contact me if you would like to read a draft of this book.

These are my references. Thank you for attending the talk. If you have any questions, feel free to send me an email.