<?xml version="1.0" encoding="utf-8" ?><transcript><text start="0.03" dur="3.54">my name is Dr. Mike Murphy I&amp;#39;m an</text><text start="2.1" dur="3.9">assistant professor of computer science</text><text start="3.57" dur="5.43">and information systems at Coastal</text><text start="6" dur="5.34">Carolina University in this lecture I&amp;#39;m</text><text start="9" dur="5.549">going to introduce the fundamentals of</text><text start="11.34" dur="9.18">operating systems what they are what</text><text start="14.549" dur="9.871">they do and why they&amp;#39;re important now</text><text start="20.52" dur="5.97">what&amp;#39;s an operating system well it&amp;#39;s a</text><text start="24.42" dur="5.34">layer of software that provides two</text><text start="26.49" dur="8.03">important services to a computer system</text><text start="29.76" dur="7.59">it provides abstraction and arbitration</text><text start="34.52" dur="5.23">abstraction means hiding the details of</text><text start="37.35" dur="4.59">different hardware configurations so</text><text start="39.75" dur="4.469">that each application doesn&amp;#39;t have to be</text><text start="41.94" dur="5.119">tailored for each possible device that</text><text start="44.219" dur="5.671">might be present on the system</text><text start="47.059" dur="5.14">arbitration means that the operating</text><text start="49.89" dur="5.309">system manages access to shared hardware</text><text start="52.199" dur="5.13">resources so that multiple applications</text><text start="55.199" dur="4.171">can run on the same hardware at the same</text><text start="57.329" dur="6.091">time without interfering with one</text><text start="59.37" dur="6.47">another now these hardware resources</text><text start="63.42" dur="7.53">that need to be managed include the CPU</text><text start="65.84" dur="8.34">the hierarchy of memory all the</text><text start="70.95" dur="6.27">input/output devices on the system and</text><text start="74.18" 
dur="6.31">to some degree the power and system</text><text start="77.22" dur="5.039">management features in the system many</text><text start="80.49" dur="3.69">of these features are handled directly</text><text start="82.259" dur="4.021">by the hardware but the operating system</text><text start="84.18" dur="4.28">is involved particularly in energy</text><text start="86.28" dur="2.18">conservation</text><text start="88.64" dur="5.79">now the abstraction features of the</text><text start="91.71" dur="5.51">operating system allow hardware devices</text><text start="94.43" dur="8.829">manufactured by different manufacturers</text><text start="97.22" dur="9.429">to have the same interface within software</text><text start="103.259" dur="5.701">for applications to use these hardware</text><text start="106.649" dur="4.26">devices all have different low-level</text><text start="108.96" dur="4.82">instruction sets they all have</text><text start="110.909" dur="6.81">particular capabilities</text><text start="113.78" dur="7.24">features and details that are unique to</text><text start="117.719" dur="5.671">each hardware device if we didn&amp;#39;t have a</text><text start="121.02" dur="4.709">common interface into these hardware</text><text start="123.39" dur="4.619">devices first of all our variety of</text><text start="125.729" dur="4.59">hardware might be limited but worse</text><text start="128.009" dur="4.681">every application on the system would</text><text start="130.319" dur="3.271">have to be programmed to use every</text><text start="132.69" dur="4.76">single device on the</text><text start="133.59" dur="7.289">system as an example back in the 1990s</text><text start="137.45" dur="5.759">computer games often required internal</text><text start="140.879" dur="5.371">programming for specific video cards and</text><text start="143.209" dur="4.961">specific sound cards it was often</text><text start="146.25" dur="4.86">necessary to go into the settings 
for</text><text start="148.17" dur="5.039">each game and tell the game what type of</text><text start="151.11" dur="4.17">video card you had what type of sound</text><text start="153.209" dur="4.081">card you had for the purpose of</text><text start="155.28" dur="4.62">configuring the game to use that</text><text start="157.29" dur="4.44">particular hardware imagine if that had</text><text start="159.9" dur="4.71">to be done for every single application</text><text start="161.73" dur="5.45">on the system including say a calculator</text><text start="164.61" dur="5.879">or a web browser that would be a very</text><text start="167.18" dur="5.559">untenable situation in terms of being</text><text start="170.489" dur="7.291">able to make use of computers the way we</text><text start="172.739" dur="7.051">use them today also what if we could</text><text start="177.78" dur="5.069">only run one program at a time on a</text><text start="179.79" dur="5.97">computer system years and years ago that</text><text start="182.849" dur="4.521">was the case however a modern system is</text><text start="185.76" dur="3.839">running multiple applications</text><text start="187.37" dur="4.479">simultaneously and it&amp;#39;s up to the</text><text start="189.599" dur="3.901">operating system to ensure that all</text><text start="191.849" dur="5.371">these applications can access your</text><text start="193.5" dur="6.6">resources so each CPU is divided among</text><text start="197.22" dur="6.63">the different programs each program gets</text><text start="200.1" dur="8.4">access to memory input output as well as</text><text start="203.85" dur="7.38">disk and in an ideal world the operating</text><text start="208.5" dur="5.43">system also enforces policies that</text><text start="211.23" dur="5.25">isolate applications from each other so</text><text start="213.93" dur="4.74">that a crash in one application doesn&amp;#39;t</text><text start="216.48" dur="9">take down the entire system or 
other</text><text start="218.67" dur="8.129">applications now in these examples do we</text><text start="225.48" dur="3.68">have a situation where we have</text><text start="226.799" dur="5.101">abstraction or arbitration</text><text start="229.16" dur="6.04">well the first example supporting both</text><text start="231.9" dur="6.839">Intel and AMD processors this is an</text><text start="235.2" dur="5.849">example of abstraction we don&amp;#39;t have to</text><text start="238.739" dur="5.821">write separate software for an Intel</text><text start="241.049" dur="7.761">processor relative to an AMD processor</text><text start="244.56" dur="7.14">at least for 99.99% of applications</text><text start="248.81" dur="6.67">simply write the application once it</text><text start="251.7" dur="6.539">will run on either processor switching</text><text start="255.48" dur="6.15">between applications is an example of</text><text start="258.239" dur="6.311">arbitrating hardware resources among the</text><text start="261.63" dur="5.629">different applications on the system</text><text start="264.55" dur="5.32">separating memory allocated to different</text><text start="267.259" dur="6.451">applications is also an arbitration</text><text start="269.87" dur="5.849">activity it keeps one application from</text><text start="273.71" dur="4.759">overwriting the contents of memory that&amp;#39;s</text><text start="275.719" dur="5.611">being used by another application</text><text start="278.469" dur="5.531">enabling video conferencing software to</text><text start="281.33" dur="5.25">use different camera devices this would</text><text start="284" dur="4.409">be an example of abstraction the video</text><text start="286.58" dur="4.589">conferencing program just has to know</text><text start="288.409" dur="5.01">how to use a camera interface that the</text><text start="291.169" dur="3.541">operating system provides and then</text><text start="293.419" dur="4.53">different cameras from 
different</text><text start="294.71" dur="5.22">manufacturers can be used without having</text><text start="297.949" dur="4.68">to write the application the video</text><text start="299.93" dur="6.359">conferencing program to be able to talk</text><text start="302.629" dur="6.331">to each individual camera similarly</text><text start="306.289" dur="4.771">accessing two different hard disks from</text><text start="308.96" dur="4.35">two different manufacturers any</text><text start="311.06" dur="4.889">underlying detail differences between</text><text start="313.31" dur="4.8">those drives can be handled by using the</text><text start="315.949" dur="5.161">operating system to abstract away the</text><text start="318.11" dur="5.97">details sending and receiving messages</text><text start="321.11" dur="5.85">over a network is both abstraction and</text><text start="324.08" dur="4.709">arbitration on the one hand we&amp;#39;re</text><text start="326.96" dur="3.75">abstracting away the details of each</text><text start="328.789" dur="3.481">particular network card to be able to</text><text start="330.71" dur="4.079">send and receive the message on the</text><text start="332.27" dur="4.109">other hand we&amp;#39;re sharing that network</text><text start="334.789" dur="6.361">card among all the different</text><text start="336.379" dur="6.66">applications on the system now we can</text><text start="341.15" dur="4.829">think of the system in terms of layers</text><text start="343.039" dur="6.18">at the bottom of the layer cake we have</text><text start="345.979" dur="7.141">the hardware this is what executes the</text><text start="349.219" dur="6.6">software in the top three layers the operating</text><text start="353.12" dur="5.13">system is the middleman so to speak</text><text start="355.819" dur="4.261">between the applications and the</text><text start="358.25" dur="4.05">libraries and utilities upon which the</text><text start="360.08" dur="7.769">applications depend and the 
underlying</text><text start="362.3" dur="8.85">hardware specifically the core of the</text><text start="367.849" dur="7.021">operating system or the kernel so named</text><text start="371.15" dur="6.96">after say a kernel of corn is the</text><text start="374.87" dur="6.299">minimum piece of software that&amp;#39;s needed</text><text start="378.11" dur="6.089">to share the hardware between the</text><text start="381.169" dur="5.97">different applications whatever lives</text><text start="384.199" dur="7.01">outside the kernel that software is</text><text start="387.139" dur="7.411">said to be in user space that means it&amp;#39;s</text><text start="391.209" dur="6.141">application code or sometimes even parts</text><text start="394.55" dur="6.16">of the operating system that are not</text><text start="397.35" dur="8.93">strictly necessary to share the hardware</text><text start="400.71" dur="8.46">or abstract away its details now</text><text start="406.28" dur="5.05">operating systems implement a common</text><text start="409.17" dur="4.65">mechanism for allowing applications to</text><text start="411.33" dur="6.21">access the hardware which is abstraction</text><text start="413.82" dur="5.97">and the applications make requests from</text><text start="417.54" dur="5.04">the operating system to go and access</text><text start="419.79" dur="5.13">the hardware or other features by making</text><text start="422.58" dur="4.53">system calls down into the operating</text><text start="424.92" dur="4.23">system this is called entering the</text><text start="427.11" dur="6.15">operating system or entering</text><text start="429.15" dur="6.84">the kernel from the top half operating</text><text start="433.26" dur="5.79">systems are also able to alert</text><text start="435.99" dur="5.07">applications that hardware has changed</text><text start="439.05" dur="4.29">perhaps a network packet has come in</text><text start="441.06" dur="7.2">perhaps the user has pressed a 
key on</text><text start="443.34" dur="7.07">the keyboard these alerts are delivered</text><text start="448.26" dur="4.62">via a mechanism called interrupts and</text><text start="450.41" dur="4.57">referred to as entering the kernel from the</text><text start="452.88" dur="5.34">bottom half from the hardware side of</text><text start="454.98" dur="5.22">the operating system we should also note</text><text start="458.22" dur="4.71">that operating systems can manage and</text><text start="460.2" dur="6.84">terminate applications by sending</text><text start="462.93" dur="6.24">signals to those applications now there</text><text start="467.04" dur="4.47">are a wide variety of operating systems</text><text start="469.17" dur="5.19">out there they basically fall into two</text><text start="471.51" dur="3.69">categories Microsoft Windows which is</text><text start="474.36" dur="4.19">not Unix</text><text start="475.2" dur="6.39">and everything else which is Unix</text><text start="478.55" dur="5.8">Microsoft Windows systems non-UNIX</text><text start="481.59" dur="5.49">systems are the most popular desktop</text><text start="484.35" dur="5.25">operating systems at the moment however</text><text start="487.08" dur="5.82">they&amp;#39;re rapidly being eclipsed by tablet</text><text start="489.6" dur="6.39">devices most of which are running UNIX</text><text start="492.9" dur="5.84">style systems Windows is typically</text><text start="495.99" dur="5.04">pre-installed by PC manufacturers which</text><text start="498.74" dur="5.89">accounts for a good portion of its</text><text start="501.03" dur="6.95">present popularity UNIX systems on the</text><text start="504.63" dur="6.42">other hand are installed by the end-user</text><text start="507.98" dur="6.61">except for commercial UNIX systems and</text><text start="511.05" dur="7.16">Mac OS X Mac OS X is probably the</text><text start="514.59" dur="6.27">best-selling UNIX system to date and</text><text start="518.21" 
dur="7.92">Linux which includes the Android</text><text start="520.86" dur="5.27">platform is probably second</text><text start="526.88" dur="4.34">now there are some mainframe systems out</text><text start="529.13" dur="5.73">there also some of which are UNIX-like</text><text start="531.22" dur="6.549">some use custom operating systems there</text><text start="534.86" dur="5.399">are a variety of players in the embedded</text><text start="537.769" dur="5.911">systems market however that&amp;#39;s presently</text><text start="540.259" dur="7.08">dominated by Android with Apple&amp;#39;s iOS</text><text start="543.68" dur="4.519">which is based on Mac OS X in a close</text><text start="547.339" dur="6.271">second</text><text start="548.199" dur="10.421">others Symbian BlackBerry OS TinyOS are</text><text start="553.61" dur="8.31">confined to smaller markets now the idea</text><text start="558.62" dur="6.6">behind the UNIX systems began with a</text><text start="561.92" dur="5.58">time sharing system started in 1964</text><text start="565.22" dur="4.83">called Multics and the idea behind</text><text start="567.5" dur="4.8">Multics was to make computing into a</text><text start="570.05" dur="5.31">remote service that could be accessed by</text><text start="572.3" dur="4.92">terminals using telephone lines it</text><text start="575.36" dur="4.349">wasn&amp;#39;t terribly successful because</text><text start="577.22" dur="5.369">dial-up connections of the time at about</text><text start="579.709" dur="5.731">9600 baud were rather slow however</text><text start="582.589" dur="4.74">Canada actually did have a Multics system</text><text start="585.44" dur="3.69">deployed and used it as part of their</text><text start="587.329" dur="5.07">national defense network until the year</text><text start="589.13" dur="5.72">2000 Multics is remembered however</text><text start="592.399" dur="6.69">primarily for its influence as a</text><text start="594.85" dur="6.609">multi-user 
shared system and today with</text><text start="599.089" dur="4.891">cloud computing there&amp;#39;s some significant</text><text start="601.459" dur="4.98">parallels between some of the cloud</text><text start="603.98" dur="4.5">computing models that we use and some of</text><text start="606.439" dur="6.241">the ideas that were pioneered back in</text><text start="608.48" dur="7.799">the Multics system now UNIX was actually</text><text start="612.68" dur="7.23">a play on the term Multics it was</text><text start="616.279" dur="5.941">developed by Bell Labs in 1969 it&amp;#39;s</text><text start="619.91" dur="5.19">actually a trademarked term so some</text><text start="622.22" dur="5.549">authors use *nix to refer to the</text><text start="625.1" dur="4.64">family of systems and use the word UNIX</text><text start="627.769" dur="4.771">only for the commercial distributions</text><text start="629.74" dur="5.5">the original UNIX systems were</text><text start="632.54" dur="4.59">commercial distributions with free</text><text start="635.24" dur="7.05">variants BSD and Linux</text><text start="637.13" dur="8.97">coming along in the late 80s and early 90s now</text><text start="642.29" dur="5.94">UNIX systems are time sharing systems in</text><text start="646.1" dur="4.589">that they support multiple users at the</text><text start="648.23" dur="5.25">same time but they&amp;#39;re designed to be run</text><text start="650.689" dur="5.551">on local computer resources instead of</text><text start="653.48" dur="4.829">remote resources the Berkeley Software</text><text start="656.24" dur="4.5">Distribution which is based on AT&amp;amp;T&amp;#39;s</text><text start="658.309" dur="2.941">commercial distribution was one of the</text><text start="660.74" dur="2.37">first</text><text start="661.25" dur="5.009">open-source OSes one of the first</text><text start="663.11" dur="4.68">open-source UNIX systems to emerge it</text><text start="666.259" dur="5.551">was based on the commercial 
distribution</text><text start="667.79" dur="5.549">so AT&amp;amp;T sued UC Berkeley but an eventual</text><text start="671.81" dur="4.469">settlement allowed Berkeley to</text><text start="673.339" dur="5.881">distribute BSD freely in both source and</text><text start="676.279" dur="4.98">binary form today&amp;#39;s descendants of BSD</text><text start="679.22" dur="5.52">include FreeBSD OpenBSD</text><text start="681.259" dur="8.01">and Mac OS X which is based on the UNIX</text><text start="684.74" dur="6.12">BSD system Linux however was an</text><text start="689.269" dur="3.781">alternate approach to getting a</text><text start="690.86" dur="4.649">unix-like kernel running on a computer</text><text start="693.05" dur="3.93">system this was started by Linus</text><text start="695.509" dur="3.901">Torvalds as an undergrad at the</text><text start="696.98" dur="5.969">University of Helsinki released in</text><text start="699.41" dur="5.099">source code form in 1991 and often</text><text start="702.949" dur="3.63">combined with a set of user space</text><text start="704.509" dur="4.351">utilities and libraries created by the</text><text start="706.579" dur="4.531">GNU project thus the resulting</text><text start="708.86" dur="5.31">combination is sometimes called GNU</text><text start="711.11" dur="5.01">slash Linux it has been commercially</text><text start="714.17" dur="4.409">successful for many device classes</text><text start="716.12" dur="5.969">including web servers network and</text><text start="718.579" dur="5.851">embedded devices and mobile phones some</text><text start="722.089" dur="5.461">people including myself use Linux as a</text><text start="724.43" dur="5.37">desktop platform and it does have the</text><text start="727.55" dur="4.8">benefit of having the one kernel that&amp;#39;s</text><text start="729.8" dur="6.24">scalable from embedded devices all the</text><text start="732.35" dur="5.82">way up to supercomputers a valuable 
tool</text><text start="736.04" dur="8.06">for us in terms of learning about</text><text start="738.17" dur="9.45">operating systems now what is Linux well</text><text start="744.1" dur="6.76">generically the term Linux refers</text><text start="747.62" dur="7.38">to a class of operating systems that use</text><text start="750.86" dur="6.39">a common kernel now distributions of</text><text start="755" dur="4.29">Linux or operating environments as</text><text start="757.25" dur="4.399">they&amp;#39;re sometimes called are cobbled</text><text start="759.29" dur="5.099">together by taking the Linux kernel and</text><text start="761.649" dur="4.351">adding various different user space</text><text start="764.389" dur="4.861">tools to it</text><text start="766" dur="5.62">many of these core tools originated in</text><text start="769.25" dur="5.579">the GNU project which originally</text><text start="771.62" dur="5.55">started at MIT and thus these systems are</text><text start="774.829" dur="5.341">sometimes called GNU slash Linux</text><text start="777.17" dur="5.279">instead of just Linux I typically</text><text start="780.17" dur="6">however say just Linux when referring</text><text start="782.449" dur="5.7">to either the kernel itself or to the</text><text start="786.17" dur="4.409">operating system distributions that are</text><text start="788.149" dur="5.261">based upon that kernel</text><text start="790.579" dur="4.75">the Linux kernel was originally started</text><text start="793.41" dur="4.14">as a hobby by an undergraduate at the</text><text start="795.329" dur="5.101">University of Helsinki a gentleman by</text><text start="797.55" dur="6.51">the name of Linus Torvalds he still</text><text start="800.43" dur="5.31">heads the kernel development project now</text><text start="804.06" dur="4.62">Linux is what&amp;#39;s called a</text><text start="805.74" dur="5.699">monolithic kernel and that means that</text><text start="808.68" 
dur="6.48">the device drivers are built into the</text><text start="811.439" dur="5.551">kernel however it&amp;#39;s not a true</text><text start="815.16" dur="4.109">monolithic kernel because the device</text><text start="816.99" dur="4.8">drivers can be built as modules and</text><text start="819.269" dur="6.57">loaded and unloaded from the kernel at</text><text start="821.79" dur="10.14">runtime the latest stable version was</text><text start="825.839" dur="9.781">2.6.37 on January 5th of 2011 that</text><text start="831.93" dur="6.029">has since been updated to version 3.0.3</text><text start="835.62" dur="6.18">which I believe was the latest version that</text><text start="837.959" dur="6.6">I&amp;#39;ve seen as of now which is late August</text><text start="841.8" dur="9.779">2011 the latest version can also be</text><text start="844.559" dur="8.94">checked by going to www.kernel.org here is an</text><text start="851.579" dur="6.091">example of some of the main components</text><text start="853.499" dur="6.411">of a Linux operating system at the lowest</text><text start="857.67" dur="5.13">layer we have the computer hardware and</text><text start="859.91" dur="5.469">on top of that hardware we have the</text><text start="862.8" dur="6.409">Linux kernel the Linux kernel contains a</text><text start="865.379" dur="7.56">number of built-in subsystems that</text><text start="869.209" dur="7.471">enable access to the computer hardware</text><text start="872.939" dur="7.07">these include drivers memory management</text><text start="876.68" dur="5.56">process management file systems</text><text start="880.009" dur="5.471">networking including sockets and</text><text start="882.24" dur="7.26">protocols and other utilities that the</text><text start="885.48" dur="7.5">kernel provides on top of the kernel we</text><text start="889.5" dur="6.059">have the GNU C library upon which most</text><text start="892.98" dur="4.919">other libraries are based since C is</text><text 
start="895.559" dur="4.861">typically the lowest level system</text><text start="897.899" dur="6.06">language to which all other software is</text><text start="900.42" dur="6.839">eventually compiled on top of that we</text><text start="903.959" dur="5.88">have our compiler GCC some important</text><text start="907.259" dur="5.01">utilities called coreutils also part of</text><text start="909.839" dur="4.711">the GNU project and the Bourne-again</text><text start="912.269" dur="5.341">shell or bash which provides a</text><text start="914.55" dur="5.73">command-line interface into the system</text><text start="917.61" dur="5.81">all we need in order to run a Linux</text><text start="920.28" dur="6.45">system and do useful things with it is</text><text start="923.42" dur="7.12">everything below this line where I&amp;#39;m</text><text start="926.73" dur="6.54">moving the mouse right here all the</text><text start="930.54" dur="5.34">software above this line is add-on</text><text start="933.27" dur="4.98">software it&amp;#39;s not strictly needed in</text><text start="935.88" dur="5.07">order to run a Linux system but makes</text><text start="938.25" dur="5.61">Linux systems more convenient or do</text><text start="940.95" dur="5.31">other things so for example we can add</text><text start="943.86" dur="6.33">network tools such as a secure shell</text><text start="946.26" dur="6.18">server and a generic services</text><text start="950.19" dur="5.7">server called inetd that can launch</text><text start="952.44" dur="5.4">all kinds of different services we can</text><text start="955.89" dur="7.74">also run a LAMP stack that stands for</text><text start="957.84" dur="7.98">Linux Apache MySQL PHP if we&amp;#39;re in a</text><text start="963.63" dur="5.18">desktop scenario we can also run a</text><text start="965.82" dur="6.26">graphical user interface this includes X11</text><text start="968.81" dur="5.85">specifically the X.Org distribution and</text><text 
start="972.08" dur="5.31">one of a number of different</text><text start="974.66" dur="6.91">environments that can be run inside X11</text><text start="977.39" dur="8.64">popular ones include GNOME KDE and Xfce</text><text start="981.57" dur="4.46">which isn&amp;#39;t on this particular graphic</text><text start="986.27" dur="6.36">now a large number of people have come</text><text start="989.94" dur="5.34">up with different distributions of Linux</text><text start="992.63" dur="5.5">these distributions contain different</text><text start="995.28" dur="6.51">sets of software but they all share the</text><text start="998.13" dur="6.24">Linux kernel in common we typically</text><text start="1001.79" dur="6.12">classify distributions by which package</text><text start="1004.37" dur="6.15">manager they use unlike Windows and</text><text start="1007.91" dur="5.85">Mac</text><text start="1010.52" dur="5.37">systems where we install individual</text><text start="1013.76" dur="5.43">applications and then have to manually</text><text start="1015.89" dur="6.6">update each application by running</text><text start="1019.19" dur="6.09">separate installers Linux distributions</text><text start="1022.49" dur="5.49">typically use package managers which</text><text start="1025.28" dur="5.61">download and install extra software and</text><text start="1027.98" dur="7.65">update existing software with a single</text><text start="1030.89" dur="8.37">command these package managers include</text><text start="1035.63" dur="5.63">the Red Hat package manager or RPM which</text><text start="1039.26" dur="6.66">is used in Red Hat Enterprise Linux</text><text start="1041.26" dur="6.71">Fedora and Mandriva openSUSE and a number</text><text start="1045.92" dur="5.17">of other distributions</text><text start="1047.97" dur="6.089">a competing package format is the</text><text start="1051.09" dur="5.309">advanced package tool this is used</text><text 
start="1054.059" dur="5.85">primarily by Debian and derivatives of</text><text start="1056.399" dur="6.931">Debian such as Ubuntu there are other</text><text start="1059.909" dur="8.13">binary formats however in use Slackware</text><text start="1063.33" dur="7.5">uses a tgz format with its own package</text><text start="1068.039" dur="4.89">management commands Arch Linux has a</text><text start="1070.83" dur="4.349">custom written package manager called</text><text start="1072.929" dur="5.75">pacman and there are some others out</text><text start="1075.179" dur="6.36">there there are also source formats</text><text start="1078.679" dur="5.681">Gentoo is able to build everything from</text><text start="1081.539" dur="5.52">source using ebuilds Linux From Scratch</text><text start="1084.36" dur="4.35">isn&amp;#39;t so much a distribution as it is a</text><text start="1087.059" dur="4.771">book that tells you how to put together</text><text start="1088.71" dur="5.76">your own Linux system by compiling</text><text start="1091.83" dur="4.949">sources for each particular application</text><text start="1094.47" dur="4.47">this is of course time-consuming and</text><text start="1096.779" dur="4.041">there are other source formats out there</text><text start="1098.94" dur="4.92">as well</text><text start="1100.82" dur="5.469">now in industry Red Hat Enterprise Linux</text><text start="1103.86" dur="4.13">is quite popular it&amp;#39;s a Linux</text><text start="1106.289" dur="4.291">distribution from Red Hat Incorporated</text><text start="1107.99" dur="5.289">which is based in Research Triangle Park</text><text start="1110.58" dur="6.66">North Carolina that&amp;#39;s in the Durham area</text><text start="1113.279" dur="6.03">it is an open source distribution but</text><text start="1117.24" dur="4.679">the official product is sold with an</text><text start="1119.309" dur="5.521">update subscription accessed via per</text><text start="1121.919" dur="5.191">installation 
serial numbers therefore in</text><text start="1124.83" dur="4.89">order to get the update subscription and</text><text start="1127.11" dur="4.83">the installation media it&amp;#39;s necessary to</text><text start="1129.72" dur="5.4">purchase a subscription from Red Hat</text><text start="1131.94" dur="5.939">they do use the RPM package format in</text><text start="1135.12" dur="7.59">fact they invented it RPM stands for Red</text><text start="1137.879" dur="7.04">Hat Package Manager CentOS is a free</text><text start="1142.71" dur="5.28">rebuild of Red Hat Enterprise Linux</text><text start="1144.919" dur="5.921">that&amp;#39;s widely used in academia and industry</text><text start="1147.99" dur="5.059">on many high-performance computing</text><text start="1150.84" dur="5.16">systems and for many other applications</text><text start="1153.049" dur="7.5">this of course uses the RPM package</text><text start="1156" dur="7.259">format and is available at zero cost</text><text start="1160.549" dur="5.59">Fedora Core is a community distribution</text><text start="1163.259" dur="5.01">that&amp;#39;s sponsored by Red Hat that grew</text><text start="1166.139" dur="3.78">out of an old product that Red Hat used</text><text start="1168.269" dur="3.921">to produce which was called Red Hat</text><text start="1169.919" dur="4.59">Linux without the enterprise part and</text><text start="1172.19" dur="4.63">Fedora is used to develop and test</text><text start="1174.509" dur="4.741">packages and infrastructure that are</text><text start="1176.82" dur="3.1">later incorporated into the Enterprise</text><text start="1179.25" dur="5.92">Linux</text><text start="1179.92" dur="7.5">products Debian is a fully open source</text><text start="1185.17" dur="5.04">distribution which avoids proprietary</text><text start="1187.42" dur="5.25">software and has an emphasis on security</text><text start="1190.21" dur="4.68">and stability the stable version of</text><text start="1192.67" 
dur="5.22">Debian is extremely stable because the</text><text start="1194.89" dur="5.16">packages are extremely well tested the</text><text start="1197.89" dur="5.58">flip side is that those packages tend</text><text start="1200.05" dur="4.95">to be rather dated and thus an unstable</text><text start="1203.47" dur="4.53">version is available if more</text><text start="1205" dur="4.56">bleeding-edge packages are needed Debian</text><text start="1208" dur="5.13">uses something called the advanced</text><text start="1209.56" dur="8.19">package tool or apt in order to manage</text><text start="1213.13" dur="6.21">its software Ubuntu is actually based on</text><text start="1217.75" dur="3.75">Debian it&amp;#39;s sponsored by a company</text><text start="1219.34" dur="4.83">called Canonical Limited which is based</text><text start="1221.5" dur="4.5">in South Africa it&amp;#39;s designed to be easy</text><text start="1224.17" dur="3.74">to use and friendly to new users</text><text start="1226" dur="5.07">switching from competing platforms</text><text start="1227.91" dur="7.45">there&amp;#39;s a new release every six months</text><text start="1231.07" dur="6.39">and it&amp;#39;s often necessary to completely</text><text start="1235.36" dur="4.77">reinstall the operating system at each</text><text start="1237.46" dur="4.68">new release there is a functionality</text><text start="1240.13" dur="6.03">that&amp;#39;s built into the advanced packaging</text><text start="1242.14" dur="6.33">tool called dist-upgrade which seems</text><text start="1246.16" dur="4.53">like a great idea on paper but in</text><text start="1248.47" dur="6.32">practice often doesn&amp;#39;t work right and</text><text start="1250.69" dur="7.56">leaves the system unstable Debian and Ubuntu</text><text start="1254.79" dur="6.64">also have a package pinning policy thus</text><text start="1258.25" dur="6.06">once a package is released it&amp;#39;s only</text><text start="1261.43" dur="5.46">updated for security 
updates if new</text><text start="1264.31" dur="4.35">features are added one has to wait till</text><text start="1266.89" dur="4.79">the next version of Ubuntu comes out</text><text start="1268.66" dur="5.73">before getting the newly updated package</text><text start="1271.68" dur="6.43">now Arch Linux which is what I</text><text start="1274.39" dur="6.42">personally run is a minimalist framework</text><text start="1278.11" dur="4.98">for creating a custom system it&amp;#39;s not so</text><text start="1280.81" dur="5.04">much a distribution as it is a set of</text><text start="1283.09" dur="5.09">tools that enables each individual to</text><text start="1285.85" dur="4.95">create his or her own installation</text><text start="1288.18" dur="4.81">customized to his or her own desires</text><text start="1290.8" dur="5.42">it&amp;#39;s a different philosophy from</text><text start="1292.99" dur="5.09">traditional distributions it is a</text><text start="1296.22" dur="4.15">completely written from scratch</text><text start="1298.08" dur="4.63">distribution it&amp;#39;s not based on any other</text><text start="1300.37" dur="4.29">and it is what&amp;#39;s called a rolling</text><text start="1302.71" dur="4.83">distribution there are not discrete</text><text start="1304.66" dur="5.22">versions of Arch Linux every time you</text><text start="1307.54" dur="3.96">run a system update on Arch Linux you</text><text start="1309.88" dur="3.04">get the latest version of the operating</text><text start="1311.5" dur="2.74">system</text><text start="1312.92" dur="3.9">along with the latest version of all</text><text start="1314.24" dur="6.21">application packages and all the latest</text><text start="1316.82" dur="6">bugs and the pacman package manager</text><text start="1320.45" dur="6.09">which was written from scratch for Arch</text><text start="1322.82" dur="5.19">Linux is used to manage the software so</text><text start="1326.54" dur="4.17">these are just a 
few examples of</text><text start="1328.01" dur="5.13">different Linux systems different Linux</text><text start="1330.71" dur="8.43">distributions that can be used with the</text><text start="1333.14" dur="9.78">highly scalable Linux kernel in this</text><text start="1339.14" dur="5.25">lecture we&amp;#39;ll discuss managing a hard disk drive for access by multiple programs in</text><text start="1342.92" dur="3.6">particular we&amp;#39;ll talk about disk</text><text start="1344.39" dur="4.83">attachment talk about some of the</text><text start="1346.52" dur="5.88">properties of magnetic disks discuss</text><text start="1349.22" dur="7.11">disk addressing discuss partitioning and</text><text start="1352.4" dur="7.23">introduce solid-state drives now we begin</text><text start="1356.33" dur="5.67">with disk attachment disks are attached</text><text start="1359.63" dur="4.62">to the motherboard via some kind of</text><text start="1362" dur="5.58">cable and the exact type of cable</text><text start="1364.25" dur="6">depends upon the bus in use on the</text><text start="1367.58" dur="4.92">system there are several different types</text><text start="1370.25" dur="3.96">of bus which are implemented by</text><text start="1372.5" dur="3.54">different chips attached to the</text><text start="1374.21" dur="5.73">motherboard on the different computer</text><text start="1376.04" dur="7.2">systems one common bus that was widely</text><text start="1379.94" dur="5.31">in use until the early 2000s on consumer</text><text start="1383.24" dur="6.54">grade hardware was the Integrated Drive</text><text start="1385.25" dur="9.12">Electronics or IDE bus this has since</text><text start="1389.78" dur="7.29">been backronymed Parallel ATA or PATA it</text><text start="1394.37" dur="5.61">consisted of a 40 to 80 wire ribbon cable</text><text start="1397.07" dur="4.92">connecting to a 40-pin connector to</text><text start="1399.98" dur="4.8">provide 40 simultaneous parallel</text><text start="1401.99" dur="5.3">channels of communication between 
the</text><text start="1404.78" dur="5.34">motherboard and the hard drive</text><text start="1407.29" dur="5.38">enterprise level systems of that time</text><text start="1410.12" dur="6.17">period typically used SCSI or Small</text><text start="1412.67" dur="6.81">Computer System Interface buses which</text><text start="1416.29" dur="5.89">consisted of cabling with 50 to 80 pin</text><text start="1419.48" dur="6.21">connectors between the hard disk and the</text><text start="1422.18" dur="6.15">motherboard SCSI also defined a</text><text start="1425.69" dur="5.88">standard set of commands a standard</text><text start="1428.33" dur="6.99">protocol for interfacing with disks CDs</text><text start="1431.57" dur="6.36">and other types of storage devices this</text><text start="1435.32" dur="5.94">protocol was useful for recordable CD</text><text start="1437.93" dur="6.87">media and DVD-ROM media and was</text><text start="1441.26" dur="5.46">implemented on the ATA bus using the</text><text start="1444.8" dur="4.44">SCSI protocol</text><text start="1446.72" dur="7.4">in a system known as ATAPI or ATA</text><text start="1449.24" dur="7.11">Packet Interface now in modern times</text><text start="1454.12" dur="4.03">serial interfaces between the</text><text start="1456.35" dur="3.87">motherboard and the disk have replaced</text><text start="1458.15" dur="5.61">for the most part the parallel</text><text start="1460.22" dur="6.959">interfaces for ATA-style disks we have</text><text start="1463.76" dur="5.46">Serial ATA or SATA which replaces the 40</text><text start="1467.179" dur="4.891">pin connector with a 7 pin connector</text><text start="1469.22" dur="6.66">still uses the same protocol either the ATA</text><text start="1472.07" dur="6.96">or ATAPI protocol and Serial Attached</text><text start="1475.88" dur="5.159">SCSI or SAS which uses the SCSI</text><text start="1479.03" dur="5.97">protocol over a narrower channel</text><text start="1481.039" 
dur="6.061">consisting of 26 to 32 pins both of</text><text start="1485" dur="5.48">these new serial attachment mechanisms</text><text start="1487.1" dur="6.6">support higher bus transfer speeds</text><text start="1490.48" dur="5.38">enabling theoretically faster devices to</text><text start="1493.7" dur="4.109">be attached to the motherboard this does not</text><text start="1495.86" dur="6.689">necessarily mean however that the disks</text><text start="1497.809" dur="7.051">have gotten that much faster now we</text><text start="1502.549" dur="5.581">still store large amounts of information</text><text start="1504.86" dur="5.309">using magnetic disks these are metallic</text><text start="1508.13" dur="4.71">or glass platters that are coated in a</text><text start="1510.169" dur="5.341">magnetic surface and a stack of these</text><text start="1512.84" dur="5.64">platters is rotated at high speed by an</text><text start="1515.51" dur="4.76">electric motor a stack of heads moves</text><text start="1518.48" dur="4.38">back and forth across the platters</text><text start="1520.27" dur="5.44">altering the magnetic fields in order to</text><text start="1522.86" dur="5.16">read and write data moving the heads</text><text start="1525.71" dur="4.44">back and forth results in seek time and</text><text start="1528.02" dur="3.899">waiting for the platter to rotate around</text><text start="1530.15" dur="5.909">to the correct position results in</text><text start="1531.919" dur="8.101">rotational delay historically magnetic</text><text start="1536.059" dur="6.36">media were addressed by geometry the</text><text start="1540.02" dur="5.34">smallest addressable unit of space on a</text><text start="1542.419" dur="5.821">hard disk is called a sector and this is</text><text start="1545.36" dur="5.549">typically 512 bytes at least on older</text><text start="1548.24" dur="7.309">disks though other sizes have been used</text><text start="1550.909" dur="8.76">and newer drives go up to 4 
kilobytes</text><text start="1555.549" dur="6.101">tracks are circular paths at a constant</text><text start="1559.669" dur="6">radius from the center of the disk and</text><text start="1561.65" dur="6.93">each track is divided into sectors one</text><text start="1565.669" dur="5.791">head reads from a single track on a</text><text start="1568.58" dur="5.52">single side of each platter when you</text><text start="1571.46" dur="5.01">stack multiple heads up while using</text><text start="1574.1" dur="4.98">multiple tracks on the various sides of</text><text start="1576.47" dur="4.02">several different platters the result is</text><text start="1579.08" dur="4.47">what&amp;#39;s called a cylinder</text><text start="1580.49" dur="6.72">and historically accessing disks</text><text start="1583.55" dur="6.81">required accessing the particular data</text><text start="1587.21" dur="8.73">locations using cylinder head sector or</text><text start="1590.36" dur="7.59">CHS geometry addressing as disks grew</text><text start="1595.94" dur="5.58">larger and faster however this type of</text><text start="1597.95" dur="5.91">addressing scheme became limited and so</text><text start="1601.52" dur="4.55">logical block addressing was put into</text><text start="1603.86" dur="4.89">use and it&amp;#39;s now the standard today</text><text start="1606.07" dur="4.96">logical block addressing or LBA gives</text><text start="1608.75" dur="4.35">each block on a disk its own logical</text><text start="1611.03" dur="4.26">address and leaves it up to the disk</text><text start="1613.1" dur="4.98">firmware to convert the logical</text><text start="1615.29" dur="6.66">addresses into physical locations on the</text><text start="1618.08" dur="6.12">disk current standards with logical</text><text start="1621.95" dur="4.83">block addressing on ATA will allow</text><text start="1624.2" dur="6.359">enough space for disks up to 128</text><text start="1626.78" dur="6.66">petabytes operating systems 
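The addressing arithmetic in this part of the lecture can be sketched in a few lines of Python (my own illustration, not part of the lecture; the CHS-to-LBA formula and the example geometry values are the conventional ones):

```python
# Sketch: legacy CHS geometry mapped to a linear LBA sector number,
# plus the 48-bit LBA capacity limit mentioned in the lecture.

SECTOR_SIZE = 512  # bytes, the traditional sector size

def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Standard CHS-to-LBA formula; sectors are numbered from 1."""
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# 48-bit LBA: 2**48 addressable sectors of 512 bytes each
max_bytes = (2 ** 48) * SECTOR_SIZE
print(max_bytes == 2 ** 57)   # True
print(max_bytes // 2 ** 50)   # capacity in pebibytes: 128
```

With an illustrative geometry of 16 heads and 63 sectors per track, the very first sector (cylinder 0, head 0, sector 1) maps to LBA 0, which is how the firmware-side translation the lecture describes would begin.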
normally</text><text start="1630.559" dur="7.831">implement these 48 bit addresses using</text><text start="1633.44" dur="6.93">64-bit data structures thus operating</text><text start="1638.39" dur="4.169">systems that support 64-bit disk</text><text start="1640.37" dur="5.46">addressing can generally support hard</text><text start="1642.559" dur="8.341">disks up to 8 zettabytes of data assuming</text><text start="1645.83" dur="7.229">we&amp;#39;re using 512 byte sector sizes now</text><text start="1650.9" dur="4.26">regardless of the size of the disk it is</text><text start="1653.059" dur="5.221">convenient to partition the disk into</text><text start="1655.16" dur="7.26">multiple sections so that we can isolate</text><text start="1658.28" dur="6">data from each other we can isolate the</text><text start="1662.42" dur="3.45">main partition of the operating system</text><text start="1664.28" dur="3.779">from a partition we would use for</text><text start="1665.87" dur="4.38">swapping out pages of virtual memory for</text><text start="1668.059" dur="6.181">example and we can isolate that from</text><text start="1670.25" dur="6.69">user data also with early hard drives it</text><text start="1674.24" dur="5.97">was convenient to isolate partitions so</text><text start="1676.94" dur="4.95">as to minimize seek time by making it</text><text start="1680.21" dur="6.21">such that the head didn&amp;#39;t have to move</text><text start="1681.89" dur="7.26">as far in order to access data now there</text><text start="1686.42" dur="5.28">are two types of partition table or data</text><text start="1689.15" dur="4.83">structure that resides on the disk to</text><text start="1691.7" dur="5.34">indicate where on the disk the different</text><text start="1693.98" dur="5.13">partitions lie a common partition table</text><text start="1697.04" dur="4.08">type that&amp;#39;s in widespread use today is</text><text start="1699.11" dur="5.76">the master boot record based</text><text 
start="1701.12" dur="6.51">partitioning scheme and the way this</text><text start="1704.87" dur="5.64">works is that the BIOS on the system the</text><text start="1707.63" dur="6.37">basic input/output system actually loads</text><text start="1710.51" dur="7.81">the first 512 byte sector from the boot</text><text start="1714" dur="7.14">drive at boot time and code stored in</text><text start="1718.32" dur="5.79">that 512-byte sector loads the rest of</text><text start="1721.14" dur="5.34">the system also within that sector is</text><text start="1724.11" dur="4.61">stored the partition table which is</text><text start="1726.48" dur="5.4">called a DOS style partition table and</text><text start="1728.72" dur="5.46">the DOS partition table still uses</text><text start="1731.88" dur="4.83">legacy cylinder head sector addressing</text><text start="1734.18" dur="4.99">supports a maximum of four primary</text><text start="1736.71" dur="4.65">partitions one of which can be an</text><text start="1739.17" dur="5.13">extended partition with logical drives</text><text start="1741.36" dur="5.76">in it and supports maximum partition</text><text start="1744.3" dur="6.39">sizes and maximum partition starting</text><text start="1747.12" dur="5.31">addresses of 2 terabytes this is the</text><text start="1750.69" dur="3.87">default partitioning scheme for</text><text start="1752.43" dur="3.68">Microsoft Windows and most Linux</text><text start="1754.56" dur="4.65">distributions</text><text start="1756.11" dur="6.85">however as hard drives become larger and</text><text start="1759.21" dur="6.8">grow past 2 terabytes the GUID</text><text start="1762.96" dur="5.7">partition table or GPT starts to be used</text><text start="1766.01" dur="5.08">this is a larger partition table that</text><text start="1768.66" dur="6.12">can support disks or partitions up to 8</text><text start="1771.09" dur="6.63">zettabytes in size for compatibility with</text><text start="1774.78" dur="5.55">old 
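The 2-terabyte ceiling mentioned here falls straight out of the MBR partition entry format, which stores the starting LBA and the sector count as 32-bit values; GPT uses 64-bit LBAs instead. A short sketch of that arithmetic (my own illustration, not from the lecture):

```python
# Sketch: why a DOS/MBR partition tops out near 2 TB, and how GPT's
# 64-bit LBAs reach the 8 ZB figure the lecture quotes.

SECTOR_SIZE = 512
mbr_max_sectors = 2 ** 32                # a 32-bit sector-count field
mbr_max_bytes = mbr_max_sectors * SECTOR_SIZE

print(mbr_max_bytes == 2 ** 41)          # True: 2 TiB
print(mbr_max_bytes // 10 ** 12)         # roughly 2 terabytes: 2

gpt_max_bytes = (2 ** 64) * SECTOR_SIZE  # 64-bit LBA
print(gpt_max_bytes == 2 ** 73)          # True: 8 ZiB
```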
partitioning tools and prevent old</text><text start="1777.72" dur="4.55">tools from overwriting sections of the</text><text start="1780.33" dur="4.53">disk after seeing it as free space a</text><text start="1782.27" dur="5.73">protective or dummy Master Boot Record</text><text start="1784.86" dur="5.79">is retained at the beginning of the disk</text><text start="1788" dur="5.11">GPT is the default partitioning scheme</text><text start="1790.65" dur="6.24">in Mac OS X is an optional</text><text start="1793.11" dur="6.42">partitioning scheme in Linux and is</text><text start="1796.89" dur="6.65">supported in 64-bit versions of Windows</text><text start="1799.53" dur="7.28">7 Windows Vista and Windows Server 2008</text><text start="1803.54" dur="6.09">provided that the system uses the</text><text start="1806.81" dur="6.85">Extensible Firmware Interface or EFI</text><text start="1809.63" dur="6.25">instead of the legacy BIOS interface for</text><text start="1813.66" dur="5.43">Linux the GRUB 2 boot loader can use a</text><text start="1815.88" dur="6">legacy BIOS interface but requires a</text><text start="1819.09" dur="4.98">dedicated small partition on the hard</text><text start="1821.88" dur="7.47">drive in which to store the rest of the</text><text start="1824.07" dur="7.47">bootloader now hard drives and magnetic</text><text start="1829.35" dur="5.57">media are used for large quantities of</text><text start="1831.54" dur="6.57">space because of their relatively low cost</text><text start="1834.92" dur="6.21">when high performance is required we</text><text start="1838.11" dur="6.03">prefer to use solid-state drives or SSDs</text><text start="1841.13" dur="5.14">these drives have no moving parts which</text><text start="1844.14" dur="3.66">makes them generally faster and less</text><text start="1846.27" dur="5.16">subject to physical damage than</text><text start="1847.8" dur="6.38">mechanical hard disks most of these SSDs</text><text start="1851.43" dur="5.61">use 
NAND flash memory to store data and</text><text start="1854.18" dur="5.23">this is a storage mechanism that&amp;#39;s based</text><text start="1857.04" dur="5.67">on injecting or removing an electron</text><text start="1859.41" dur="5.97">from a flash cell injecting an electron</text><text start="1862.71" dur="5.64">into a flash cell changes its state from</text><text start="1865.38" dur="5.07">one to zero so this is backwards from</text><text start="1868.35" dur="4.14">what one would expect an empty flash</text><text start="1870.45" dur="5.76">cell actually has a state of 1 instead</text><text start="1872.49" dur="6.54">of 0 the membranes through which this</text><text start="1876.21" dur="6.63">electron is injected and</text><text start="1879.03" dur="5.4">removed eventually wear out typically</text><text start="1882.84" dur="5.4">after anywhere from a hundred thousand</text><text start="1884.43" dur="6.06">to a million cycles furthermore the</text><text start="1888.24" dur="4.53">electrons tend to leak out over long</text><text start="1890.49" dur="5.55">time periods periods of many years</text><text start="1892.77" dur="5.52">causing flash to lose data which makes</text><text start="1896.04" dur="6.75">flash based memory systems unsuitable</text><text start="1898.29" dur="6.03">for long term backups in this diagram we</text><text start="1902.79" dur="4.11">can see how a</text><text start="1904.32" dur="5.97">solid-state drive using flash memory</text><text start="1906.9" dur="11.16">works blank flash memory stores the</text><text start="1910.29" dur="12.77">value 1 1 1 1 1 1 1 1 if I want</text><text start="1918.06" dur="9.3">to write the value 1 0 0 1 0 1 0 0 I</text><text start="1923.06" dur="8.89">have to pop electrons into the second</text><text start="1927.36" dur="8.21">third fifth seventh and eighth flash</text><text start="1931.95" dur="7.4">locations those bit locations</text><text start="1935.57" 
dur="7.93">I&amp;#39;m pretending we only have a byte here</text><text start="1939.35" dur="9.76">if later I wish to change that stored</text><text start="1943.5" dur="9.18">value to 1 0 1 0 0 0 1 1 I must first</text><text start="1949.11" dur="7.55">erase that block of flash memory before</text><text start="1952.68" dur="7.02">I can program the new data value</text><text start="1956.66" dur="6.28">generally I must erase flash memory a</text><text start="1959.7" dur="7.44">whole block at a time typically 4</text><text start="1962.94" dur="5.7">kilobytes waiting for this block erasure</text><text start="1967.14" dur="4.47">procedure causes something called</text><text start="1968.64" dur="5.34">write amplification where successive</text><text start="1971.61" dur="6.06">writes to an SSD become progressively</text><text start="1973.98" dur="6.84">slower to avoid this problem we typically</text><text start="1977.67" dur="3.75">erase the SSD ahead of time whenever</text><text start="1980.82" dur="3.21">space</text><text start="1981.42" dur="5.49">has been freed on it and there&amp;#39;s an ATA</text><text start="1984.03" dur="7.65">command called TRIM that facilitates</text><text start="1986.91" dur="7.5">this process one other issue that has to</text><text start="1991.68" dur="5.27">be dealt with with SSDs is the fact that</text><text start="1994.41" dur="6.24">each cell can only be written to and</text><text start="1996.95" dur="7.32">read from generally written to erased</text><text start="2000.65" dur="5.91">and written to a fixed number of times</text><text start="2004.27" dur="6.25">this is typically between a hundred</text><text start="2006.56" dur="8.19">thousand and a million so to spread out</text><text start="2010.52" dur="6.9">the writes across the entire SSD the SSD</text><text start="2014.75" dur="5.07">moves data around the drive as files are</text><text start="2017.42" dur="5.1">updated and it also reserves a certain</text><text start="2019.82" 
dur="4.89">amount of free space unused</text><text start="2022.52" dur="4.17">so that that free space can be swapped</text><text start="2024.71" dur="5.91">in and out with space that&amp;#39;s in use</text><text start="2026.69" dur="5.97">later this process called wear leveling</text><text start="2030.62" dur="4.05">has the advantage of dramatically</text><text start="2032.66" dur="4.41">increasing the useful life of the SSD</text><text start="2034.67" dur="5.21">and reducing write amplification</text><text start="2037.07" dur="5.1">whenever clean blocks are made available</text><text start="2039.88" dur="4.3">however in order for the write</text><text start="2042.17" dur="4.26">amplification reduction to be effective</text><text start="2044.18" dur="5.19">the operating system and the underlying</text><text start="2046.43" dur="6.42">file system must support the ATA TRIM</text><text start="2049.37" dur="5.25">command and furthermore it&amp;#39;s impossible</text><text start="2052.85" dur="4.079">to ensure that the disk is secure</text><text start="2054.62" dur="4.38">against forensic data recovery because</text><text start="2056.929" dur="5.461">it may not be possible to overwrite and</text><text start="2059" dur="5.88">properly erase the reserved cells of</text><text start="2062.39" dur="3.2">memory that have been taken out of</text><text start="2064.88" dur="3.75">service</text><text start="2065.59" dur="4.99">thus a used solid-state drive should be</text><text start="2068.63" dur="4.2">physically destroyed instead of</text><text start="2070.58" dur="4.44">attempting to resell it which can be an</text><text start="2072.83" dur="4.73">issue because a solid-state drive has a</text><text start="2075.02" dur="5.37">higher upfront cost</text><text start="2077.56" dur="5.65">so in summary disks are attached to the</text><text start="2080.39" dur="6.84">system via some kind of bus the newer</text><text start="2083.21" dur="5.88">bus styles are SATA and SAS these 
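The erase-before-program behavior walked through in the flash example above can be simulated in a few lines of Python (my own sketch, not from the lecture; the bit patterns are the ones used in the example):

```python
# Sketch: NAND programming can only flip bits from 1 to 0, so turning
# any 0 back into a 1 forces a block erase first.

def program(cell_bits, new_bits):
    """Program without erasing: each bit may only stay put or drop to 0."""
    for old, new in zip(cell_bits, new_bits):
        if old == 0 and new == 1:
            raise ValueError("must erase the block first")
    return list(new_bits)

def erase(cell_bits):
    return [1] * len(cell_bits)   # blank flash reads as all 1s

block = erase([0] * 8)
block = program(block, [1, 0, 0, 1, 0, 1, 0, 0])      # first write is fine

try:
    block = program(block, [1, 0, 1, 0, 0, 0, 1, 1])  # needs a 0-to-1 flip
except ValueError:
    block = erase(block)                              # erase-before-write
    block = program(block, [1, 0, 1, 0, 0, 0, 1, 1])

print(block)   # [1, 0, 1, 0, 0, 0, 1, 1]
```

Waiting on that extra erase step for every rewrite is exactly the write amplification the lecture describes, and pre-erasing freed blocks (via TRIM) is the mitigation.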
modern</text><text start="2087.23" dur="4.47">disks are addressed using logical block</text><text start="2089.09" dur="5.07">addressing they&amp;#39;re partitioned either</text><text start="2091.7" dur="6.03">with DOS partition tables or GUID</text><text start="2094.16" dur="6.06">partition tables GPT use will increase</text><text start="2097.73" dur="4.44">as the size of disks becomes larger</text><text start="2100.22" dur="4.86">owing to the limits of the DOS partition</text><text start="2102.17" dur="5.04">table and solid state drives offer</text><text start="2105.08" dur="5.4">higher performance at a higher initial</text><text start="2107.21" dur="4.08">cost subject to the requirement of wear</text><text start="2110.48" dur="4.11">leveling</text><text start="2111.29" dur="6.23">and subject to not being able to be</text><text start="2114.59" dur="5.91">safely resold due to the inability to</text><text start="2117.52" dur="5.1">forensically secure the data on the</text><text start="2120.5" dur="2.12">drive</text><text start="2126.859" dur="5.97">in this lecture I&amp;#39;ll be discussing</text><text start="2128.96" dur="6.659">disk scheduling I&amp;#39;ll be introducing the</text><text start="2132.829" dur="4.53">purpose of disk scheduling talking about</text><text start="2135.619" dur="4.381">some classical and historical disk</text><text start="2137.359" dur="5.041">scheduling algorithms talking about</text><text start="2140" dur="4.2">native command queuing and disk</text><text start="2142.4" dur="3.57">schedulers that are currently in use in</text><text start="2144.2" dur="3.99">the Linux kernel and then I&amp;#39;ll talk a</text><text start="2145.97" dur="4.349">little bit about how i/o requests can be</text><text start="2148.19" dur="5.55">efficiently scheduled on solid state</text><text start="2150.319" dur="5.881">drives</text><text start="2153.74" dur="5.4">disk scheduling serves two purposes</text><text start="2156.2" dur="6.57">the 
first of these is to arbitrate disk</text><text start="2159.14" dur="6.15">access among different programs this</text><text start="2162.77" dur="5.25">ensures that competing programs have</text><text start="2165.29" dur="5.01">access to disk resources and that a</text><text start="2168.02" dur="4.5">single program cannot monopolize the</text><text start="2170.3" dur="4.11">disk resources in such a way as to</text><text start="2172.52" dur="5.61">prevent other programs from accessing</text><text start="2174.41" dur="5.939">the disk with mechanical hard drives</text><text start="2178.13" dur="3.75">scheduling algorithms historically have</text><text start="2180.349" dur="3.96">also attempted to improve disk</text><text start="2181.88" dur="4.89">performance by reducing the number of</text><text start="2184.309" dur="4.231">seeks required reducing the</text><text start="2186.77" dur="4.349">number of times the drive head needs to</text><text start="2188.54" dur="5.91">be moved if the drive head has to be</text><text start="2191.119" dur="5.881">moved too many times a lot of throughput</text><text start="2194.45" dur="7.49">can be lost from the disk because we&amp;#39;re</text><text start="2197" dur="8.549">waiting on all the seek times to occur</text><text start="2201.94" dur="5.859">the simplest scheduling algorithm would</text><text start="2205.549" dur="4.741">be the first-come first-serve algorithm</text><text start="2207.799" dur="5.731">which is implemented in Linux as the</text><text start="2210.29" dur="5.34">noop scheduler this algorithm is extremely</text><text start="2213.53" dur="5.279">straightforward it simply consists of a</text><text start="2215.63" dur="6.09">FIFO queue into which new requests are</text><text start="2218.809" dur="7.53">added the requests are removed from the</text><text start="2221.72" dur="6.629">queue in order one by one and these</text><text start="2226.339" dur="5.131">requests are sent to the disk for</text><text 
start="2228.349" dur="4.441">processing now there&amp;#39;s no reordering of</text><text start="2231.47" dur="4.07">the queue this is a first-come</text><text start="2232.79" dur="5.25">first-serve ordering so back-to-back</text><text start="2235.54" dur="5.68">requests for different parts of the disk</text><text start="2238.04" dur="5.21">may cause the drive head to move</text><text start="2241.22" dur="6.109">back and forth across the platters</text><text start="2243.25" dur="6.13">wasting quite a bit of time with seeks</text><text start="2247.329" dur="5.74">historically several attempts have been</text><text start="2249.38" dur="6.81">made to try to improve this behavior one</text><text start="2253.069" dur="6.061">example would be the scan algorithm or</text><text start="2256.19" dur="5.669">the elevator algorithm and in this</text><text start="2259.13" dur="3.22">algorithm the drive head only moves in</text><text start="2261.859" dur="3.461">one</text><text start="2262.35" dur="5.94">direction it serves all the requests in</text><text start="2265.32" dur="5.34">that direction before moving back in the</text><text start="2268.29" dur="4.89">other direction this is called the</text><text start="2270.66" dur="5.04">elevator algorithm because it&amp;#39;s modeled</text><text start="2273.18" dur="4.47">after how an elevator works in a</text><text start="2275.7" dur="5.36">building the elevator leaves the ground</text><text start="2277.65" dur="7.38">floor moves to the highest floor</text><text start="2281.06" dur="6.31">stopping along the way to add passengers</text><text start="2285.03" dur="5">traveling up remove passengers at</text><text start="2287.37" dur="5.16">whichever floor they wish to stop on and</text><text start="2290.03" dur="4.06">then once the elevator reaches the</text><text start="2292.53" dur="4.67">highest</text><text start="2294.09" dur="7.14">floor turns around and comes back down</text><text 
start="2297.2" dur="6.85">same process here in this example with</text><text start="2301.23" dur="5.94">the scan algorithm assuming that the</text><text start="2304.05" dur="6.39">head starts at sector 1 and this request</text><text start="2307.17" dur="7.14">for sector 50 comes in while sector 61</text><text start="2310.44" dur="8.22">is being processed the algorithm is</text><text start="2314.31" dur="11.46">going to process the requests in order 3</text><text start="2318.66" dur="11.1">12 32 40 42 61 180 497 and since that</text><text start="2325.77" dur="7.23">request for sector 50 came in while 61</text><text start="2329.76" dur="5.46">was being processed that request for</text><text start="2333" dur="4.71">sector 50 is going to have to wait until</text><text start="2335.22" dur="5.97">the head changes direction and returns</text><text start="2337.71" dur="5.43">to sector 50 now there are a few</text><text start="2341.19" dur="3.54">optimizations of the simple scan</text><text start="2343.14" dur="3.81">algorithm the original algorithm</text><text start="2344.73" dur="3.81">proposed that the head would move all</text><text start="2346.95" dur="3.57">the way from the beginning of the disk</text><text start="2348.54" dur="4.26">to the end of the disk and then all the</text><text start="2350.52" dur="4.53">way back to the beginning the look</text><text start="2352.8" dur="4.92">algorithm improves upon this behavior by</text><text start="2355.05" dur="5.58">moving the head only as far as the</text><text start="2357.72" dur="6.42">highest numbered request before changing</text><text start="2360.63" dur="5.46">directions and moving it back down the</text><text start="2364.14" dur="4.92">circular versions of the algorithm C-SCAN</text><text start="2366.09" dur="5.43">and C-LOOK only serve requests</text><text start="2369.06" dur="4.89">moving in one direction so for example</text><text start="2371.52" dur="4.62">with circular scan the head would 
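The one-direction-at-a-time sweep described here can be sketched in a few lines of Python (my own illustration, not lecture code; it orders a pending queue the way a LOOK-style elevator pass would, assuming the head sweeps upward first):

```python
# Sketch of the elevator (SCAN/LOOK) ordering: requests at or above the
# head position are served in ascending order on the upward sweep, the
# rest in descending order on the way back down.

def look_order(head, requests):
    up = sorted(r for r in requests if r >= head)
    down = sorted(set(requests) - set(up), reverse=True)
    return up + down

# With the head at sector 1, every pending request is served on the
# upward sweep; a request for 50 arriving mid-sweep behind the head
# would have to wait for the return pass.
print(look_order(1, [3, 12, 32, 40, 42, 61, 180, 497]))
# [3, 12, 32, 40, 42, 61, 180, 497]
```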
start</text><text start="2373.95" dur="3.32">at the first sector move all the way to</text><text start="2376.14" dur="4.17">the end of the disk</text><text start="2377.27" dur="4.27">servicing requests and then come all the</text><text start="2380.31" dur="3.99">way back down to the first sector</text><text start="2381.54" dur="5.94">without servicing any requests and start</text><text start="2384.3" dur="5.19">the process over again these are</text><text start="2387.48" dur="4.38">historical algorithms in the sense that</text><text start="2389.49" dur="5.52">with modern drives we have LBA or</text><text start="2391.86" dur="3.82">logical block addressing and so we don&amp;#39;t</text><text start="2395.01" dur="2.77">actually</text><text start="2395.68" dur="4.77">know where the disk is placing data</text><text start="2397.78" dur="5.37">physically this type of algorithm was</text><text start="2400.45" dur="5.34">used historically with cylinder head</text><text start="2403.15" dur="4.74">sector addressing where we knew the</text><text start="2405.79" dur="4.049">physical properties of the disk and the</text><text start="2407.89" dur="4.979">idea here was to reduce average seek</text><text start="2409.839" dur="5.041">time at no time did we know where the</text><text start="2412.869" dur="4.621">platter was that was up to the disk to</text><text start="2414.88" dur="6.77">figure out so there was no way to reduce</text><text start="2417.49" dur="6.15">rotational delay only minimize seek time</text><text start="2421.65" dur="3.909">another algorithm that attempts to</text><text start="2423.64" dur="5.939">minimize seek time is the shortest seek</text><text start="2425.559" dur="6.661">time first algorithm or SSTF this</text><text start="2429.579" dur="5.76">algorithm actually orders requests by</text><text start="2432.22" dur="6.089">sector location so when the request for</text><text start="2435.339" dur="7.381">sector 50 comes in it&amp;#39;s kept in an</text><text 
start="2438.309" dur="7.02">ordered queue a priority queue and the</text><text start="2442.72" dur="4.649">next request to be served by the disk</text><text start="2445.329" dur="6.121">these requests are still sent out one at</text><text start="2447.369" dur="6.301">a time is chosen by looking at whatever</text><text start="2451.45" dur="5.97">request is closest to the current</text><text start="2453.67" dur="6.06">disk position historically this could</text><text start="2457.42" dur="5.399">result in starvation because if a bunch</text><text start="2459.73" dur="5.76">of requests come in for one piece of the</text><text start="2462.819" dur="4.831">disk requests for the remainder of the</text><text start="2465.49" dur="5.79">disk might not be processed for lengthy</text><text start="2467.65" dur="5.85">periods of time furthermore once again</text><text start="2471.28" dur="4.17">with logical block addressing the</text><text start="2473.5" dur="3.809">operating system doesn&amp;#39;t really know</text><text start="2475.45" dur="3.99">where the sectors are laid out on disk</text><text start="2477.309" dur="5.79">and so this algorithm doesn&amp;#39;t really</text><text start="2479.44" dur="5.73">work with modern hard drives also with</text><text start="2483.099" dur="4.801">solid state drives this algorithm</text><text start="2485.17" dur="4.62">assumes there&amp;#39;s a disk head which in the</text><text start="2487.9" dur="6.33">case of a non-mechanical drive there is</text><text start="2489.79" dur="6.18">not in the Linux kernel an approximation</text><text start="2494.23" dur="3.21">to shortest seek time first was</text><text start="2495.97" dur="4.56">implemented with the anticipatory</text><text start="2497.44" dur="5.639">scheduler this was the default scheduler</text><text start="2500.53" dur="6.269">from 2.6.0 through 2.6.17</text><text start="2503.079" dur="5.301">it was removed in 2.6.33</text><text start="2506.799" dur="4.32">because 
it&amp;#39;s obsolete</text><text start="2508.38" dur="4.84">the idea behind the anticipatory</text><text start="2511.119" dur="6.301">scheduler was to approximate shortest</text><text start="2513.22" dur="8.52">seek time first by ordering only the</text><text start="2517.42" dur="7.62">read requests into an ordered queue into</text><text start="2521.74" dur="5.04">a priority queue if the next read</text><text start="2525.04" dur="3.63">request was close to the current head</text><text start="2526.78" dur="3.43">position that request would be</text><text start="2528.67" dur="3.86">dispatched immediately</text><text start="2530.21" dur="3.97">otherwise the scheduler would actually</text><text start="2532.53" dur="3.72">wait a few milliseconds to see if</text><text start="2534.18" dur="2.88">another request arrives for a nearby</text><text start="2536.25" dur="3.51">location</text><text start="2537.06" dur="5.4">and starvation was avoided in this</text><text start="2539.76" dur="5.28">algorithm by placing expiration times on</text><text start="2542.46" dur="5.01">each request and adding preemption so</text><text start="2545.04" dur="4.31">that if a request was waiting too long</text><text start="2547.47" dur="5.13">it would go ahead and be serviced</text><text start="2549.35" dur="6.28">regardless of its location the idea here</text><text start="2552.6" dur="4.98">was to reduce overall seeking there was</text><text start="2555.63" dur="3.39">a separate queue for write requests</text><text start="2557.58" dur="3.54">because write requests could be</text><text start="2559.02" dur="4.71">performed asynchronously we did not have</text><text start="2561.12" dur="4.32">to wait on those requests to be</text><text start="2563.73" dur="4.89">completed before a process could</text><text start="2565.44" dur="5.52">continue doing useful work this</text><text start="2568.62" dur="5.88">algorithm was shown with low performance</text><text start="2570.96" dur="6.18">drives to improve 
performance on web</text><text start="2574.5" dur="4.56">server applications however it was shown</text><text start="2577.14" dur="4.11">to have poor performance for database</text><text start="2579.06" dur="4.59">loads where there were a lot of random</text><text start="2581.25" dur="4.77">reads and writes on the disk and with</text><text start="2583.65" dur="6.78">high performance disks this algorithm</text><text start="2586.02" dur="6.21">actually breaks down modern hard disks</text><text start="2590.43" dur="4.5">generally do qualify as high performance</text><text start="2592.23" dur="4.41">disks the reason being that they</text><text start="2594.93" dur="3.87">implement something called native</text><text start="2596.64" dur="5.28">command queuing this is a feature of</text><text start="2598.8" dur="4.97">newer SATA drives and basically native</text><text start="2601.92" dur="5.58">command queuing leaves scheduling of</text><text start="2603.77" dur="5.68">disk requests up to the disk itself the</text><text start="2607.5" dur="4.74">disk circuitry and firmware makes the</text><text start="2609.45" dur="5.94">decision about which requests to handle</text><text start="2612.24" dur="6.86">next to do this the disk has a built-in</text><text start="2615.39" dur="6.48">priority queue of about 32 entries and</text><text start="2619.1" dur="5.17">the disk is able to schedule its</text><text start="2621.87" dur="4.44">requests automatically taking into</text><text start="2624.27" dur="4.29">account both the seek time and the</text><text start="2626.31" dur="4.59">rotational delay since the disk knows</text><text start="2628.56" dur="4.26">the location of the platter this makes</text><text start="2630.9" dur="4.22">modern disks much more efficient and</text><text start="2632.82" dur="5.28">this works with logical block addressing</text><text start="2635.12" dur="5.08">the operating system&amp;#39;s role with this</text><text start="2638.1" dur="5.46">type of hard disk is 
really more</text><text start="2640.2" dur="4.89">arbitration of the disk resources among</text><text start="2643.56" dur="6.18">the different programs running on the</text><text start="2645.09" dur="6.39">system one modern Linux scheduler that</text><text start="2649.74" dur="4.29">can be used for such arbitration is</text><text start="2651.48" dur="4.38">called the deadline scheduler and in</text><text start="2654.03" dur="3.9">this scheduler the kernel maintains</text><text start="2655.86" dur="4.86">separate request queues for both read</text><text start="2657.93" dur="4.389">requests and write requests similar to</text><text start="2660.72" dur="3.759">the anticipatory scheduler</text><text start="2662.319" dur="4.74">reads are prioritized over writes</text><text start="2664.479" dur="5.22">because processes typically block or</text><text start="2667.059" dur="4.04">stop and wait while waiting to read</text><text start="2669.699" dur="4.56">something from the disk</text><text start="2671.099" dur="4.48">thus writes can be done later at some</text><text start="2674.259" dur="4.17">point when it&amp;#39;s convenient for the</text><text start="2675.579" dur="4.801">operating system the waiting time in</text><text start="2678.429" dur="4.02">each queue with the deadline scheduler</text><text start="2680.38" dur="6.089">is used to determine which request will be</text><text start="2682.449" dur="6.24">scheduled next a 500 millisecond</text><text start="2686.469" dur="3.81">time is the goal for read requests this</text><text start="2688.689" dur="4.92">is the time it would take to start the</text><text start="2690.279" dur="5.91">request with a 5 second goal to start a</text><text start="2693.609" dur="4.14">write request this scheduler may</text><text start="2696.189" dur="4.201">improve system responsiveness</text><text start="2697.749" dur="5.131">during periods of heavy disk i/o at the</text><text start="2700.39" dur="5.01">expense of data 
throughput since each</text><text start="2702.88" dur="5.219">request has a deadline and the longer</text><text start="2705.4" dur="4.74">a request has been waiting the sooner it will</text><text start="2708.099" dur="4.74">be scheduled this is especially useful</text><text start="2710.14" dur="4.559">for database workloads because there</text><text start="2712.839" dur="2.43">are many requests for different parts of</text><text start="2714.699" dur="4.22">the disk</text><text start="2715.269" dur="6.06">however for web servers and other</text><text start="2718.919" dur="4.42">services that try to access large</text><text start="2721.329" dur="3.36">quantities of data located in the same</text><text start="2723.339" dur="4.53">location on the disk</text><text start="2724.689" dur="6.54">this particular scheduler can actually</text><text start="2727.869" dur="5.07">reduce total throughput completely</text><text start="2731.229" dur="4.44">fair queuing is a somewhat different</text><text start="2732.939" dur="5.55">idea where instead of actually</text><text start="2735.669" dur="5.07">scheduling the i/o requests the</text><text start="2738.489" dur="5.22">completely fair queuing model schedules</text><text start="2740.739" dur="8.3">processes or running programs to have</text><text start="2743.709" dur="8.46">time slice access to each disk and</text><text start="2749.039" dur="7.15">essentially this CFQ scheduler gives</text><text start="2752.169" dur="6.481">each process an i/o time slice and that</text><text start="2756.189" dur="4.98">process can do as much i/o on the disk</text><text start="2758.65" dur="6.149">as it would like within that time slice</text><text start="2761.169" dur="5.58">when the time slice expires the CFQ</text><text start="2764.799" dur="4.73">scheduler moves on to the next process</text><text start="2766.749" dur="5.22">and gives it access time to the disk</text><text start="2769.529" dur="4.21">there is a little bit of similarity 
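The deadline idea just described can be sketched in a few lines. This is an illustrative toy, not the Linux implementation: each request gets a deadline of arrival time plus a 500 ms goal for reads or a 5 s goal for writes, and the request with the earliest deadline is dispatched first.

```python
import heapq

# Illustrative deadline-style selection (toy model, not kernel code):
# reads get a 500 ms goal, writes a 5 s goal, and the request whose
# deadline comes soonest is dispatched first.
READ_GOAL_MS = 500
WRITE_GOAL_MS = 5000

def make_queue(requests):
    # requests: list of (arrival_ms, kind, sector) tuples
    queue = []
    for arrival, kind, sector in requests:
        goal = READ_GOAL_MS if kind == "read" else WRITE_GOAL_MS
        heapq.heappush(queue, (arrival + goal, kind, sector))
    return queue

queue = make_queue([(0, "write", 10), (100, "read", 900)])
deadline, kind, sector = heapq.heappop(queue)
print(kind, sector)  # the read is dispatched first despite arriving later
```

Even though the write arrived 100 ms earlier, its 5 s goal gives it a later deadline than the read, so the read goes first, which is the prioritization the lecture describes.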
here</text><text start="2771.969" dur="4.411">to anticipatory scheduling since a</text><text start="2773.739" dur="4.8">process can send another request during</text><text start="2776.38" dur="3.899">its time slice and try to get that</text><text start="2778.539" dur="6.33">request in without having a lot of seek</text><text start="2780.279" dur="6.63">time however this algorithm can waste</text><text start="2784.869" dur="4.94">time if a process does not immediately</text><text start="2786.909" dur="5.82">turn around and send another request</text><text start="2789.809" dur="4.101">thus it can reduce the overall disk</text><text start="2792.729" dur="3.31">throughput</text><text start="2793.91" dur="3.75">since there could be idle times while</text><text start="2796.039" dur="3.921">waiting to see if another request will</text><text start="2797.66" dur="5.01">come in before the time slice expires</text><text start="2799.96" dur="8.95">this has been the default scheduler in</text><text start="2802.67" dur="8.31">Linux since 2.6.18 now for solid</text><text start="2808.91" dur="4.649">state disks we have to choose a</text><text start="2810.98" dur="4.5">scheduler that accounts for the fact</text><text start="2813.559" dur="4.891">that there&amp;#39;s no seek time to worry about</text><text start="2815.48" dur="5.639">on the disk the anticipatory scheduler</text><text start="2818.45" dur="6.06">the elevator algorithms shortest seek</text><text start="2821.119" dur="5.761">time first all of these algorithms can</text><text start="2824.51" dur="5.099">actually reduce performance on a</text><text start="2826.88" dur="5.52">solid-state drive because they make</text><text start="2829.609" dur="4.921">assumptions about minimizing head seek</text><text start="2832.4" dur="5.969">time and we have no heads to move with a</text><text start="2834.53" dur="6.21">solid-state disk completely fair queuing</text><text start="2838.369" dur="3.93">can also reduce disk 
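The CFQ-style time-slice arbitration described above can be sketched as a simple round-robin over per-process queues. This is a rough illustration under invented names and numbers, not the actual CFQ code.

```python
from collections import deque

# Rough sketch of CFQ-style arbitration (illustrative only): each
# process gets an i/o time slice and issues requests until it expires,
# then the scheduler rotates to the next process.
def cfq_dispatch(process_queues, slice_len):
    # process_queues: dict of process name -> deque of sector requests
    served = []
    rotation = deque(process_queues)
    while any(process_queues.values()):
        proc = rotation[0]
        rotation.rotate(-1)
        budget = slice_len  # requests allowed in this time slice
        while budget and process_queues[proc]:
            served.append((proc, process_queues[proc].popleft()))
            budget -= 1
    return served

queues = {"p1": deque([5, 6, 7]), "p2": deque([900])}
print(cfq_dispatch(queues, slice_len=2))
```

With a slice of two requests, p1 serves sectors 5 and 6, then p2 gets its turn for sector 900, then p1 finishes with sector 7: no process monopolizes the disk, but the rotation itself can cost throughput.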
performance because</text><text start="2840.74" dur="4.14">of idling at the end of a time slice</text><text start="2842.299" dur="4.5">that&amp;#39;s true for any disk so what do we</text><text start="2844.88" dur="3.03">do to maximize performance for a</text><text start="2846.799" dur="3.631">solid-state drive</text><text start="2847.91" dur="5.34">there are really two good choices for a</text><text start="2850.43" dur="5.28">solid-state drive there is the noop or</text><text start="2853.25" dur="5.579">FIFO scheduler this works well for</text><text start="2855.71" dur="5.25">general-purpose systems however when</text><text start="2858.829" dur="3.901">there are heavy i/o workloads and we</text><text start="2860.96" dur="4.8">want to maintain system responsiveness</text><text start="2862.73" dur="4.77">the deadline algorithm is useful since</text><text start="2865.76" dur="5.539">other processes will have more</text><text start="2867.5" dur="6.779">opportunities to access the disk in</text><text start="2871.299" dur="5.05">summary operating systems are</text><text start="2874.279" dur="5.101">arbitrating disk access among different</text><text start="2876.349" dur="3.99">processes to prevent one process from</text><text start="2879.38" dur="3.54">monopolizing</text><text start="2880.339" dur="6.661">the disk and preventing other processes</text><text start="2882.92" dur="6.119">from having access on older disks with</text><text start="2887" dur="4.23">cylinder-head sector addressing the</text><text start="2889.039" dur="4.681">operating system was also attempting to</text><text start="2891.23" dur="5.67">reduce seek time thus improving</text><text start="2893.72" dur="5.25">aggregate disk performance however newer</text><text start="2896.9" dur="5.01">SATA disks with native command queuing</text><text start="2898.97" dur="6.92">schedule themselves to reduce both seek</text><text start="2901.91" dur="6.959">time and rotational delay thus</text><text 
start="2905.89" dur="7.06">algorithms that attempt to minimize seek</text><text start="2908.869" dur="5.571">time are unnecessary furthermore with</text><text start="2912.95" dur="4.05">solid-state drives</text><text start="2914.44" dur="5.05">scheduling mechanisms that are based on</text><text start="2917" dur="3.69">old mechanical assumptions can actually</text><text start="2919.49" dur="4.2">reduce performance</text><text start="2920.69" dur="4.69">so for SSDs we&amp;#39;re really only interested</text><text start="2923.69" dur="3.75">in arbitration</text><text start="2925.38" dur="2.06"></text><text start="2930.349" dur="6.99">in this lecture I&amp;#39;m going to discuss</text><text start="2933.829" dur="5.9">file systems I&amp;#39;ll be providing an</text><text start="2937.339" dur="5.75">overview of the purpose of file systems</text><text start="2939.729" dur="5.56">discussing metadata that they store</text><text start="2943.089" dur="4.321">explaining how we create a file system</text><text start="2945.289" dur="4.92">through a process known as formatting</text><text start="2947.41" dur="5.069">talking about some issues of file systems</text><text start="2950.209" dur="4.86">including fragmentation and journaling</text><text start="2952.479" dur="4.8">briefly discussing some internal layouts</text><text start="2955.069" dur="4.65">used by different file systems and</text><text start="2957.279" dur="4.21">finally talking about mounting and</text><text start="2959.719" dur="6.48">unmounting file systems to make them</text><text start="2961.489" dur="6.691">available to users a file system is</text><text start="2966.199" dur="4.471">responsible for laying out data on a</text><text start="2968.18" dur="6.24">persistent storage device and ensuring</text><text start="2970.67" dur="5.669">that data can be retrieved reliably the</text><text start="2974.42" dur="4.98">file system is an abstraction of disk</text><text start="2976.339" dur="5.97">space it provides routines for 
querying</text><text start="2979.4" dur="5.73">opening and closing files and providing</text><text start="2982.309" dur="4.47">human readable names for files and some</text><text start="2985.13" dur="4.02">kind of organizational structure for</text><text start="2986.779" dur="5.43">files typically through directories or</text><text start="2989.15" dur="6.75">folders without a file system we would</text><text start="2992.209" dur="6.87">have to access the disk by physical location</text><text start="2995.9" dur="5.879">or by address and each program would</text><text start="2999.079" dur="8.341">have to reserve certain address ranges</text><text start="3001.779" dur="7.79">for its exclusive use file systems in</text><text start="3007.42" dur="5.49">addition to performing this abstraction</text><text start="3009.569" dur="6.13">also arbitrate disk space among</text><text start="3012.91" dur="5.429">different programs and different users</text><text start="3015.699" dur="4.441">of a computer system file permissions</text><text start="3018.339" dur="5.94">allow users to have a certain degree of</text><text start="3020.14" dur="7.02">privacy while file quotas ensure that</text><text start="3024.279" dur="5.22">one user does not monopolize the entire</text><text start="3027.16" dur="6.839">system by using up all the disk</text><text start="3029.499" dur="6.691">space file systems are also responsible</text><text start="3033.999" dur="4.59">for storing metadata or information</text><text start="3036.19" dur="6.75">about each file this includes a file</text><text start="3038.589" dur="6.21">name file size who owns the file what</text><text start="3042.94" dur="4.23">group that owner belongs to what</text><text start="3044.799" dur="4.47">permissions exist for each different</text><text start="3047.17" dur="4.74">category of users on the system to</text><text start="3049.269" dur="5.671">access that file as well as to provide</text><text start="3051.91" 
dur="5.22">certain time stamps on UNIX we have the</text><text start="3054.94" dur="4.36">inode creation time the file</text><text start="3057.13" dur="5.76">modification time</text><text start="3059.3" dur="6.57">optionally the last file access time</text><text start="3062.89" dur="5.56">metadata records also include internal</text><text start="3065.87" dur="5.28">information that is important for the</text><text start="3068.45" dur="5.73">file system itself such as pointers to</text><text start="3071.15" dur="5.67">the actual data on disk and a reference</text><text start="3074.18" dur="5.13">count for how many different names refer</text><text start="3076.82" dur="8.43">to the same file something called hard</text><text start="3079.31" dur="8.61">links we create a file system by taking</text><text start="3085.25" dur="5.87">an empty partition or a partition that</text><text start="3087.92" dur="5.61">we&amp;#39;re ready to reuse and formatting it</text><text start="3091.12" dur="4.93">formatting quite simply is the process</text><text start="3093.53" dur="5.88">of making a new file system on an</text><text start="3096.05" dur="5.46">existing partition formatting typically</text><text start="3099.41" dur="3.96">destroys the structure of any file</text><text start="3101.51" dur="4.08">system that was previously installed on</text><text start="3103.37" dur="6.27">the partition thus when you format a</text><text start="3105.59" dur="6.77">partition you lose access to its</text><text start="3109.64" dur="6.63">contents at least through standard tools</text><text start="3112.36" dur="7.45">however unless that partition</text><text start="3116.27" dur="5.22">is securely erased the</text><text start="3119.81" dur="3.48">contents that were formerly on the</text><text start="3121.49" dur="5.73">partition can still be recovered using</text><text start="3123.29" dur="5.82">forensic tools the only safe way to</text><text start="3127.22" 
dur="4.8">switch between file systems on a single</text><text start="3129.11" dur="7.17">partition is to back the data that&amp;#39;s on</text><text start="3132.02" dur="5.97">that partition up to another disk format</text><text start="3136.28" dur="4.5">the partition using whatever the new</text><text start="3137.99" dur="4.86">file system would be and then restore</text><text start="3140.78" dur="4.41">the data from the backup there is no</text><text start="3142.85" dur="6.57">safe way to change a file system type in</text><text start="3145.19" dur="8.25">place without losing data file</text><text start="3149.42" dur="8.58">systems do suffer from a few issues over</text><text start="3153.44" dur="6.78">time sections of a file in a file system</text><text start="3158" dur="4.83">can become non contiguous in other words</text><text start="3160.22" dur="5.16">a file gets split over different parts</text><text start="3162.83" dur="5.4">of the disk and in the process that also</text><text start="3165.38" dur="5.34">splits up the free space so that when</text><text start="3168.23" dur="4.29">new files need to be allocated they have</text><text start="3170.72" dur="5.1">to be split up to take advantage of</text><text start="3172.52" dur="5.46">smaller blocks of free space this is a</text><text start="3175.82" dur="5.21">situation called fragmentation it&amp;#39;s</text><text start="3177.98" dur="5.73">worse in some file systems than others</text><text start="3181.03" dur="5.11">fragmented file systems were a big problem</text><text start="3183.71" dur="4.65">with mechanical hard drives simply</text><text start="3186.14" dur="5.34">because a fragmented file requires a</text><text start="3188.36" dur="4.44">seek to move from the location of the</text><text start="3191.48" dur="3.12">first fragment to</text><text start="3192.8" dur="4.05">the location of the next fragment and</text><text start="3194.6" dur="5.67">possibly further seeks if there are more</text><text 
start="3196.85" dur="4.95">fragments not so much a problem on solid</text><text start="3200.27" dur="4.74">state drives however since there&amp;#39;s no</text><text start="3201.8" dur="5.31">seek time and file systems can get</text><text start="3205.01" dur="4.8">around this problem either with offline</text><text start="3207.11" dur="5.61">defragmentation tools that the system</text><text start="3209.81" dur="4.56">administrator can run manually or they</text><text start="3212.72" dur="5.3">can use fragmentation avoidance</text><text start="3214.37" dur="7.14">strategies or automatic on-the-fly</text><text start="3218.02" dur="6.76">defragmentation another issue that can</text><text start="3221.51" dur="5.51">occur with file systems is that a single</text><text start="3224.78" dur="5.22">high-level file system operation</text><text start="3227.02" dur="6.16">typically requires several low-level</text><text start="3230" dur="5.94">steps in order to complete if the</text><text start="3233.18" dur="4.92">computer should crash or power be lost</text><text start="3235.94" dur="4.77">in the middle of those steps being</text><text start="3238.1" dur="5.88">performed the file system could be left</text><text start="3240.71" dur="5.18">in an inconsistent state a solution to</text><text start="3243.98" dur="5.58">this problem is called journaling and</text><text start="3245.89" dur="5.61">the way this works is by recording all</text><text start="3249.56" dur="4.29">the steps that are to be taken in</text><text start="3251.5" dur="5.23">something called a journal a special</text><text start="3253.85" dur="6.12">part of disk space reserved for this</text><text start="3256.73" dur="5.28">particular information records all the</text><text start="3259.97" dur="6.48">steps that are going to be taken prior</text><text start="3262.01" dur="6.42">to taking the steps thus if the system</text><text start="3266.45" dur="4.92">should crash in the middle of a file</text><text 
start="3268.43" dur="5.1">system high level operation all that</text><text start="3271.37" dur="4.38">needs to occur is the journal simply</text><text start="3273.53" dur="4.8">needs to be replayed next time the file</text><text start="3275.75" dur="5.37">system is mounted and the steps can be</text><text start="3278.33" dur="7.47">carried out again and leave the file</text><text start="3281.12" dur="7.14">system in a consistent state internally</text><text start="3285.8" dur="4.83">file systems may use one of several</text><text start="3288.26" dur="5.91">different approaches for storing data on</text><text start="3290.63" dur="6.3">the disk a simple layout is called the</text><text start="3294.17" dur="4.71">file allocation table which simply has a</text><text start="3296.93" dur="4.65">single table in each partition to store</text><text start="3298.88" dur="5.7">metadata and the addresses of data</text><text start="3301.58" dur="6.99">segments for each file file allocation</text><text start="3304.58" dur="5.73">table type storage methods are limited</text><text start="3308.57" dur="4.83">only in terms of how big the file</text><text start="3310.31" dur="5.19">allocation table can be and these</text><text start="3313.4" dur="6.18">limitations specify among other things</text><text start="3315.5" dur="6.56">the maximum size a file can be and how</text><text start="3319.58" dur="4.53">many files can be on the system</text><text start="3322.06" dur="3.73">another approach is to use something</text><text start="3324.11" dur="4.59">called inodes which are data</text><text start="3325.79" dur="5.73">structures on UNIX systems that contain</text><text start="3328.7" dur="7.23">the metadata for a file including</text><text start="3331.52" dur="6.54">pointers to the actual data inodes do</text><text start="3335.93" dur="3.57">not store the file names however these</text><text start="3338.06" dur="4.56">are stored in a separate structure</text><text start="3339.5" 
dur="7.08">called a directory that maps file names</text><text start="3342.62" dur="7.29">to inodes the maximum number of files</text><text start="3346.58" dur="5.28">a single file system can hold is limited</text><text start="3349.91" dur="4.32">by the number of inodes created when</text><text start="3351.86" dur="4.44">the file system is formatted this can be</text><text start="3354.23" dur="4.5">a particular problem for file systems</text><text start="3356.3" dur="4.98">that wind up storing a really large</text><text start="3358.73" dur="4.68">number of very small files there could</text><text start="3361.28" dur="5.55">be plenty of space left on the file</text><text start="3363.41" dur="5.94">system on the partition however if the</text><text start="3366.83" dur="5.69">number of inodes is exhausted then no</text><text start="3369.35" dur="6.3">more files would be able to be created</text><text start="3372.52" dur="5.53">another approach to storing files and</text><text start="3375.65" dur="5.25">laying out data on a file system is</text><text start="3378.05" dur="5.18">through the use of extents extents</text><text start="3380.9" dur="4.98">support larger maximum file sizes</text><text start="3383.23" dur="4.63">because they&amp;#39;re designed to allow files</text><text start="3385.88" dur="4.83">to be composed of several non contiguous</text><text start="3387.86" dur="4.74">blocks of space much like the fragmentation</text><text start="3390.71" dur="5.37">that could occur in a file system that</text><text start="3392.6" dur="5.85">doesn&amp;#39;t use extents extent information</text><text start="3396.08" dur="4.5">is stored with the metadata in the file</text><text start="3398.45" dur="5.13">inode or some other type of data</text><text start="3400.58" dur="7.77">structure for file systems that support</text><text start="3403.58" dur="7.56">extents now regardless of how the file</text><text start="3408.35" dur="4.65">system lays out its data 
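The directory-to-inode mapping and the fixed inode pool described above can be sketched as a toy model. This is purely illustrative (a hypothetical class, not any real file system):

```python
# Toy sketch of the inode concepts above; real file systems are far
# more involved. A fixed pool of inodes is created at format time,
# and a directory simply maps file names to inode numbers.
class ToyFS:
    def __init__(self, inode_count):
        self.free_inodes = list(range(inode_count))  # fixed at format time
        self.inodes = {}      # inode number -> metadata record
        self.directory = {}   # file name -> inode number

    def create(self, name, size):
        if not self.free_inodes:
            # plenty of disk space may remain, but no file can be made
            raise OSError("out of inodes")
        ino = self.free_inodes.pop()
        self.inodes[ino] = {"size": size, "links": 1}
        self.directory[name] = ino
        return ino

fs = ToyFS(inode_count=2)
fs.create("a.txt", 10)
fs.create("b.txt", 20)
# a third create would raise OSError even if free space remained
```

Note that the file name lives only in the directory mapping, not in the inode record, which is why several names (hard links) can point at one inode.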
internally we</text><text start="3411.14" dur="5.01">need to make the file system available</text><text start="3413" dur="4.98">to users and programs on the computer to</text><text start="3416.15" dur="4.71">do this we perform a process called</text><text start="3417.98" dur="5.58">mounting mounting is the act of making a</text><text start="3420.86" dur="5.76">file system available to the users of</text><text start="3423.56" dur="5.55">the system the opposite process is</text><text start="3426.62" dur="6.18">called unmounting which is disconnecting</text><text start="3429.11" dur="6.9">a previously mounted file system UNIX</text><text start="3432.8" dur="5.21">systems mount file systems at mount</text><text start="3436.01" dur="4.77">points which are simply directories</text><text start="3438.01" dur="5.14">somewhere within the overall directory</text><text start="3440.78" dur="4.23">structure of the system so if I plug in</text><text start="3443.15" dur="4.77">a flash drive on a Linux machine for</text><text start="3445.01" dur="7.23">example that flash drive may be mounted</text><text start="3447.92" dur="6.96">at slash media slash my drive I can</text><text start="3452.24" dur="3.57">mount a large number of different file</text><text start="3454.88" dur="4.62">systems this way</text><text start="3455.81" dur="5.85">at the same time and I can make all of</text><text start="3459.5" dur="5.46">the different file systems appear to be</text><text start="3461.66" dur="5.46">part of one directory structure on the</text><text start="3464.96" dur="5.07">other hand Windows systems use Drive</text><text start="3467.12" dur="4.77">letters where the letter C is reserved</text><text start="3470.03" dur="4.44">for the system partition the one on</text><text start="3471.89" dur="6.33">which Windows is installed and a and B</text><text start="3474.47" dur="6.39">are reserved for floppy drives Windows</text><text start="3478.22" dur="3.66">supports a maximum of 26 file systems 
to</text><text start="3480.86" dur="2.49">be mounted at once</text><text start="3481.88" dur="3.6">simply because that&amp;#39;s the number of</text><text start="3483.35" dur="4.85">letters that are available to assign</text><text start="3485.48" dur="6.12">to drives and partitions</text><text start="3488.2" dur="6.1">so in summary file systems organize</text><text start="3491.6" dur="4.32">data store metadata provide an</text><text start="3494.3" dur="4.14">abstraction of the underlying storage</text><text start="3495.92" dur="5.61">medium and arbitrate access to the</text><text start="3498.44" dur="6.36">storage space we create file systems</text><text start="3501.53" dur="5.07">through a process known as formatting we</text><text start="3504.8" dur="3.95">can make file systems more robust</text><text start="3506.6" dur="6.27">against data loss during a power failure</text><text start="3508.75" dur="5.86">through the use of journaling internally</text><text start="3512.87" dur="3.6">file systems use various different</text><text start="3514.61" dur="4.14">mechanisms for laying the data out on</text><text start="3516.47" dur="4.65">disk but no matter how they work</text><text start="3518.75" dur="7.79">internally we can make them available to</text><text start="3521.12" dur="7.92">a running system by mounting them in</text><text start="3526.54" dur="4.84">this lecture I&amp;#39;m going to discuss</text><text start="3529.04" dur="5.22">features of the central processing unit</text><text start="3531.38" dur="5.7">or CPU that are useful for supporting</text><text start="3534.26" dur="6.18">multiple applications sharing a computer</text><text start="3537.08" dur="5.34">system simultaneously in particular I&amp;#39;ll</text><text start="3540.44" dur="3.69">introduce multi programming and discuss</text><text start="3542.42" dur="4.56">the hardware requirements to support</text><text start="3544.13" dur="6.41">multi programming I&amp;#39;ll discuss 
CPU</text><text start="3546.98" dur="6.63">privilege modes x86 protection rings</text><text start="3550.54" dur="6.82">mode switches and briefly introduce</text><text start="3553.61" dur="6.57">interrupts the first concept to</text><text start="3557.36" dur="4.74">introduce is multi programming and multi</text><text start="3560.18" dur="4.08">programming is simply the idea that we</text><text start="3562.1" dur="4.08">can run multiple processes or multiple</text><text start="3564.26" dur="5.76">instances of potentially several</text><text start="3566.18" dur="6.36">programs at the same time and we can do</text><text start="3570.02" dur="5.25">this by having the CPU switch quickly</text><text start="3572.54" dur="5.58">among the different processes enabling</text><text start="3575.27" dur="6.9">all of them to make forward progress per</text><text start="3578.12" dur="6.45">unit of human perceivable time the CPU</text><text start="3582.17" dur="4.53">will switch quickly enough to provide</text><text start="3584.57" dur="3.94">the illusion that all the processes are</text><text start="3586.7" dur="4.38">running at the same time</text><text start="3588.51" dur="5.91">even if we only have one processor core</text><text start="3591.08" dur="5.02">the vast majority of modern computing</text><text start="3594.42" dur="4.38">systems with the exception of some</text><text start="3596.1" dur="6.45">special-purpose systems are multi</text><text start="3598.8" dur="6.72">programming systems some of the old</text><text start="3602.55" dur="5.309">computer systems of the day were batch</text><text start="3605.52" dur="6.45">systems that only ran one</text><text start="3607.859" dur="6.361">application at a time now in order to</text><text start="3611.97" dur="4.71">support multi programming we have to</text><text start="3614.22" dur="5.85">have certain features of our computer</text><text start="3616.68" dur="5.4">hardware first of these is an interrupt</text><text 
start="3620.07" dur="4.17">mechanism for enabling preemption of</text><text start="3622.08" dur="4.44">running processes whenever some kind of</text><text start="3624.24" dur="5.46">event occurs we have to have a way of</text><text start="3626.52" dur="5.88">stopping a process handling an event and</text><text start="3629.7" dur="4.59">then restarting the process we need to</text><text start="3632.4" dur="4.29">have a clock so that we know how long a</text><text start="3634.29" dur="4.95">process has been running and we need to</text><text start="3636.69" dur="4.41">have CPU protection levels to restrict</text><text start="3639.24" dur="4.17">access to certain instructions to</text><text start="3641.1" dur="4.35">prevent processes from hijacking the</text><text start="3643.41" dur="4.62">system or just trying to bypass the</text><text start="3645.45" dur="5.01">operating system altogether we also need</text><text start="3648.03" dur="4.2">to restrict access to memory in order to</text><text start="3650.46" dur="4.14">prevent reading and writing to memory</text><text start="3652.23" dur="4.28">that the particular process does not own</text><text start="3654.6" dur="5.64">that belongs to somebody else</text><text start="3656.51" dur="7.109">two CPU protection levels are sufficient a</text><text start="3660.24" dur="5.94">protected mode and a privileged mode</text><text start="3663.619" dur="5.291">these modes are also called the</text><text start="3666.18" dur="6.96">supervisor mode or kernel mode and user</text><text start="3668.91" dur="7.41">mode in kernel mode all instructions on</text><text start="3673.14" dur="7.41">the CPU are enabled and the kernel can</text><text start="3676.32" dur="6.57">access all memory on the system in user mode</text><text start="3680.55" dur="5.04">the CPU disables all the privileged</text><text start="3682.89" dur="5.85">instructions and restricts most direct</text><text start="3685.59" dur="5.16">memory operations thus a user 
program</text><text start="3688.74" dur="4.71">must make a system call to the operating</text><text start="3690.75" dur="6">system to request memory or perform</text><text start="3693.45" dur="5.909">other resource allocation tasks in this</text><text start="3696.75" dur="5.85">way the user processes are effectively</text><text start="3699.359" dur="7.201">sandboxed both from the system and from</text><text start="3702.6" dur="7.62">each other on Intel-based systems</text><text start="3706.56" dur="7.02">x86 and x86-64 systems there are</text><text start="3710.22" dur="5.79">actually four modes available these are</text><text start="3713.58" dur="5.19">implemented by what are known as x86</text><text start="3716.01" dur="6.18">protection rings which consist of four</text><text start="3718.77" dur="5.34">privilege levels numbered 0 through 3</text><text start="3722.19" dur="5.55">ring zero has the greatest number of</text><text start="3724.11" dur="6.42">privileges code executing in ring zero</text><text start="3727.74" dur="6.75">can execute any instruction the CPU</text><text start="3730.53" dur="7.02">provides and can access all memory ring</text><text start="3734.49" dur="5.4">3 has the fewest privileges all the</text><text start="3737.55" dur="4.2">instructions are restricted to the set</text><text start="3739.89" dur="4.56">of instructions that are relatively safe</text><text start="3741.75" dur="6.66">in practice most operating systems</text><text start="3744.45" dur="7.17">actually only use rings 0 and 3 OS/2 and</text><text start="3748.41" dur="8.4">Xen are the notable exceptions that make</text><text start="3751.62" dur="8.01">use of ring 1 newer systems with</text><text start="3756.81" dur="6.15">virtualization extensions either the</text><text start="3759.63" dur="6.45">Intel VT-x extensions or the AMD-V</text><text start="3762.96" dur="5.73">extensions add an extra privilege level</text><text start="3766.08" dur="5.43">below ring 0 this is 
colloquially</text><text start="3768.69" dur="6.87">sometimes referred to as ring negative 1</text><text start="3771.51" dur="6.15">and this mode enables instructions that</text><text start="3775.56" dur="4.29">allow multiple operating systems to</text><text start="3777.66" dur="6.21">share the same processor these</text><text start="3779.85" dur="5.91">instructions help the system support hosting</text><text start="3783.87" dur="5.61">multiple virtual machines at the same</text><text start="3785.76" dur="6.21">time now regardless of the number of</text><text start="3789.48" dur="4.8">modes available whenever we wish to</text><text start="3791.97" dur="3.66">change modes for whatever reason we have</text><text start="3794.28" dur="4.11">to perform something called a mode</text><text start="3795.63" dur="6.3">switch and that occurs whenever the CPU</text><text start="3798.39" dur="7.83">switches into user mode kernel mode or</text><text start="3801.93" dur="7.47">hypervisor mode or whenever an x86 CPU</text><text start="3806.22" dur="6.69">changes which protection ring is</text><text start="3809.4" dur="5.49">presently effective mode switches have</text><text start="3812.91" dur="3.84">the potential to be slow operations</text><text start="3814.89" dur="4.92">compared to other machine instructions</text><text start="3816.75" dur="5.07">depending upon the hardware a notable</text><text start="3819.81" dur="5.25">example was the first generation of</text><text start="3821.82" dur="4.95">Intel Core 2 series processors in which</text><text start="3825.06" dur="7.61">the mode switches into and out of</text><text start="3826.77" dur="8.49">hypervisor mode were quite slow one</text><text start="3832.67" dur="4.66">situation in which a mode switch might</text><text start="3835.26" dur="5.19">occur is when something called an</text><text start="3837.33" dur="5.49">interrupt happens and an interrupt is</text><text start="3840.45" dur="4.77">simply a situation in which 
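The cost of crossing the user/kernel boundary can be glimpsed from user space by timing a system-call-backed function against a plain function call. This sketch is an illustration added to the transcript, not the lecturer's material; the loop count is arbitrary, the numbers include interpreter overhead, and on some platforms a given call may be cached by the C library and never actually enter the kernel.

```python
import os
import time

def time_calls(fn, n=200_000):
    """Time n repeated calls to fn and return elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

# A plain Python function never leaves user mode
plain = time_calls(lambda: 0)

# os.getpid is backed by a system call, which typically requires a
# mode switch into the kernel and back on each invocation
syscall = time_calls(os.getpid)

print(f"plain call: {plain:.3f}s, getpid: {syscall:.3f}s")
```

The absolute numbers vary widely by hardware and OS, which is the lecture's point: mode switches are not free, so systems try to avoid unnecessary ones.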
the</text><text start="3842.82" dur="5.31">currently executing code is interrupted</text><text start="3845.22" dur="5.91">so that an event can be handled by the</text><text start="3848.13" dur="5.04">operating system interrupts can fall</text><text start="3851.13" dur="4.439">into two categories we can have involuntary</text><text start="3853.17" dur="5.25">interrupts which are external</text><text start="3855.569" dur="4.8">to running processes these consist of</text><text start="3858.42" dur="3.389">things such as IO interrupts which are</text><text start="3860.369" dur="4.051">generated every time you press a key</text><text start="3861.809" dur="6.72">on the keyboard or perform any other IO</text><text start="3864.42" dur="6.539">task clock interrupts which are timer</text><text start="3868.529" dur="5.611">mechanisms that can be scheduled to go</text><text start="3870.959" dur="4.981">off at a particular time and page faults</text><text start="3874.14" dur="5.069">which have to do with the virtual memory</text><text start="3875.94" dur="4.049">subsystem interrupts can also be</text><text start="3879.209" dur="3.12">voluntary</text><text start="3879.989" dur="4.98">in other words created by a process</text><text start="3882.329" dur="5.93">that&amp;#39;s running system calls and</text><text start="3884.969" dur="6.57">exceptions such as a segmentation fault or divide-by-zero</text><text start="3888.259" dur="7.24">actions can result in interrupts as well</text><text start="3891.539" dur="6.21">and the CPU provides hardware mechanisms</text><text start="3895.499" dur="5.27">for detecting when an interrupt is</text><text start="3897.749" dur="6.06">occurring and handling the interrupt</text><text start="3900.769" dur="4.84">so in summary multiprogramming systems</text><text start="3903.809" dur="4.41">allow multiple applications to run</text><text start="3905.609" dur="4.62">simultaneously implementing</text><text start="3908.219" dur="4.74">multiprogramming requires 
support</text><text start="3910.229" dur="4.86">from the hardware in particular we need</text><text start="3912.959" dur="3.421">CPU privileges we need a clock and we</text><text start="3915.089" dur="4.74">need some kind of interrupt handling</text><text start="3916.38" dur="4.949">mechanism the CPUs used in</text><text start="3919.829" dur="4.41">multiprogramming systems need to have at</text><text start="3921.329" dur="5.22">least two privilege modes Intel x86</text><text start="3924.239" dur="5.401">systems support four or five modes</text><text start="3926.549" dur="5.28">depending on the processor mode switches</text><text start="3929.64" dur="3.689">can be expensive in terms of performance</text><text start="3931.829" dur="4.591">so we don&amp;#39;t want to do them more than</text><text start="3933.329" dur="4.98">necessary and interrupts enable hardware</text><text start="3936.42" dur="4.26">events to be delivered to applications</text><text start="3938.309" dur="4.5">and they allow applications to yield</text><text start="3940.68" dur="7.679">control of the system while waiting for</text><text start="3942.809" dur="6.841">events or waiting for services in this lecture</text><text start="3948.359" dur="4.2">I&amp;#39;m going to discuss kernel</text><text start="3949.65" dur="5.76">architectures I&amp;#39;ll begin by introducing</text><text start="3952.559" dur="5.15">the functions of the kernel explain the</text><text start="3955.41" dur="5.339">separation between mechanism and policy</text><text start="3957.709" dur="5.38">talk about some seminal early kernels in</text><text start="3960.749" dur="3.96">the history of computing and then</text><text start="3963.089" dur="5.91">introduce the differences between</text><text start="3964.709" dur="6.24">monolithic kernels and microkernels now</text><text start="3968.999" dur="3.81">the kernel provides two functions the same</text><text start="3970.949" dur="4.701">two functions as the operating system</text><text 
start="3972.809" dur="5.64">it provides abstraction and arbitration</text><text start="3975.65" dur="5.349">the kernel provides abstraction in the</text><text start="3978.449" dur="5.701">sense that it provides a mechanism for</text><text start="3980.999" dur="8.03">programs to access hardware a way to</text><text start="3984.15" dur="6.829">schedule multiple programs on the system</text><text start="3989.029" dur="5.16">and it provides some method for</text><text start="3990.979" dur="5.13">inter-process communication or IPC a way</text><text start="3994.189" dur="4.14">for programs to send messages to each</text><text start="3996.109" dur="6.69">other or send messages to hardware</text><text start="3998.329" dur="7.5">devices or out to the network kernels</text><text start="4002.799" dur="5.37">also provide arbitration mechanisms they</text><text start="4005.829" dur="4.29">ensure that a single process or running</text><text start="4008.169" dur="4.981">program can&amp;#39;t take over the entire</text><text start="4010.119" dur="4.89">system they enforce any kind of security</text><text start="4013.15" dur="4.139">requirements such as access privileges</text><text start="4015.009" dur="4.74">that might be in place on the system and</text><text start="4017.289" dur="4.95">they minimize the risk of a total system</text><text start="4019.749" dur="6.51">crash from a buggy application or device</text><text start="4022.239" dur="6.69">driver it&amp;#39;s important to distinguish</text><text start="4026.259" dur="4.41">between mechanism and policy when</text><text start="4028.929" dur="4.92">discussing the internal components of an</text><text start="4030.669" dur="5.731">operating system the mechanism put</text><text start="4033.849" dur="5.36">simply is the set of software methods that</text><text start="4036.4" dur="5.73">enable operations to be carried out an</text><text start="4039.209" dur="5.61">example of a mechanism would be code</text><text start="4042.13" 
dur="5.279">implemented inside a device driver that</text><text start="4044.819" dur="5.38">sends a message to a device that causes</text><text start="4047.409" dur="5.04">that device to blink a light enable a</text><text start="4050.199" dur="5.85">camera or perform some other hardware</text><text start="4052.449" dur="5.28">level operation policy on the other hand</text><text start="4056.049" dur="3.96">is a set of software methods that</text><text start="4057.729" dur="5.79">enforce permissions access rules or</text><text start="4060.009" dur="6.45">other limits against applications so a</text><text start="4063.519" dur="5.73">policy for example would be something</text><text start="4066.459" dur="5.191">that said that only users who met</text><text start="4069.249" dur="4.71">certain criteria could send a message</text><text start="4071.65" dur="4.769">out to a hardware device to blink a</text><text start="4073.959" dur="5.34">light or enable a camera or perform some</text><text start="4076.419" dur="5.4">other hardware function it&amp;#39;s a generally</text><text start="4079.299" dur="4.081">accepted principle of good design that</text><text start="4081.819" dur="3.48">mechanism and policy should be</text><text start="4083.38" dur="6.449">separated as much as possible</text><text start="4085.299" dur="7.32">an early kernel that separated mechanism</text><text start="4089.829" dur="6.87">and policy quite well was the</text><text start="4092.619" dur="7.111">Regnecentralen RC 4000 monitor kernel this</text><text start="4096.699" dur="6.031">kernel was developed in 1969 primarily</text><text start="4099.73" dur="5.579">by Per Brinch Hansen for the</text><text start="4102.73" dur="4.589">Regnecentralen RC 4000 computer system</text><text start="4105.309" dur="5.881">this was a computer system that was</text><text start="4107.319" dur="5.911">developed in Denmark and the central</text><text start="4111.19" dur="4.919">component of the system was a 
small</text><text start="4113.23" dur="5.969">nucleus as Brinch Hansen called it named</text><text start="4116.109" dur="5.311">monitor which allowed programs to send</text><text start="4119.199" dur="3.471">messages to each other and allowed</text><text start="4121.42" dur="4.45">programs to send and</text><text start="4122.67" dur="6.6">receive buffers which were essentially</text><text start="4125.87" dur="5.5">types of messages for hardware to and</text><text start="4129.27" dur="4.469">from different hardware devices in</text><text start="4131.37" dur="6.78">particular at that time they had a card</text><text start="4133.739" dur="9.06">reader a tape reader and a printing</text><text start="4138.15" dur="6.6">style output device other kernels with</text><text start="4142.799" dur="3.81">different scheduling mechanisms and</text><text start="4144.75" dur="4.98">other capabilities could be run under</text><text start="4146.609" dur="5.281">monitor in those days it was not clear</text><text start="4149.73" dur="4.77">that multiprogramming was really a</text><text start="4151.89" dur="4.77">desirable feature for computing thus</text><text start="4154.5" dur="5.219">someone could write a multiprogramming</text><text start="4156.66" dur="6.05">capable kernel and actually run that as</text><text start="4159.719" dur="5.191">a sub kernel under the monitor system</text><text start="4162.71" dur="4.299">importantly this was also the first</text><text start="4164.91" dur="4.5">system that allowed sub kernels and</text><text start="4167.009" dur="4.591">systems level software to be written in</text><text start="4169.41" dur="6.36">a high-level language in this case</text><text start="4171.6" dur="6.659">Pascal the system performance was</text><text start="4175.77" dur="6.239">actually quite awful Brinch Hansen stated</text><text start="4178.259" dur="6.901">that the operating system itself was so</text><text start="4182.009" dur="5.79">slow at performing its IPC tasks 
that</text><text start="4185.16" dur="6.69">there were a number of issues with the</text><text start="4187.799" dur="7.19">system completing tasks on time however</text><text start="4191.85" dur="5.7">the system was stable and reliable</text><text start="4194.989" dur="5.411">making it successful in computer science</text><text start="4197.55" dur="7.71">history even if it was not a successful</text><text start="4200.4" dur="7.35">product commercially on the other hand</text><text start="4205.26" dur="4.59">the opposite extreme would be the UNIX</text><text start="4207.75" dur="4.41">kernel this was developed at Bell Labs</text><text start="4209.85" dur="4.65">by a team headed by Ken Thompson and</text><text start="4212.16" dur="5.64">Dennis Ritchie also starting in the late</text><text start="4214.5" dur="6.69">1960s the difference between the UNIX</text><text start="4217.8" dur="5.49">kernel and the RC 4000 monitor was that</text><text start="4221.19" dur="5.22">the UNIX kernel&amp;#39;s design emphasized</text><text start="4223.29" dur="6.36">performance thus instead of having a</text><text start="4226.41" dur="7.19">very small kernel that simply provided</text><text start="4229.65" dur="7.35">an IPC mechanism and some basic resource</text><text start="4233.6" dur="6.01">collision-avoidance this kernel actually</text><text start="4237" dur="4.91">provided all the device drivers all the</text><text start="4239.61" dur="4.819">scheduling all the memory management</text><text start="4241.91" dur="6.64">including support for multiprogramming</text><text start="4244.429" dur="5.951">directly inside the kernel this kernel</text><text start="4248.55" dur="4.1">was an early example of what would later</text><text start="4250.38" dur="4.38">be called a monolithic kernel a</text><text start="4252.65" dur="3.58">monolithic kernel is a kernel that</text><text start="4254.76" dur="4.53">contains the entire</text><text start="4256.23" dur="4.89">operating system in kernel space runs 
all</text><text start="4259.29" dur="4.62">of the operating system code in</text><text start="4261.12" dur="6.78">privileged mode or ring zero on an x86</text><text start="4263.91" dur="6.15">system and divides the different</text><text start="4267.9" dur="4.05">functions of the operating system into</text><text start="4270.06" dur="4.92">subsystems of the kernel</text><text start="4271.95" dur="6.539">all of these subsystems however are run</text><text start="4274.98" dur="7.17">in the same memory space this has the</text><text start="4278.489" dur="6.541">advantage of higher performance but the</text><text start="4282.15" dur="4.8">disadvantage is that the kernel becomes</text><text start="4285.03" dur="4.59">less modular more difficult to maintain</text><text start="4286.95" dur="5.37">and the components are not separated</text><text start="4289.62" dur="4.59">very well so a crash in one component</text><text start="4292.32" dur="6.27">could in fact bring down the entire</text><text start="4294.21" dur="6.54">system the opposite of this the RC 4000</text><text start="4298.59" dur="5.04">style kernel is what we now call a</text><text start="4300.75" dur="5.43">microkernel and a microkernel basically</text><text start="4303.63" dur="5.55">contains the bare minimum of code that is</text><text start="4306.18" dur="5.7">necessary in order to implement basic</text><text start="4309.18" dur="6.809">addressing inter-process communication</text><text start="4311.88" dur="6.93">and scheduling this basic amount of code</text><text start="4315.989" dur="6.421">runs in kernel space and everything else</text><text start="4318.81" dur="6.75">runs in user space often with lower</text><text start="4322.41" dur="6.15">privileges as a general rule of thumb</text><text start="4325.56" dur="6.12">microkernels contain fewer than 10,000</text><text start="4328.56" dur="5.19">lines of code microkernel-based</text><text start="4331.68" dur="4.59">operating systems tend to be 
quite</text><text start="4333.75" dur="4.92">modular because they divide the</text><text start="4336.27" dur="5.64">operating system functions between the</text><text start="4338.67" dur="6.84">kernel and a set of servers that run in</text><text start="4341.91" dur="5.67">user space however because many of the</text><text start="4345.51" dur="4.11">core functions of the operating system</text><text start="4347.58" dur="3.99">are performed by user space components</text><text start="4349.62" dur="4.89">which have to communicate with each</text><text start="4351.57" dur="6.75">other via the kernel performance does</text><text start="4354.51" dur="7.28">suffer thus most kernels that are in use</text><text start="4358.32" dur="6.36">today are a hybrid of these two designs</text><text start="4361.79" dur="5.38">I&amp;#39;m going to introduce Murphy&amp;#39;s Law of</text><text start="4364.68" dur="4.05">reality sort of an extension of the</text><text start="4367.17" dur="5.04">Murphy&amp;#39;s laws with which you may be</text><text start="4368.73" dur="6.21">familiar and my definition of Murphy&amp;#39;s</text><text start="4372.21" dur="4.8">Law of reality is simply that reality is</text><text start="4374.94" dur="4.41">the hazy space between the extremes of</text><text start="4377.01" dur="4.53">competing academic theories in which</text><text start="4379.35" dur="5.85">everything is wrong in some way at least</text><text start="4381.54" dur="5.61">according to the theories this idea of a</text><text start="4385.2" dur="4.41">hybrid kernel architecture is a</text><text start="4387.15" dur="2.94">controversial one some people do not</text><text start="4389.61" dur="3.75">like</text><text start="4390.09" dur="5.7">to use this terminology at all many people</text><text start="4393.36" dur="5.12">prefer to keep the binary classification</text><text start="4395.79" dur="5.75">of monolithic kernel and microkernel</text><text start="4398.48" dur="5.5">however if we look at 
modern kernels</text><text start="4401.54" dur="4.81">typically the monolithic versions of</text><text start="4403.98" dur="4.38">modern kernels are broken into modules</text><text start="4406.35" dur="4.14">that can be loaded and unloaded at</text><text start="4408.36" dur="5.43">runtime this helps to increase</text><text start="4410.49" dur="5.16">maintainability of the kernel and true</text><text start="4413.79" dur="4.02">microkernels today would have</text><text start="4415.65" dur="5.01">unacceptable performance thus</text><text start="4417.81" dur="4.68">microkernel-based systems typically have</text><text start="4420.66" dur="4.26">some of the features of monolithic</text><text start="4422.49" dur="4.89">kernels such as more device drivers and</text><text start="4424.92" dur="6.48">other code that runs inside the kernel&amp;#39;s</text><text start="4427.38" dur="6.81">memory space some examples of different</text><text start="4431.4" dur="5.31">types of kernels for monolithic kernels</text><text start="4434.19" dur="4.26">in addition to the System V UNIX kernel</text><text start="4436.71" dur="4.05">which is a descendant of the original</text><text start="4438.45" dur="6.47">UNIX kernel we have the Linux kernel</text><text start="4440.76" dur="8.85">BSD MS-DOS and Windows 9x kernels</text><text start="4444.92" dur="6.7">Windows NT XP Vista and 7 if you don&amp;#39;t</text><text start="4449.61" dur="4.62">prefer to use the hybrid terminology</text><text start="4451.62" dur="5.94">would also qualify as monolithic kernels</text><text start="4454.23" dur="6.63">and the Mac OS X kernel falls into the</text><text start="4457.56" dur="5.79">same category in terms of microkernels</text><text start="4460.86" dur="4.5">the RC 4000 monitor kernel would have</text><text start="4463.35" dur="3.72">been the earliest however there have</text><text start="4465.36" dur="6.44">been plenty of other examples including</text><text start="4467.07" dur="7.68">Mach L4 the MIT 
exokernel project and</text><text start="4471.8" dur="4.99">the idea at least behind the Windows NT</text><text start="4474.75" dur="4.86">kernel which was based upon a</text><text start="4476.79" dur="5.13">microkernel design the same is true of</text><text start="4479.61" dur="4.5">the Mac OS X kernel since that was</text><text start="4481.92" dur="4.14">originally based on the Mach microkernel</text><text start="4484.11" dur="4.32">however those have been heavily modified</text><text start="4486.06" dur="6.54">and now have many properties of</text><text start="4488.43" dur="7.14">monolithic kernels also so in summary</text><text start="4492.6" dur="5.01">the kernel is the minimum layer of</text><text start="4495.57" dur="3.96">software inside the operating system</text><text start="4497.61" dur="3.75">that provides the basic foundations for</text><text start="4499.53" dur="4.32">abstracting away details of the hardware</text><text start="4501.36" dur="5.97">and arbitrating between multiple</text><text start="4503.85" dur="6.57">applications when the absolute bare</text><text start="4507.33" dur="7.2">minimum implementations are used we call</text><text start="4510.42" dur="5.7">the result a microkernel monolithic</text><text start="4514.53" dur="3.6">kernels on the other hand have all their</text><text start="4516.12" dur="4.8">major OS components contained within</text><text start="4518.13" dur="5.089">them running everything inside kernel</text><text start="4520.92" dur="5.21">space to improve performance</text><text start="4523.219" dur="5.341">two early influential kernels were the</text><text start="4526.13" dur="4.92">RC 4000 monitor an example of a</text><text start="4528.56" dur="4.29">microkernel and the original UNIX kernel</text><text start="4531.05" dur="5.939">which was an example of a monolithic</text><text start="4532.85" dur="6.24">kernel in practice however most modern</text><text start="4536.989" dur="4.261">operating system kernels are 
hybrids of</text><text start="4539.09" dur="5.3">the two designs and have features of</text><text start="4541.25" dur="3.14">both kernel types</text><text start="4547.289" dur="6.421">system via the command line in part 1 of</text><text start="4551.25" dur="5.639">this lecture I&amp;#39;m going to discuss</text><text start="4553.71" dur="5.909">command line operation introduce paths</text><text start="4556.889" dur="5.58">in the file system discuss the file</text><text start="4559.619" dur="5.25">system hierarchy talk about the contents</text><text start="4562.469" dur="5.52">of the root directory give an overview</text><text start="4564.869" dur="5.071">of several top level directories briefly</text><text start="4567.989" dur="6.57">discuss configuration files and</text><text start="4569.94" dur="7.29">introduce the concept of man pages now</text><text start="4574.559" dur="5.191">in Linux the command line is the older</text><text start="4577.23" dur="5.25">type of user interface to the</text><text start="4579.75" dur="5.07">system it is a text mode interface that</text><text start="4582.48" dur="5.699">predates the development of graphical</text><text start="4584.82" dur="5.52">user interfaces or GUIs the command</text><text start="4588.179" dur="4.44">line uses the keyboard exclusively in</text><text start="4590.34" dur="4.98">general there is no use of the mouse and</text><text start="4592.619" dur="4.92">it works by having you type in a command</text><text start="4595.32" dur="4.5">followed by any arguments to that</text><text start="4597.539" dur="4.471">command and then pressing enter the</text><text start="4599.82" dur="4.29">command runs and if there are any</text><text start="4602.01" dur="4.259">results to be displayed the result of</text><text start="4604.11" dur="5.58">the command is displayed below where you</text><text start="4606.269" dur="5.281">entered the command at all times the</text><text start="4609.69" dur="4.02">command shell or 
the program that</text><text start="4611.55" dur="4.439">processes the commands that you&amp;#39;re</text><text start="4613.71" dur="4.71">entering is in something called a</text><text start="4615.989" dur="4.471">working directory that working directory</text><text start="4618.42" dur="4.319">is a location on the file system in</text><text start="4620.46" dur="5.159">which any file that you create by</text><text start="4622.739" dur="5.041">default will be placed and you can open</text><text start="4625.619" dur="4.17">files by default without having to give</text><text start="4627.78" dur="4.379">any kind of path information for them</text><text start="4629.789" dur="5.37">you can figure out what the current</text><text start="4632.159" dur="6.241">working directory is by using the pwd or</text><text start="4635.159" dur="6.17">print working directory command you can</text><text start="4638.4" dur="6">change directories using the command cd</text><text start="4641.329" dur="5.37">running the command cd with no arguments</text><text start="4644.4" dur="4.77">will take you to your home directory and</text><text start="4646.699" dur="5.621">you can list the contents of any</text><text start="4649.17" dur="4.949">directory with the ls command it&amp;#39;s</text><text start="4652.32" dur="4.739">typical that someone who has been using</text><text start="4654.119" dur="6.12">Linux for some time will compulsively</text><text start="4657.059" dur="5.37">issue the ls command every time the cd</text><text start="4660.239" dur="4.201">command is used so that you can maintain</text><text start="4662.429" dur="6.06">some kind of awareness about how the</text><text start="4664.44" dur="6.06">filesystem is structured paths in the</text><text start="4668.489" dur="4.561">filesystem can be given in one of two</text><text start="4670.5" dur="5.099">ways they can be given as an absolute</text><text start="4673.05" dur="6.21">path meaning that they have a leading</text><text 
start="4675.599" dur="4.071">slash such as</text><text start="4679.26" dur="4.94">/etc/inittab</text><text start="4679.67" dur="7.14">or /opt/condor/bin relative</text><text start="4684.2" dur="5.04">paths on the other hand are relative to</text><text start="4686.81" dur="7.26">whatever current working directory the</text><text start="4689.24" dur="8.34">shell is in thus a relative path</text><text start="4694.07" dur="6.66">myfile.c is going to refer to a file named</text><text start="4697.58" dur="6.63">myfile.c located in the current</text><text start="4700.73" dur="7.05">working directory the path</text><text start="4704.21" dur="6.48">tests/final/p_allops is</text><text start="4707.78" dur="6.84">going to refer to a file that is in the</text><text start="4710.69" dur="6.18">final subdirectory of a tests directory</text><text start="4714.62" dur="5.61">that can be found in the current working</text><text start="4716.87" dur="5.85">directory there are also some special</text><text start="4720.23" dur="5.93">relative path name components that can</text><text start="4722.72" dur="5.87">be used a single period a single dot</text><text start="4726.16" dur="6.46">references the current working directory</text><text start="4728.59" dur="5.58">two periods or two dots reference the</text><text start="4732.62" dur="6.21">parent directory</text><text start="4734.17" dur="8.74">thus the path ../foo/./bar</text><text start="4738.83" dur="6.36">means go up one directory</text><text start="4742.91" dur="4.92">from the current working directory then</text><text start="4745.19" dur="7.98">go down into its sub directory foo and</text><text start="4747.83" dur="8.13">access a file called bar notice that the</text><text start="4753.17" dur="6">single dot alone as part of a directory</text><text start="4755.96" dur="9.27">path generally has no effect because it</text><text start="4759.17" 
dur="8.28">refers to the current directory now all</text><text start="4765.23" dur="3.99">directories on a UNIX system are</text><text start="4767.45" dur="4.59">organized according to the filesystem</text><text start="4769.22" dur="5.15">hierarchy and there is a specification</text><text start="4772.04" dur="6.48">available for the filesystem hierarchy</text><text start="4774.37" dur="6.73">which different distributions follow to</text><text start="4778.52" dur="4.64">different levels and the standards get</text><text start="4781.1" dur="6.18">interpreted a bit differently between systems and</text><text start="4783.16" dur="6.1">distributions in any case however the</text><text start="4787.28" dur="4.47">root directory of a filesystem is</text><text start="4789.26" dur="5.19">located at a path consisting of a single</text><text start="4791.75" dur="5.52">forward slash and you can change to the</text><text start="4794.45" dur="5.49">root directory by running cd and then</text><text start="4797.27" dur="6.48">as the argument to cd just a single</text><text start="4799.94" dur="7.26">forward slash and in this screenshot we</text><text start="4803.75" dur="7.26">can see we&amp;#39;re on a system I&amp;#39;ve run cd</text><text start="4807.2" dur="5.97">slash and then run ls to look at the</text><text start="4811.01" dur="4.63">contents of the root directory</text><text start="4813.17" dur="4.83">there are a number of top-level</text><text start="4815.64" dur="5.28">directories that are fairly standard</text><text start="4818" dur="3.94">inside the root directory on a normal</text><text start="4820.92" dur="4.47">Linux system</text><text start="4821.94" dur="5.79">these include /bin which is where</text><text start="4825.39" dur="5.43">the minimum set of programs needed to</text><text start="4827.73" dur="6.57">work with the system is located /dev</text><text start="4830.82" dur="7.83">which contains entries corresponding to</text><text start="4834.3" dur="6.06">hardware 
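The path rules just described (a leading slash marks an absolute path, a single dot is the current directory, two dots are the parent) can be checked mechanically. This is an illustrative Python sketch added to the transcript, not the lecturer's material; the working directory "/home/student" is a hypothetical example.

```python
import posixpath

# An absolute path has a leading slash; a relative path does not
assert posixpath.isabs("/etc/inittab")
assert not posixpath.isabs("myfile.c")

# A lone "." component has no effect and ".." climbs to the parent,
# so ../foo/./bar and ../foo/bar name the same location
assert posixpath.normpath("../foo/./bar") == "../foo/bar"

# Resolving a relative path against a working directory
cwd = "/home/student"  # hypothetical working directory
print(posixpath.normpath(posixpath.join(cwd, "tests/final")))
# prints /home/student/tests/final
```

Using posixpath rather than os.path keeps the demonstration on UNIX path semantics even when run elsewhere.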
devices on the system /etc which</text><text start="4838.65" dur="4.62">contains the system level configuration</text><text start="4840.36" dur="4.53">files /opt</text><text start="4843.27" dur="3.3">which contains optional software</text><text start="4844.89" dur="5.07">typically added by the system</text><text start="4846.57" dur="7.2">administrator /usr often pronounced</text><text start="4849.96" dur="7.17">slash user which contains software that</text><text start="4853.77" dur="5.85">ships with the Linux distribution /var</text><text start="4857.13" dur="6.24">which contains rapidly changing</text><text start="4859.62" dur="6.24">files such as log files /lib which</text><text start="4863.37" dur="6.33">contains system libraries used by other</text><text start="4865.86" dur="6.21">programs /media into which</text><text start="4869.7" dur="6.539">automatic mounting systems will mount</text><text start="4872.07" dur="6.9">removable drives and devices /mnt</text><text start="4876.239" dur="4.621">often pronounced slash mount which is where</text><text start="4878.97" dur="4.47">the system administrator can manually</text><text start="4880.86" dur="6.36">mount removable devices or network</text><text start="4883.44" dur="6.299">shares /root not to be confused with</text><text start="4887.22" dur="4.74">the root directory is actually the home</text><text start="4889.739" dur="6.481">directory for the super user or root</text><text start="4891.96" dur="6.69">user /sbin contains super user</text><text start="4896.22" dur="3.99">binaries or programs that are intended</text><text start="4898.65" dur="4.589">to be run only by the system</text><text start="4900.21" dur="6.51">administrator and /tmp is where</text><text start="4903.239" dur="5.731">temporary files are typically stored now</text><text start="4906.72" dur="4.59">as mentioned a moment ago system-wide</text><text start="4908.97" dur="5.88">configuration files generally go in</text><text 
start="4911.31" dur="6.66">slash Etsy two important examples of</text><text start="4914.85" dur="5.22">files in slash Etsy include slash Etsy</text><text start="4917.97" dur="4.86">slash init tab which sets the default</text><text start="4920.07" dur="4.68">run level or whether the system is going</text><text start="4922.83" dur="5.52">to boot into text mode or graphical mode</text><text start="4924.75" dur="6.57">and slash Etsy slash fstab which</text><text start="4928.35" dur="6.09">contains the filesystem table indicating</text><text start="4931.32" dur="5.13">to the kernel where to find different</text><text start="4934.44" dur="5.67">file systems that need to be mounted at</text><text start="4936.45" dur="6.15">boot time configuration files in general</text><text start="4940.11" dur="4.95">on Linux can be edited with a text</text><text start="4942.6" dur="3.96">editor on a pure command-line system</text><text start="4945.06" dur="4.29">this might be the VI to</text><text start="4946.56" dur="5.1">steady der VI is the one text editor</text><text start="4949.35" dur="6.75">that is generally expected to be present</text><text start="4951.66" dur="7.11">on any UNIX system if you need help on a</text><text start="4956.1" dur="4.98">command or some configuration file</text><text start="4958.77" dur="5.04">within the system there is a built-in</text><text start="4961.08" dur="6.6">help facility called man pages or manual</text><text start="4963.81" dur="5.97">pages man pages allow you to get help on</text><text start="4967.68" dur="4.92">particular commands and other features</text><text start="4969.78" dur="5.22">of the system simply by typing man a</text><text start="4972.6" dur="4.34">space and then the command or feature</text><text start="4975" dur="4.68">and on which you would like to get help</text><text start="4976.94" dur="5.02">to navigate through a man page you can</text><text start="4979.68" dur="4.71">use the up and down arrow keys and then</text><text 
start="4981.96" dur="5.31">use the Q key to exit the manual</text><text start="4984.39" dur="5.81">facility in the next part of the lecture</text><text start="4987.27" dur="6.09">I&amp;#39;ll discuss file permissions and other</text><text start="4990.2" dur="5.49">basic issues related to running a Linux</text><text start="4993.36" dur="2.33">system</text><text start="4997.409" dur="5.451">sure I will continue discussing the</text><text start="4999.87" dur="5.19">basics of utilizing a Linux system in</text><text start="5002.86" dur="5.62">particular we&amp;#39;ll talk about file</text><text start="5005.06" dur="6.389">ownership file permissions listing</text><text start="5008.48" dur="5.909">running processes finding libraries used</text><text start="5011.449" dur="5.641">by a program determining the absolute</text><text start="5014.389" dur="6.391">path of a program and discuss the role</text><text start="5017.09" dur="5.28">of the super user on the system first</text><text start="5020.78" dur="4.919">we&amp;#39;ll start with file ownership and</text><text start="5022.37" dur="6.98">permissions each file on a Linux system</text><text start="5025.699" dur="7.411">is going to have an owner a group and</text><text start="5029.35" dur="7.21">access permissions this information can</text><text start="5033.11" dur="8.52">all be found by running the LS command</text><text start="5036.56" dur="7.65">with a dash lowercase L argument it is</text><text start="5041.63" dur="4.98">possible to change which user on the</text><text start="5044.21" dur="5.4">system is the owner of the file using</text><text start="5046.61" dur="7.17">the CH own command or change owner CH</text><text start="5049.61" dur="8.25">owm the change group chgrp command</text><text start="5053.78" dur="7.26">allows you to change which group a file</text><text start="5057.86" dur="5.22">will be associated with and permissions</text><text start="5061.04" dur="7.28">for a file can be changed with a 
chmod</text><text start="5063.08" dur="9.26">or change mode command</text><text start="5068.32" dur="8.879">each file has three types of permissions</text><text start="5072.34" dur="4.859">for each of the three levels of access</text><text start="5077.71" dur="6.94">access controls are set with a bitwise</text><text start="5080.389" dur="6.871">or of permission bits a value of 1 is</text><text start="5084.65" dur="4.11">used for execute permissions meaning</text><text start="5087.26" dur="5.55">that the file can be treated like a</text><text start="5088.76" dur="6.959">program and executed directly a value of</text><text start="5092.81" dur="4.679">2 is used for write permissions meaning</text><text start="5095.719" dur="6.511">that someone could change or delete a</text><text start="5097.489" dur="6.991">file and a value of 4 is read permission</text><text start="5102.23" dur="6.239">being able to read the contents of a</text><text start="5104.48" dur="7.469">file individual file permissions are</text><text start="5108.469" dur="7.201">supported for 3 classes of user the</text><text start="5111.949" dur="6.741">owner of the file members of the group</text><text start="5115.67" dur="6.779">with which the file is associated and</text><text start="5118.69" dur="8.199">so-called world permissions or all users</text><text start="5122.449" dur="8.441">on the system changing file permissions</text><text start="5126.889" dur="7.661">can be done with the chmod command and</text><text start="5130.89" dur="6.39">for example a file can be set to be world</text><text start="5134.55" dur="5.37">readable and executable but writable</text><text start="5137.28" dur="8.19">only by its owner using the command</text><text start="5139.92" dur="9.15">chmod space 755 space the name of the</text><text start="5145.47" dur="5.91">file a file could be set to be write</text><text start="5149.07" dur="4.74">protected and readable only by the owner</text><text start="5151.38" dur="6.08">and group by running the same chmod</text><text start="5153.81" dur="8.88">command only with permissions of 440 in</text><text start="5157.46" dur="9.25">the first example 755 means permission 7</text><text start="5162.69" dur="9.39">which is 4 plus 2 plus 1 meaning read +</text><text start="5166.71" dur="9.09">write + execute for the owner that&amp;#39;s the</text><text start="5172.08" dur="5.19">first number of the 3 the first 5 the</text><text start="5175.8" dur="3.99">second number of the three sets</text><text start="5177.27" dur="7.17">permissions for the group which is 4</text><text start="5179.79" dur="8.34">read + 1 execute makes 5 and the same</text><text start="5184.44" dur="7.17">4 read plus 1 execute for the world</text><text start="5188.13" dur="7.88">which is the third number in the 440</text><text start="5191.61" dur="7.65">case the 4 means read only for the owner</text><text start="5196.01" dur="6.22">the second four means read only for the</text><text start="5199.26" dur="5.25">group and zero means no access</text><text start="5202.23" dur="4.26">whatsoever by people who are not members</text><text start="5204.51" dur="6.12">of the group with which the file is</text><text start="5206.49" dur="6.87">associated permissions on a directory</text><text start="5210.63" dur="5.31">work slightly differently if a</text><text start="5213.36" dur="4.68">directory has execute permissions the</text><text start="5215.94" dur="5.52">contents of the directory can be</text><text start="5218.04" dur="5.4">traversed allowing access to readable</text><text start="5221.46" dur="5.49">files and subdirectories within the</text><text start="5223.44" dur="6.33">directory itself read permissions on a</text><text start="5226.95" dur="5.7">directory are necessary in order for a</text><text start="5229.77" dur="4.95">directory to be listed but for someone</text><text start="5232.65" dur="4.86">to obtain a directory listing with ls</text><text start="5234.72" dur="6.65">they must have both read permissions and</text><text start="5237.51" dur="6.72">execute permissions for that directory</text><text start="5241.37" dur="5.32">files can be deleted on a Linux system</text><text start="5244.23" dur="6.15">with the rm command rm is short for</text><text start="5246.69" dur="8.28">remove entire directories can be deleted</text><text start="5250.38" dur="8.22">using rm -rf and then supplying the</text><text start="5254.97" dur="6.27">directory name to delete this can be an</text><text start="5258.6" dur="5.61">extremely dangerous command if run</text><text start="5261.24" dur="7.499">incorrectly for example running</text><text start="5264.21" dur="6.72">rm -rf / as the root user will</text><text start="5268.739" dur="4.561">delete every single file on the system</text><text start="5270.93" dur="4.85">and it will not ask first this will</text><text start="5273.3" dur="5.939">leave the system in a destroyed state</text><text start="5275.78" dur="6.4">there is also no undelete function on</text><text start="5279.239" dur="5.431">most UNIX file systems once the rm</text><text start="5282.18" dur="4.71">command has been used the file is simply</text><text start="5284.67" dur="6.83">gone and unless that file has been</text><text start="5286.89" dur="4.61">backed up it is not easily recoverable</text><text start="5291.77" dur="6.25">when working with a Linux system it&amp;#39;s</text><text start="5295.44" dur="5.66">often useful to see what programs or</text><text start="5298.02" dur="5.34">what processes are running on the system</text><text start="5301.1" dur="4.24">there are two commands that can be</text><text start="5303.36" dur="5.34">utilized in order to find this</text><text start="5305.34" dur="7.05">information the first of these is the ps</text><text start="5308.7" dur="6.63">command which stands for processes ps</text><text start="5312.39" dur="4.89">ax will allow you to see all</text><text start="5315.33" dur="5.34">processes currently running on the</text><text start="5317.28" dur="9.33">system to find out which user owns each</text><text start="5320.67" dur="8.61">process run ps aux the top command</text><text start="5326.61" dur="4.05">is a useful command for getting current</text><text start="5329.28" dur="3.77">information about the state of the</text><text start="5330.66" dur="5.55">system including running processes</text><text start="5333.05" dur="6.4">amount of available memory and amount of</text><text start="5336.21" dur="5.61">CPU that&amp;#39;s being utilized this command</text><text start="5339.45" dur="6.24">displays the processes that are using</text><text start="5341.82" dur="6.63">the most CPU time and updates itself</text><text start="5345.69" dur="7.14">every second or two this command can be</text><text start="5348.45" dur="7.05">exited by hitting the Q key to see what</text><text start="5352.83" dur="5.97">dynamic libraries are used by a binary</text><text start="5355.5" dur="6.63">program not a script but an actual</text><text start="5358.8" dur="6.54">compiled binary application one can use</text><text start="5362.13" dur="6.12">the ldd command so if one were to run</text><text start="5365.34" dur="6.06">the ldd command on for example</text><text start="5368.25" dur="5.04">/usr/bin/evince for systems</text><text start="5371.4" dur="4.589">that have the GNOME PDF reader</text><text start="5373.29" dur="4.679">installed one would find out which</text><text start="5375.989" dur="5.431">dynamic libraries were needed in order</text><text start="5377.969" dur="5.671">for that application to run it&amp;#39;s also</text><text start="5381.42" dur="4.799">possible to find out the absolute path</text><text start="5383.64" dur="4.11">name of an installed program to see</text><text start="5386.219" dur="4.651">where it actually resides on the</text><text start="5387.75" dur="6.48">filesystem this is done with the 
which</text><text start="5390.87" dur="5.91">command for example typing which evince</text><text start="5394.23" dur="3.3">should display</text><text start="5396.78" dur="3.81">/usr/bin/evince</text><text start="5397.53" dur="4.98">on most Linux distributions</text><text start="5400.59" dur="5.87">provided of course that software is</text><text start="5402.51" dur="6.69">installed finally there is one</text><text start="5406.46" dur="6.1">administrative user on the Linux system</text><text start="5409.2" dur="7.59">called the super user it has the account</text><text start="5412.56" dur="7.89">name root and has privileges to read</text><text start="5416.79" dur="6.41">write or delete any file change system</text><text start="5420.45" dur="5.61">settings and install software and</text><text start="5423.2" dur="5.26">perform other administrative tasks that</text><text start="5426.06" dur="6">are normally forbidden to ordinary users</text><text start="5428.46" dur="8.16">on some Linux distributions such as</text><text start="5432.06" dur="8.16">Ubuntu and on Mac OS X the root user is</text><text start="5436.62" dur="5.46">disabled by default however many other</text><text start="5440.22" dur="4.68">distributions including Red Hat</text><text start="5442.08" dur="6.83">Enterprise Linux CentOS and Arch Linux</text><text start="5444.9" dur="6.69">leave the root user account enabled on</text><text start="5448.91" dur="5.41">systems where the root user account has</text><text start="5451.59" dur="4.53">been disabled or on systems where the</text><text start="5454.32" dur="4.2">administrator would like to disable the</text><text start="5456.12" dur="4.8">root account the sudo command allows</text><text start="5458.52" dur="5.58">authorized regular users to run a</text><text start="5460.92" dur="5.37">command as the super user so for example</text><text start="5464.1" dur="6.35">we could find the listing of root&amp;#39;s home</text><text start="5466.29" dur="6.96">directory by typing sudo ls /root</text><text start="5470.45" dur="7">to switch to the super user temporarily</text><text start="5473.25" dur="6.96">on one of these systems typing sudo su - and</text><text start="5477.45" dur="6.06">entering the user&amp;#39;s password would allow</text><text start="5480.21" dur="4.8">the user temporarily to become the root</text><text start="5483.51" dur="3.93">user even if the root account is</text><text start="5485.01" dur="6.21">disabled on systems where the root</text><text start="5487.44" dur="6.24">account is enabled su - has the same</text><text start="5491.22" dur="4.38">effect except that root&amp;#39;s password must</text><text start="5493.68" dur="6.09">be entered instead of the user&amp;#39;s</text><text start="5495.6" dur="8.94">password now I&amp;#39;ll discuss interrupts and device</text><text start="5499.77" dur="9.09">input output when hardware devices on a</text><text start="5504.54" dur="6">computer produce events we need some way</text><text start="5508.86" dur="3.6">of being able to handle those events</text><text start="5510.54" dur="5.04">within the operating system and deliver</text><text start="5512.46" dur="5.19">them to applications and hardware</text><text start="5515.58" dur="4.41">devices are going to produce events at</text><text start="5517.65" dur="4.92">times and in patterns that we don&amp;#39;t</text><text start="5519.99" dur="4.74">know about in advance for example we</text><text start="5522.57" dur="4.35">don&amp;#39;t know which keys the user is going</text><text start="5524.73" dur="5.13">to press on the keyboard until the user</text><text start="5526.92" dur="4.35">actually presses those keys if a cat</text><text start="5529.86" dur="3.45">walks across the keyboard</text><text start="5531.27" dur="3.18">we&amp;#39;re gonna see a completely different</text><text start="5533.31" dur="4.02">pattern of keypresses</text><text start="5534.45" dur="6.93">from what we would expect to see with</text><text start="5537.33" dur="5.88">a human user similarly if we have</text><text start="5541.38" dur="3.48">incoming network packets if we&amp;#39;re</text><text start="5543.21" dur="4.32">running a server application or even</text><text start="5544.86" dur="4.89">just a workstation and we have messages</text><text start="5547.53" dur="4.82">coming in from the network we don&amp;#39;t know</text><text start="5549.75" dur="5.13">the order and timing of those messages</text><text start="5552.35" dur="5.11">we also don&amp;#39;t know when the mouse is</text><text start="5554.88" dur="3.99">going to be moved or when any other of a</text><text start="5557.46" dur="4.65">whole bunch of hardware events is going</text><text start="5558.87" dur="5.31">to occur so how can we get the</text><text start="5562.11" dur="4.32">information generated by these events</text><text start="5564.18" dur="6.39">and make it available to our</text><text start="5566.43" dur="7.23">applications for use well we have two</text><text start="5570.57" dur="5.46">options the first option is that we can poll</text><text start="5573.66" dur="4.59">each device we can ask each device if it</text><text start="5576.03" dur="5.91">has any new information and retrieve</text><text start="5578.25" dur="5.67">that information or we can let the</text><text start="5581.94" dur="4.59">devices send a signal whenever they have</text><text start="5583.92" dur="5.52">information and have the operating</text><text start="5586.53" dur="5.19">system stop whatever it&amp;#39;s doing and pick</text><text start="5589.44" dur="6.42">up that information this is called an</text><text start="5591.72" dur="6.6">interrupt the polling model of input</text><text start="5595.86" dur="4.71">involves the OS periodically polling</text><text start="5598.32" dur="4.29">each device for information so every so</text><text start="5600.57" dur="3.84">often the CPU is going to send a message</text><text start="5602.61" dur="4.4">to each hardware device in the system</text><text start="5604.41" dur="4.95">and say hey do you have any data for me and</text><text start="5607.01" dur="4.42">most of the time the device is going to</text><text start="5609.36" dur="4.92">send back no don&amp;#39;t really have any data</text><text start="5611.43" dur="6">for you other times the device is going</text><text start="5614.28" dur="6.24">to send back some data it&amp;#39;s a really</text><text start="5617.43" dur="5.34">simple design extremely simple to</text><text start="5620.52" dur="5.25">implement but there are a number of</text><text start="5622.77" dur="5.07">problems with polling the first problem</text><text start="5625.77" dur="4.05">is that most of the time when you&amp;#39;re</text><text start="5627.84" dur="5.1">polling the devices are not going to</text><text start="5629.82" dur="4.8">have any input data to deliver thus polling</text><text start="5632.94" dur="4.86">is going to waste a whole lot of CPU</text><text start="5634.62" dur="5.58">time the second issue that occurs is</text><text start="5637.8" dur="4.77">high latency if I press a key on the</text><text start="5640.2" dur="4.41">keyboard that keystroke is not going to</text><text start="5642.57" dur="4.83">get transmitted to the computer until</text><text start="5644.61" dur="4.79">the next time the CPU polls the keyboard</text><text start="5647.4" dur="6.21">to ask which keys have been pressed if</text><text start="5649.4" dur="7.54">that time is set to be really short I&amp;#39;ll</text><text start="5653.61" dur="5.81">have good responsiveness but the CPU is</text><text start="5656.94" dur="5.4">not going to get any useful work done on</text><text start="5659.42" dur="4.69">the other hand if we set that time</text><text start="5662.34" dur="2.589">length to be long enough for the CPU to</text><text start="5664.11" dur="2.89">get some work done</text><text start="5664.929" dur="4.531">there&amp;#39;s going to be a noticeable lag</text><text start="5667" dur="4.969">between 
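As a quick aside, the trade-off between polling overhead and latency can be illustrated with a toy Python simulation; the function name, interval, and event times are arbitrary assumptions of mine, not values from the lecture:

```python
def polling_cost(poll_interval_ms, sim_ms=1000, events=(137, 402, 871)):
    """Simulate polling for sim_ms milliseconds: count how many polls
    are made, and the worst-case delay between an event arriving and
    the next poll noticing it."""
    polls = sim_ms // poll_interval_ms
    worst_wait = 0
    for t in events:
        # the event is only seen at the next poll after time t
        next_poll = ((t // poll_interval_ms) + 1) * poll_interval_ms
        worst_wait = max(worst_wait, next_poll - t)
    return polls, worst_wait

print(polling_cost(1))    # → (1000, 1): many polls, low latency
print(polling_cost(100))  # → (10, 98): few polls, high latency
```

Shrinking the interval buys responsiveness at the cost of vastly more polls, which is exactly the dilemma described here.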
the time I press a key and the</text><text start="5669.46" dur="6.33">time a character appears on the screen</text><text start="5671.969" dur="5.591">since the device must wait for a polling</text><text start="5675.79" dur="3.78">interval before it can transmit input</text><text start="5677.56" dur="6">we&amp;#39;re going to have a high latency</text><text start="5679.57" dur="5.73">situation and again shortening that</text><text start="5683.56" dur="4.74">polling interval to try to reduce the</text><text start="5685.3" dur="6.35">latency simply wastes a whole lot of CPU</text><text start="5688.3" dur="6.3">time checking devices that have no input</text><text start="5691.65" dur="4.93">so a better mechanism is to use a system</text><text start="5694.6" dur="4.829">called interrupts and with interrupts</text><text start="5696.58" dur="4.74">the hardware devices actually signal the</text><text start="5699.429" dur="4.711">operating system whenever events occur</text><text start="5701.32" dur="4.56">or more precisely they signal the CPU and</text><text start="5704.14" dur="5.25">then it&amp;#39;s up to the operating system to</text><text start="5705.88" dur="4.98">receive and handle that signal what the</text><text start="5709.39" dur="3.539">operating system will do is it will</text><text start="5710.86" dur="3.87">preempt any running process in other</text><text start="5712.929" dur="4.5">words it will switch what we call</text><text start="5714.73" dur="5.16">context away from that running process</text><text start="5717.429" dur="5.671">to handle the event basically it will</text><text start="5719.89" dur="5.519">move the program counter of the CPU to</text><text start="5723.1" dur="6.27">the code to handle that particular</text><text start="5725.409" dur="5.671">interrupt this allows for a more</text><text start="5729.37" dur="4.2">responsive system than we could ever</text><text start="5731.08" dur="4.74">achieve through polling without having</text><text start="5733.57" dur="5.91">to waste a whole bunch of time asking</text><text start="5735.82" dur="5.58">idle devices for data however this does</text><text start="5739.48" dur="4.92">require a more complex implementation</text><text start="5741.4" dur="6.14">and that implementation complexity</text><text start="5744.4" dur="6.39">begins at the hardware level</text><text start="5747.54" dur="5.199">specifically within the CPU we need to</text><text start="5750.79" dur="5.25">have a mechanism for checking and</text><text start="5752.739" dur="5.401">responding to interrupts and this</text><text start="5756.04" dur="5.73">mechanism is implemented as part of the</text><text start="5758.14" dur="5.82">CPU&amp;#39;s fetch execute cycle in the process</text><text start="5761.77" dur="3.83">of fetch execute the CPU is going to</text><text start="5763.96" dur="5.13">fetch an instruction from memory</text><text start="5765.6" dur="6.55">increment the program counter execute</text><text start="5769.09" dur="5.94">that instruction but instead of simply</text><text start="5772.15" dur="4.71">going back to the next fetch the CPU</text><text start="5775.03" dur="4.23">actually has to have additional hardware</text><text start="5776.86" dur="2.85">to check to see if an interrupt event is</text><text start="5779.26" dur="4.14">pending</text><text start="5779.71" dur="5.79">if there is an interrupt pending then</text><text start="5783.4" dur="4.35">the CPU has to be switched to kernel</text><text start="5785.5" dur="4.52">mode if it&amp;#39;s running in user mode so the</text><text start="5787.75" dur="5.159">privilege level needs to be escalated</text><text start="5790.02" dur="6.1">save the program counter by pushing it</text><text start="5792.909" dur="5.621">onto the stack and load a program</text><text start="5796.12" dur="4.81">counter from a fixed memory location</text><text start="5798.53" dur="6.69">and that fixed memory location is called</text><text start="5800.93" dur="6.18">the </text><text start="6101.13" dur="0"></text>interrupt vector table or IVT so we</text><text start="5805.22" dur="4.8">load the program counter from the IVT</text><text start="5807.11" dur="6.27">and then the CPU goes and executes that</text><text start="5810.02" dur="5.91">new instruction the next time the fetch</text><text start="5813.38" dur="4.38">execute cycle resumes so the CPU</text><text start="5815.93" dur="4.35">actually moves from executing program</text><text start="5817.76" dur="4.38">code to executing code from the</text><text start="5820.28" dur="5.28">interrupt handler for the particular</text><text start="5822.14" dur="6.69">event if no interrupt is pending at the</text><text start="5825.56" dur="6.96">end of an execute then we simply go</text><text start="5828.83" dur="6.39">back to the next instruction fetch the</text><text start="5832.52" dur="7.2">interrupt vector table consists of an</text><text start="5835.22" dur="7.38">array of addresses of handlers each</text><text start="5839.72" dur="5.31">element in this array essentially gives</text><text start="5842.6" dur="5.67">the program counter location for the</text><text start="5845.03" dur="5.25">handler for a particular interrupt this</text><text start="5848.27" dur="4.44">handler is going to be in a subsystem of</text><text start="5850.28" dur="4.47">the kernel for a monolithic kernel or</text><text start="5852.71" dur="5.85">this handler might invoke a call to an</text><text start="5854.75" dur="6.42">external server for a microkernel in any</text><text start="5858.56" dur="5.49">case however the first handler by</text><text start="5861.17" dur="6.89">convention element 0 of the array is</text><text start="5864.05" dur="7.05">always the handler for the clock then</text><text start="5868.06" dur="7.45">handlers for different devices are in</text><text start="5871.1" dur="7.11">the array after the clock handler so the</text><text start="5875.51" dur="5.07">interrupt vector is always mapped into</text><text start="5878.21" dur="5.13">the kernel part of memory it&amp;#39;s always</text><text start="5880.58" dur="4.56">available at all times so that the</text><text start="5883.34" dur="6.54">kernel can go and look up interrupt</text><text start="5885.14" dur="7.47">information whenever necessary an</text><text start="5889.88" dur="5.39">interrupt is processed by branching the</text><text start="5892.61" dur="5.03">program counter to the interrupt handler</text><text start="5895.27" dur="4.84">executing interrupt handling code and</text><text start="5897.64" dur="4.24">then at the end of the interrupt</text><text start="5900.11" dur="3.99">handling code there will be an</text><text start="5901.88" dur="4.89">instruction to return from the interrupt</text><text start="5904.1" dur="5.61">in the Intel assembly language this is</text><text start="5906.77" dur="4.77">the IRET instruction which loads the</text><text start="5909.71" dur="4.29">process&amp;#39;s program counter back from</text><text start="5911.54" dur="5.31">memory it pops the stack to get the</text><text start="5914" dur="5.25">original program counter back and goes</text><text start="5916.85" dur="4.38">ahead and changes the CPU back to user</text><text start="5919.25" dur="5.73">mode so it removes the privilege</text><text start="5921.23" dur="6.84">escalation the interrupt handling</text><text start="5924.98" dur="6.33">mechanism is thus able to handle events</text><text start="5928.07" dur="4.02">from hardware devices without having to</text><text start="5931.31" dur="3">poll</text><text start="5932.09" dur="5.279">each device individually next I&amp;#39;ll be discussing</text><text start="5934.31" dur="6.389">interrupt controllers in particular I&amp;#39;ll</text><text start="5937.369" dur="5.821">introduce the old and new mechanisms for</text><text start="5940.699" dur="5.971">delivering interrupts from hardware</text><text start="5943.19" dur="5.13">devices to the CPU these methods include</text><text start="5946.67" dur="4.35">the original 
programmable interrupt</text><text start="5948.32" dur="4.799">controllers and the new advanced</text><text start="5951.02" dur="6.179">programmable interrupt controller with</text><text start="5953.119" dur="5.941">message signaled interrupts interrupt</text><text start="5957.199" dur="3.9">controllers provide an interface for</text><text start="5959.06" dur="5.22">hardware to signal the CPU whenever a</text><text start="5961.099" dur="6.481">device needs attention it&amp;#39;s important to</text><text start="5964.28" dur="5.399">note that this signal only includes a</text><text start="5967.58" dur="5.159">message that essentially says hey I&amp;#39;m a</text><text start="5969.679" dur="5.101">device I need attention the CPU</text><text start="5972.739" dur="4.65">historically then actually does have to</text><text start="5974.78" dur="7.14">go and poll the device to get any data</text><text start="5977.389" dur="6.21">that the device may have the older</text><text start="5981.92" dur="3.779">mechanism for performing this operation</text><text start="5983.599" dur="4.681">was called a programmable interrupt</text><text start="5985.699" dur="5.101">controller or PIC and it actually</text><text start="5988.28" dur="5.879">required dedicated lines to be added to</text><text start="5990.8" dur="6.089">the motherboard the ISA or Industry</text><text start="5994.159" dur="5.641">Standard Architecture bus which dates</text><text start="5996.889" dur="7.02">back all the way to the first PC back in</text><text start="5999.8" dur="6.68">1981 and older versions of the PCI or</text><text start="6003.909" dur="7.02">Peripheral Component Interconnect bus</text><text start="6006.48" dur="6.639">utilized this mechanism the new</text><text start="6010.929" dur="4.71">mechanism or the advanced programmable</text><text start="6013.119" dur="5.58">interrupt controller is used on PCI</text><text start="6015.639" dur="6.991">Express devices and some newer PCI</text><text start="6018.699" dur="5.181">devices now the old controller or the</text><text start="6022.63" dur="4.65">programmable interrupt controller</text><text start="6023.88" dur="6.16">actually consisted of two programmable</text><text start="6027.28" dur="5.01">interrupt controller chips that were</text><text start="6030.04" dur="4.639">attached to each other with one of the</text><text start="6032.29" dur="5.309">chips being attached to the CPU the</text><text start="6034.679" dur="6.971">so-called master chip was the one</text><text start="6037.599" dur="8.6">attached to the CPU and pin two of that</text><text start="6041.65" dur="9.299">master chip was attached to a slave chip</text><text start="6046.199" dur="7.42">the pins on the two chips together allow</text><text start="6050.949" dur="6.17">for sixteen interrupt numbers to be</text><text start="6053.619" dur="6.841">created interrupts 0 through 7</text><text start="6057.119" dur="6.901">correspond to the pins of the master</text><text start="6060.46" dur="5.48">chip and interrupts 8 through 15</text><text start="6064.02" dur="4.5">correspond to the pins</text><text start="6065.94" dur="5.43">of the slave chip now it should be noted</text><text start="6068.52" dur="6.87">that since pin two of the master chip</text><text start="6071.37" dur="6.45">handles the slave chip that the master</text><text start="6075.39" dur="5.34">programmable interrupt controller only</text><text start="6077.82" dur="6.57">supports an effective seven interrupts</text><text start="6080.73" dur="7.739">so there are only 15 usable interrupt</text><text start="6084.39" dur="7.349">or IRQ lines for devices and these</text><text start="6088.469" dur="7.411">are numbered 0 through 15 but we have to</text><text start="6091.739" dur="7.471">skip the number 2 now historically pin</text><text start="6095.88" dur="5.25">number 0 which corresponds in software</text><text start="6099.21" dur="5.7">terms to what we call interrupt request</text><text start="6101.13" dur="7.53">line or IRQ 0 was connected to the</text><text start="6104.91" dur="6.69">timer interrupt request line 1 was</text><text start="6108.66" dur="5.43">connected to the keyboard different</text><text start="6111.6" dur="4.47">ISA and PCI devices could then use the</text><text start="6114.09" dur="4.879">remainder of the master chip by</text><text start="6116.07" dur="6.45">connecting to IRQ lines 3 through 7 on</text><text start="6118.969" dur="6.851">the slave chip pin 0 which corresponds</text><text start="6122.52" dur="7.98">to IRQ 8 was connected to the real-time</text><text start="6125.82" dur="10.11">clock pin 4 corresponding to IRQ 12 was</text><text start="6130.5" dur="7.41">connected to a PS/2 mouse pin 5 or IRQ</text><text start="6135.93" dur="5.73">13 connected to the math coprocessor</text><text start="6137.91" dur="8.37">which was a separate component from the</text><text start="6141.66" dur="7.559">main CPU in earlier PCs and then pins 6</text><text start="6146.28" dur="5.73">and 7 corresponding to IRQ lines 14 and</text><text start="6149.219" dur="5.701">15 connected to the IDE controllers</text><text start="6152.01" dur="7.41">these were used for disk and eventually</text><text start="6154.92" dur="8.13">for optical devices this left pins 1</text><text start="6159.42" dur="7.049">through 3 on the slave controller or</text><text start="6163.05" dur="4.939">IRQs 9 through 11 available for hardware</text><text start="6166.469" dur="4.411">devices</text><text start="6167.989" dur="5.861">now these interrupt lines on the</text><text start="6170.88" dur="5.91">motherboard were actually circuit traces</text><text start="6173.85" dur="6.3">these were conductive paths etched into</text><text start="6176.79" dur="8.04">the motherboard that allowed interrupts</text><text start="6180.15" dur="7.92">to be received from devices there were</text><text start="6184.83" dur="6.03">15 lines available of the 16 that 
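The classic IRQ assignments just walked through can be collected into a small Python table; this is my own summary of the description above, not material from the lecture slides:

```python
# Classic PC interrupt request (IRQ) line assignments.
CLASSIC_IRQS = {
    0: "system timer",
    1: "keyboard",
    2: "cascade to slave PIC",   # consumed by the master-slave hookup
    8: "real-time clock",
    12: "PS/2 mouse",
    13: "math coprocessor",
    14: "primary IDE controller",
    15: "secondary IDE controller",
}

def usable_irqs():
    """IRQ 2 is eaten by the master-to-slave cascade, leaving 15 lines."""
    return [n for n in range(16) if n != 2]

print(len(usable_irqs()))  # → 15
print(sorted(set(range(16)) - set(CLASSIC_IRQS)))  # lines not pre-assigned above
```

The second print shows the lines left over for add-in devices, including IRQs 9 through 11 on the slave controller mentioned a moment ago.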
could</text><text start="6188.07" dur="5.16">be used by devices with lines 0 and 1</text><text start="6190.86" dur="4.98">reserved for the timer and a PS/2</text><text start="6193.23" dur="5.19">keyboard respectively actually even</text><text start="6195.84" dur="4.64">before the PS/2 reservation the original</text><text start="6198.42" dur="5.73">AT keyboard used line 1</text><text start="6200.48" dur="5.89">EISA and PCI add-in devices actually</text><text start="6204.15" dur="4.53">had to share interrupt request lines and</text><text start="6206.37" dur="4.2">this sharing could lead to hardware</text><text start="6208.68" dur="5.25">conflicts that could lock up the system</text><text start="6210.57" dur="5.79">it was thus up to the system owner to</text><text start="6213.93" dur="5.61">manage the sharing by setting little</text><text start="6216.36" dur="6.44">jumpers on the add-in cards so that the</text><text start="6219.54" dur="5.97">cards were using different IRQ lines</text><text start="6222.8" dur="4.78">there were also performance issues when</text><text start="6225.51" dur="3.69">IRQ lines were shared because the</text><text start="6227.58" dur="4.26">operating system actually had to poll</text><text start="6229.2" dur="4.41">each device sharing an IRQ to</text><text start="6231.84" dur="3.87">determine which device it was that</text><text start="6233.61" dur="4.29">raised the interrupt polling was still</text><text start="6235.71" dur="4.41">necessary in order to receive any kind</text><text start="6237.9" dur="3.57">of data from the device regardless of</text><text start="6240.12" dur="5.97">whether it was sharing an interrupt line</text><text start="6241.47" dur="6.15">or not on modern systems a completely</text><text start="6246.09" dur="6.36">different interrupt mechanism is used</text><text start="6247.62" dur="7.02">and this mechanism has a set of memory</text><text start="6252.45" dur="3.93">registers on what&amp;#39;s called an advanced</text><text 
start="6254.64" dur="4.23">programmable interrupt controller and</text><text start="6256.38" dur="5.01">this set of memory registers is</text><text start="6258.87" dur="5.19">connected to a single shared bus that</text><text start="6261.39" dur="5.55">each device on the system can use to</text><text start="6264.06" dur="4.62">raise an interrupt message by writing</text><text start="6266.94" dur="4.17">that message into one of the memory</text><text start="6268.68" dur="4.98">registers these are called message</text><text start="6271.11" dur="6.69">signaled interrupts using the MSI and</text><text start="6273.66" dur="7.59">MSI-X specifications essentially each</text><text start="6277.8" dur="6.27">device here I have a timer RTC USB host</text><text start="6281.25" dur="6.84">controller and SATA controller is attached</text><text start="6284.07" dur="7.41">to the bus and indicates its interest in</text><text start="6288.09" dur="7.05">raising an interrupt to the APIC by</text><text start="6291.48" dur="6.57">sending a message over that bus now this</text><text start="6295.14" dur="6.06">message does not contain any data it&amp;#39;s</text><text start="6298.05" dur="7.14">only a request for attention if the CPU</text><text start="6301.2" dur="6.99">has to be involved in the operation of</text><text start="6305.19" dur="5.28">sending or receiving information then</text><text start="6308.19" dur="4.61">the CPU actually has to contact the</text><text start="6310.47" dur="5.52">device in other words poll it directly</text><text start="6312.8" dur="6.4">there is a way around this called direct</text><text start="6315.99" dur="5.79">memory access or DMA transfers which are</text><text start="6319.2" dur="6.66">used extensively on PCI Express devices</text><text start="6321.78" dur="6.57">the register on the APIC stores the</text><text start="6325.86" dur="5.13">request for attention until such time as</text><text start="6328.35" dur="4.11">the operating system handles 
the</text><text start="6330.99" dur="4.47">interrupt request and then the</text><text start="6332.46" dur="5.009">message is cleared from the APIC this is</text><text start="6335.46" dur="4.95">the only interrupt mechanism that&amp;#39;s</text><text start="6337.469" dur="6.241">available on PCI Express buses there are</text><text start="6340.41" dur="6.059">no hardware interrupt lines however a</text><text start="6343.71" dur="5.13">number of motherboards still have</text><text start="6346.469" dur="6.871">interrupt lines physical interrupt lines</text><text start="6348.84" dur="6.93">and have physical PIC pins so that they</text><text start="6353.34" dur="4.59">can support legacy devices there are a</text><text start="6355.77" dur="6">number of specialty legacy devices still</text><text start="6357.93" dur="6.12">in use that need to be supported message</text><text start="6361.77" dur="4.29">signaled interrupts do solve a number of</text><text start="6364.05" dur="4.59">problems with interrupt request sharing</text><text start="6366.06" dur="6.14">the original specification allows each</text><text start="6368.64" dur="7.44">device to use any one of 32 IRQ lines</text><text start="6372.2" dur="8.2">the MSI-X specification will allow each</text><text start="6376.08" dur="6.389">device to use up to 2048 virtual lines</text><text start="6380.4" dur="5.37">virtual interrupt request wires</text><text start="6382.469" dur="6.241">essentially and this allows for less</text><text start="6385.77" dur="5.969">contention and reduces the need to share</text><text start="6388.71" dur="5.7">interrupt request numbers per device thus</text><text start="6391.739" dur="4.891">reducing the amount of time necessary for</text><text start="6394.41" dur="6.03">the CPU to determine which device wanted</text><text start="6396.63" dur="5.4">attention so the main thing to take away</text><text start="6400.44" dur="3.779">from this is that the interrupt</text><text start="6402.03"
dur="5.04">controller and the interrupt request</text><text start="6404.219" dur="4.531">mechanism only allows the device to</text><text start="6407.07" dur="4.95">raise a signal that says it wants</text><text start="6408.75" dur="5.76">attention it&amp;#39;s up to the CPU or on</text><text start="6412.02" dur="4.65">certain buses up to the device and the</text><text start="6414.51" dur="5.82">memory controller to get the information</text><text start="6416.67" dur="5.28">out of that device and into memory</text><text start="6420.33" dur="4.68">let&amp;#39;s now look at interrupt handling at the hardware level</text><text start="6421.95" dur="5.94">and then move on to features provided by</text><text start="6425.01" dur="4.89">the CPU and finally features of the</text><text start="6427.89" dur="6.15">operating system for handling interrupts</text><text start="6429.9" dur="6.18">at the hardware level devices are</text><text start="6434.04" dur="5.22">connected either via traces on the</text><text start="6436.08" dur="5.58">motherboard or via a shared messaging</text><text start="6439.26" dur="6.18">bus to the advanced programmable</text><text start="6441.66" dur="5.55">interrupt controller the CPU checks for</text><text start="6445.44" dur="4.38">hardware interrupt signals from this</text><text start="6447.21" dur="5.49">controller after each user mode</text><text start="6449.82" dur="5.85">instruction is processed so after each</text><text start="6452.7" dur="6.06">instruction is executed running some</text><text start="6455.67" dur="4.71">particular program on the system the CPU</text><text start="6458.76" dur="3.75">actually checks to see if there are any</text><text start="6460.38" dur="4.969">interrupts that need to be processed if</text><text start="6462.51" dur="5.089">an interrupt signal is present</text><text start="6465.349" dur="4.821">then a kernel routine is called by the</text><text start="6467.599" dur="5.881">CPU in order to handle this interrupt</text><text start="6470.17"
dur="5.549">the interrupt dispatch routine if it&amp;#39;s</text><text start="6473.48" dur="5.28">not implemented directly in hardware is</text><text start="6475.719" dur="5.321">actually a compact and fast routine that</text><text start="6478.76" dur="4.439">could be implemented in the kernel often</text><text start="6481.04" dur="5.909">coded in assembly language it has to be</text><text start="6483.199" dur="7.081">that fast the specific interrupt handler</text><text start="6486.949" dur="5.431">is always a kernel routine or in the</text><text start="6490.28" dur="4.919">case of a microkernel an external</text><text start="6492.38" dur="4.68">server routine and this specific handler</text><text start="6495.199" dur="4.851">depends upon the type of interrupt</text><text start="6497.06" dur="6.72">received these are typically coded in C</text><text start="6500.05" dur="5.589">so once again the fetch-execute cycle we</text><text start="6503.78" dur="6">check for an interrupt pending after</text><text start="6505.639" dur="6.301">each instruction is executed if there is</text><text start="6509.78" dur="5.129">an interrupt pending we escalate</text><text start="6511.94" dur="4.77">privilege to kernel mode push the</text><text start="6514.909" dur="3.901">program counter onto the stack in other</text><text start="6516.71" dur="4.23">words save it so we can resume from that</text><text start="6518.81" dur="4.98">point in whatever program we&amp;#39;re</text><text start="6520.94" dur="5.219">interrupting and then go and handle the</text><text start="6523.79" dur="4.559">interrupt we do this by loading the</text><text start="6526.159" dur="5.281">program counter the new program counter</text><text start="6528.349" dur="5.191">that is from a fixed memory location</text><text start="6531.44" dur="6.12">provided to us by the interrupt vector</text><text start="6533.54" dur="5.88">table the interrupt vector table gives</text><text start="6537.56" dur="4.409">us the address of all 
the different</text><text start="6539.42" dur="5.34">interrupt handlers we need a separate</text><text start="6541.969" dur="4.681">handler for each type of interrupt the</text><text start="6544.76" dur="5.04">clock requires different logic from the</text><text start="6546.65" dur="5.279">keyboard for example other devices such</text><text start="6549.8" dur="4.62">as say a webcam attached to your</text><text start="6551.929" dur="5.131">computer needs different logic in order</text><text start="6554.42" dur="4.77">to process messages from it so we have</text><text start="6557.06" dur="5.909">different interrupt handlers for each of</text><text start="6559.19" dur="6.389">these different devices the table simply</text><text start="6562.969" dur="5.701">stores the addresses of each handler and</text><text start="6565.579" dur="5.16">in our monolithic kernel case the</text><text start="6568.67" dur="5.84">handlers are actually part of the kernel</text><text start="6570.739" dur="7.38">and are mapped into kernel memory space</text><text start="6574.51" dur="6.43">the interrupt vector table consists of a</text><text start="6578.119" dur="5.941">list of each of these addresses for</text><text start="6580.94" dur="6.57">kernel handlers and conceptually is</text><text start="6584.06" dur="5.369">mapped into both kernel memory and user</text><text start="6587.51" dur="4.919">memory so that it can be accessed</text><text start="6589.429" dur="6.241">quickly and historically they started at</text><text start="6592.429" dur="5.971">address 0 however the mappings are</text><text start="6595.67" dur="6.569">different depending on the architecture</text><text start="6598.4" dur="5.91">on Intel-based systems we have something</text><text start="6602.239" dur="4.141">called the interrupt descriptor table</text><text start="6604.31" dur="4.95">and the IDT</text><text start="6606.38" dur="5.22">provides special instructions and data</text><text start="6609.26" dur="5.13">structures 
that are actually managed by</text><text start="6611.6" dur="4.74">the CPU itself so the interrupt handling</text><text start="6614.39" dur="3.72">can be as fast as possible and</text><text start="6616.34" dur="4.62">protection rings can be changed</text><text start="6618.11" dur="7.02">automatically the IDT is simply a</text><text start="6620.96" dur="6.69">reserved block of RAM used by the CPU to</text><text start="6625.13" dur="6.9">jump quickly to a specific interrupt</text><text start="6627.65" dur="7.47">handler this IDT is mapped into kernel</text><text start="6632.03" dur="5.52">space which was originally beginning at</text><text start="6635.12" dur="4.65">address 0 but this mapping is actually</text><text start="6637.55" dur="3.96">flexible with modern CPUs and can be</text><text start="6639.77" dur="5.55">mapped into other parts of the memory</text><text start="6641.51" dur="6.12">space the first 32 entries of the IDT</text><text start="6645.32" dur="4.71">are actually not used for interrupts per</text><text start="6647.63" dur="5.04">se but they&amp;#39;re actually used for CPU</text><text start="6650.03" dur="5.37">fault handlers and then the interrupt</text><text start="6652.67" dur="7.549">vector table part of the data structure</text><text start="6655.4" dur="7.47">begins after the cpu fault handler table</text><text start="6660.219" dur="5.171">when we actually go to handle interrupts</text><text start="6662.87" dur="4.92">the handling occurs in the kernel and</text><text start="6665.39" dur="4.65">this is done with two levels of</text><text start="6667.79" dur="4.55">interrupt handling the fast interrupt</text><text start="6670.04" dur="5.31">handler and the slow interrupt handler</text><text start="6672.34" dur="5.95">the fast interrupt handler is the piece</text><text start="6675.35" dur="4.5">of code that&amp;#39;s invoked directly from the</text><text start="6678.29" dur="3.449">interrupt vector table whenever an</text><text start="6679.85" 
dur="4.619">interrupt occurs this is the piece of</text><text start="6681.739" dur="6.301">code that the CPU is just going to jump</text><text start="6684.469" dur="5.73">to when an interrupt occurs fast</text><text start="6688.04" dur="3.81">handlers execute in real-time and</text><text start="6690.199" dur="3.811">they&amp;#39;re called fast interrupt handlers</text><text start="6691.85" dur="3.75">because they need to be fast the</text><text start="6694.01" dur="4.85">execution of one of these interrupt</text><text start="6695.6" dur="5.91">handlers needs to be short if any</text><text start="6698.86" dur="4.359">large-scale data transfer needs to occur</text><text start="6701.51" dur="4.83">say we need to get a lot of data from</text><text start="6703.219" dur="5.101">the device all at once this operation is</text><text start="6706.34" dur="4.14">handled by having the fast interrupt</text><text start="6708.32" dur="4.29">handler enqueue something called a</text><text start="6710.48" dur="5.13">task into the operating system&amp;#39;s task</text><text start="6712.61" dur="4.14">queue whenever all the fast interrupt</text><text start="6715.61" dur="3.629">handlers for all the different</text><text start="6716.75" dur="4.409">interrupts are done executing the CPU</text><text start="6719.239" dur="5.091">will go and check the task queue and</text><text start="6721.159" dur="6.121">execute any tasks that are present there</text><text start="6724.33" dur="4.69">the part of interrupt handling that goes</text><text start="6727.28" dur="3.75">into the task queue is called the slow</text><text start="6729.02" dur="3.13">interrupt handler and it&amp;#39;s called this</text><text start="6731.03" dur="3.01">because it&amp;#39;s not</text><text start="6732.15" dur="5.49">executed immediately and it can be</text><text start="6734.04" dur="5.16">interrupted by other devices so what</text><text start="6737.64" dur="3.51">happens if an interrupt request is</text><text 
start="6739.2" dur="3.95">received while we&amp;#39;re still processing</text><text start="6741.15" dur="4.29">an interrupt from a previous request</text><text start="6743.15" dur="4.51">well there&amp;#39;s no problem if we&amp;#39;re in the</text><text start="6745.44" dur="4.89">slow interrupt handler because this</text><text start="6747.66" dur="4.74">processing is done in such a way that we</text><text start="6750.33" dur="4.59">can stop this processing and handle the</text><text start="6752.4" dur="5.01">new interrupt if necessary what happens</text><text start="6754.92" dur="6.57">if we are still running a fast interrupt</text><text start="6757.41" dur="6.06">handler well the new interrupt handler</text><text start="6761.49" dur="3.66">could be executed before the first</text><text start="6763.47" dur="3.29">interrupt handler is finished and this</text><text start="6765.15" dur="3.84">could cause some major problems</text><text start="6766.76" dur="4.39">especially if we get a new interrupt</text><text start="6768.99" dur="3.75">from a device that&amp;#39;s sharing the same</text><text start="6771.15" dur="5.49">interrupt line as the one that we&amp;#39;re</text><text start="6772.74" dur="6.42">handling so what we do is we make fast</text><text start="6776.64" dur="6.09">interrupt handling atomic that is we</text><text start="6779.16" dur="5.46">make it uninterruptible on a single</text><text start="6782.73" dur="4.41">core system this is as simple as</text><text start="6784.62" dur="4.19">disabling interrupts as long as we&amp;#39;re</text><text start="6787.14" dur="3.81">running a fast interrupt handler on</text><text start="6788.81" dur="4.12">multi-core systems there are actually</text><text start="6790.95" dur="4.89">special machine instructions to</text><text start="6792.93" dur="7.08">facilitate atomic operations that are</text><text start="6795.84" dur="7.23">pegged to one CPU core these atomic</text><text start="6800.01" dur="5.16">interrupt 
handling operations will run</text><text start="6803.07" dur="5.52">to completion without interruption by</text><text start="6805.17" dur="6.69">any other interrupt or any other request</text><text start="6808.59" dur="5.19">on the system thus the longer an</text><text start="6811.86" dur="4.05">interrupt handler takes to run the</text><text start="6813.78" dur="6.09">longer the system will be unresponsive</text><text start="6815.91" dur="5.7">to any new interrupts so what happens if</text><text start="6819.87" dur="5.13">a fast interrupt handler is coded in</text><text start="6821.61" dur="5.94">such a way that it takes too long other</text><text start="6825" dur="4.19">devices might be requesting attention at</text><text start="6827.55" dur="6.12">the same time that this long-running</text><text start="6829.19" dur="6.67">interrupt handler is executing worse the</text><text start="6833.67" dur="4.17">same device that generated the original</text><text start="6835.86" dur="4.47">interrupt might now have more data to</text><text start="6837.84" dur="4.26">deliver to the OS before all the</text><text start="6840.33" dur="4.13">previous data is completely received</text><text start="6842.1" dur="5.91">this could cause hardware failures</text><text start="6844.46" dur="5.62">buffer overflows dropped data dropped</text><text start="6848.01" dur="5.67">messages all kinds of issues at the</text><text start="6850.08" dur="5.34">hardware level however inside the</text><text start="6853.68" dur="3.54">operating system this could lead to</text><text start="6855.42" dur="5.37">something called an interrupt storm</text><text start="6857.22" dur="5.55">which is really bad an interrupt storm</text><text start="6860.79" dur="4.2">occurs whenever another interrupt is</text><text start="6862.77" dur="3.06">always waiting to be processed whenever</text><text start="6864.99" dur="3.27">a fast interrupt</text><text start="6865.83" dur="3.81">handler finishes its execution that</text><text 
start="6868.26" dur="3.9">could occur either because the fast</text><text start="6869.64" dur="4.17">interrupt handler is too long and needs</text><text start="6872.16" dur="5.64">to be split into a fast and a slow</text><text start="6873.81" dur="6.03">handler this can also occur if hardware</text><text start="6877.8" dur="6.06">has certain bugs that cause it to raise</text><text start="6879.84" dur="5.58">spurious interrupts if the operating</text><text start="6883.86" dur="3.72">system is perpetually handling</text><text start="6885.42" dur="2.64">interrupts it never runs any application</text><text start="6887.58" dur="4.2">code</text><text start="6888.06" dur="7.08">thus it never appears to respond to any</text><text start="6891.78" dur="5.28">user inputs the result of this situation</text><text start="6895.14" dur="3.57">is something called a livelock the</text><text start="6897.06" dur="3.87">system is still running it&amp;#39;s still</text><text start="6898.71" dur="5.04">processing all these interrupts however</text><text start="6900.93" dur="4.68">it&amp;#39;s not doing any useful work thus to</text><text start="6903.75" dur="4.86">the user the system appears to be frozen</text><text start="6905.61" dur="5.46">and when an interrupt storm occurs</text><text start="6908.61" dur="5.64">this livelock situation results the</text><text start="6911.07" dur="6.32">typical way out of this problem involves</text><text start="6914.25" dur="5.67">judicious use of the power button so</text><text start="6917.39" dur="4.3">interrupt handling is an important</text><text start="6919.92" dur="4.23">concept in order to support multi</text><text start="6921.69" dur="4.05">programming systems and interrupt</text><text start="6924.15" dur="4.56">handling when these interrupt messages</text><text start="6925.74" dur="5.25">come through from hardware is divided</text><text start="6928.71" dur="5.16">into two types of handler so that we</text><text start="6930.99"
dur="4.86">don&amp;#39;t get the interrupt storm the fast</text><text start="6933.87" dur="3.99">interrupt handler executes atomically</text><text start="6935.85" dur="5.64">without being interrupted by anything</text><text start="6937.86" dur="5.85">else but it must be fast we must enter that</text><text start="6941.49" dur="4.26">interrupt handler do some very short</text><text start="6943.71" dur="4.92">operations and immediately exit that</text><text start="6945.75" dur="4.92">handler if any long-running operations</text><text start="6948.63" dur="4.14">need to occur as a result of a device</text><text start="6950.67" dur="4.11">interrupt request we need to handle</text><text start="6952.77" dur="4.94">those operations inside the slow</text><text start="6954.78" dur="2.93">interrupt handler</text><text start="6959.199" dur="6.811">now let&amp;#39;s turn to memory resources and how the system and its</text><text start="6961.63" dur="7.5">processes view random access memory in</text><text start="6966.01" dur="5.97">order to run any process or instance of</text><text start="6969.13" dur="5.58">a program on a computer system we need</text><text start="6971.98" dur="6">to provide two critical resources access</text><text start="6974.71" dur="5.91">to the CPU and access to random access</text><text start="6977.98" dur="5.88">memory or RAM for storing and</text><text start="6980.62" dur="5.13">manipulating data RAM is a type of</text><text start="6983.86" dur="4.2">dedicated hardware memory that is</text><text start="6985.75" dur="4.41">attached to the motherboard it is</text><text start="6988.06" dur="5.52">separate from persistent storage or hard</text><text start="6990.16" dur="5.34">disk space RAM is also volatile which</text><text start="6993.58" dur="3.75">means that it loses its contents</text><text start="6995.5" dur="4.44">whenever power is interrupted to the RAM</text><text start="6997.33" dur="6.63">modules including whenever the computer</text><text start="6999.94" dur="5.91">is turned off 
at a low level the</text><text start="7003.96" dur="4.02">computer hardware presents memory to the</text><text start="7005.85" dur="5.28">operating system as one large block of</text><text start="7007.98" dur="5.34">space this space is divided into bytes</text><text start="7011.13" dur="4.97">and each byte in memory has a unique</text><text start="7013.32" dur="5.31">address that can be used to access it a</text><text start="7016.1" dur="5.32">32-bit architecture provides enough of</text><text start="7018.63" dur="6.18">these byte addresses to utilize up to 4</text><text start="7021.42" dur="5.31">gigabytes of RAM above that amount there</text><text start="7024.81" dur="4.08">is not enough space in a 32-bit number</text><text start="7026.73" dur="4.83">to store the address of any memory</text><text start="7028.89" dur="5.82">locations in excess of 4 gigabytes</text><text start="7031.56" dur="5.639">thus a 64-bit architecture is required</text><text start="7034.71" dur="7.23">for systems with more than 4 gigabytes</text><text start="7037.199" dur="6.721">of RAM each process or running instance</text><text start="7041.94" dur="4.98">of a program on a system uses a</text><text start="7043.92" dur="5.91">different view of memory process memory</text><text start="7046.92" dur="6.39">is divided into several pieces the stack</text><text start="7049.83" dur="6.389">the heap global variables and the text</text><text start="7053.31" dur="4.92">segment the stack is used to store</text><text start="7056.219" dur="3.661">automatic variables or variables that</text><text start="7058.23" dur="4.98">are local to functions in the C</text><text start="7059.88" dur="5.76">programming language space on the heap</text><text start="7063.21" dur="5.219">is manually allocated and deallocated by</text><text start="7065.64" dur="6.87">the programmer when writing C code using</text><text start="7068.429" dur="6.151">the functions malloc and free free space</text><text start="7072.51"
dur="4.2">for extra data is located between the</text><text start="7074.58" dur="4.86">stack and the heap and the stack and the</text><text start="7076.71" dur="4.41">heap grow toward one another global</text><text start="7079.44" dur="3.239">variables are provided with their own</text><text start="7081.12" dur="4.59">section of memory which is allocated</text><text start="7082.679" dur="5.101">between the heap and text segment the</text><text start="7085.71" dur="4.739">text segment is used to store program</text><text start="7087.78" dur="3.93">code this segment of memory is read-only</text><text start="7090.449" dur="5.311">and cannot be</text><text start="7091.71" dur="6.54">changed as I mentioned in the previous</text><text start="7095.76" dur="4.92">slide automatic variables are placed</text><text start="7098.25" dur="4.05">onto the stack this placement is</text><text start="7100.68" dur="4.83">performed by the compiler when the</text><text start="7102.3" dur="4.89">program is built placement of data onto</text><text start="7105.51" dur="3.63">the heap is historically performed by</text><text start="7107.19" dur="4.049">the programmer although many of these</text><text start="7109.14" dur="5.09">operations are now automated in modern</text><text start="7111.239" dur="5.281">dynamic languages in the C and C++</text><text start="7114.23" dur="4.87">languages the programmer must explicitly</text><text start="7116.52" dur="5.88">request and release or allocate and</text><text start="7119.1" dur="5.01">deallocate heap space in C these</text><text start="7122.4" dur="4.08">operations are performed using the</text><text start="7124.11" dur="5.39">malloc and free functions while the new</text><text start="7126.48" dur="5.489">and delete operators are used in C++ in</text><text start="7129.5" dur="4.78">Java the programmer must explicitly</text><text start="7131.969" dur="3.031">allocate space on the heap using the new</text><text start="7134.28" 
dur="3.87">keyword</text><text start="7135" dur="5.4">however the Java Runtime automatically</text><text start="7138.15" dur="4.65">determines which heap allocations are no</text><text start="7140.4" dur="4.86">longer in use and frees those locations</text><text start="7142.8" dur="6.12">automatically this process is called</text><text start="7145.26" dur="5.61">garbage collection Python provides both</text><text start="7148.92" dur="4.35">automatic allocation and garbage</text><text start="7150.87" dur="4.23">collection whenever a data structure</text><text start="7153.27" dur="3.74">requires heap space the Python</text><text start="7155.1" dur="4.23">interpreter allocates it automatically</text><text start="7157.01" dur="4.24">once the data structure is no longer</text><text start="7159.33" dur="3.54">used by the program the garbage</text><text start="7161.25" dur="6.75">collector deallocates it without</text><text start="7162.87" dur="6.78">programmer intervention use of heap</text><text start="7168" dur="3.27">memory in processes presents a</text><text start="7169.65" dur="3.33">challenge to the operating system</text><text start="7171.27" dur="3.77">because these allocations and</text><text start="7172.98" dur="4.65">deallocations are not known in advance</text><text start="7175.04" dur="4.9">the compiler is able to track and report</text><text start="7177.63" dur="3.75">the amount of stack space needed since</text><text start="7179.94" dur="3.18">the number of local variables in a</text><text start="7181.38" dur="4.71">function never changes in a language</text><text start="7183.12" dur="4.56">like C however the number of heap</text><text start="7186.09" dur="4.5">allocations may vary from program</text><text start="7187.68" dur="4.83">execution to program execution making it</text><text start="7190.59" dur="4.61">impossible to know exactly how much</text><text start="7192.51" dur="6.03">space should be allocated in advance</text><text start="7195.2"
dur="5.47">worse heap memory allocations tend to be</text><text start="7198.54" dur="3.99">small and frequent so the process of</text><text start="7200.67" dur="5.37">allocating memory from this section</text><text start="7202.53" dur="5.67">needs to be fast this speed and dynamic</text><text start="7206.04" dur="3.99">availability needs to be maintained</text><text start="7208.2" dur="3.9">while the operating system shares the</text><text start="7210.03" dur="5.31">computer&amp;#39;s RAM among multiple processes</text><text start="7212.1" dur="5.369">at the same time in addition to sharing</text><text start="7215.34" dur="5.01">memory among processes the kernel has</text><text start="7217.469" dur="4.5">memory requirements of its own aside</text><text start="7220.35" dur="4.05">from the kernel code and its own</text><text start="7221.969" dur="3.161">automatic variables the kernel is filled</text><text start="7224.4" dur="2.83">with data</text><text start="7225.13" dur="4.65">structures that dynamically grow and shrink during</text><text start="7227.23" dur="4.5">the course of system operation these</text><text start="7229.78" dur="4.58">data structures are numerous and include</text><text start="7231.73" dur="5.82">process control blocks the ready list</text><text start="7234.36" dur="6.43">scheduling queues device access tables and</text><text start="7237.55" dur="5.609">many other types of structure unlike</text><text start="7240.79" dur="4.05">in some processes however all memory</text><text start="7243.159" dur="3.631">allocation and de-allocation in the</text><text start="7244.84" dur="4.25">kernel is performed by the kernel</text><text start="7246.79" dur="4.71">programmers in the Linux kernel</text><text start="7249.09" dur="7.06">programmers can use one of a number of</text><text start="7251.5" dur="10.08">functions including kmalloc kzalloc</text><text start="7256.15" dur="7.44">vmalloc kfree and vfree the first issue</text><text start="7261.58" dur="4.05">that 
must be solved by the kernel is</text><text start="7263.59" dur="4.859">sharing memory between itself and a user</text><text start="7265.63" dur="4.47">space process this sharing is</text><text start="7268.449" dur="3.781">accomplished first by dividing the</text><text start="7270.1" dur="5.07">memory into two regions kernel memory</text><text start="7272.23" dur="5.46">and user memory the kernel keeps its</text><text start="7275.17" dur="4.77">data structures program code global</text><text start="7277.69" dur="4.71">variables and automatic variables in</text><text start="7279.94" dur="5.94">kernel memory isolating these items from</text><text start="7282.4" dur="5.19">the running process process memory is</text><text start="7285.88" dur="3.569">placed in a separate region from the</text><text start="7287.59" dur="4.92">kernel but the kernel is always</text><text start="7289.449" dur="4.741">available in memory this mapping is</text><text start="7292.51" dur="4.35">necessary to maintain performance</text><text start="7294.19" dur="4.32">whenever an interrupt occurs the process</text><text start="7296.86" dur="3.9">makes a system call to the kernel or</text><text start="7298.51" dur="6.72">the process experiences a fault that</text><text start="7300.76" dur="6.209">must be handled by the kernel so how do</text><text start="7305.23" dur="4.44">we go about running multiple processes</text><text start="7306.969" dur="4.681">at the same time well the first way we</text><text start="7309.67" dur="4.319">can consider which is used in some types</text><text start="7311.65" dur="4.38">of embedded systems is to divide the</text><text start="7313.989" dur="4.861">memory into fixed sized chunks called</text><text start="7316.03" dur="4.919">partitions a memory partition is a</text><text start="7318.85" dur="5.16">single region of RAM that is provided to</text><text start="7320.949" dur="5.19">a single process both the program code</text><text start="7324.01" dur="4.35">and 
data associated with each process</text><text start="7326.139" dur="6.33">must fit within this pre allocated space</text><text start="7328.36" dur="7.2">if a process grows too large it will run</text><text start="7332.469" dur="5.101">out of memory and crash furthermore the</text><text start="7335.56" dur="4.05">maximum number of concurrent processes</text><text start="7337.57" dur="4.2">called the degree of multi-programming</text><text start="7339.61" dur="5.279">is limited by the number of partitions</text><text start="7341.77" dur="5.1">available once all memory partitions are</text><text start="7344.889" dur="5.52">used the system cannot start any new</text><text start="7346.87" dur="5.64">processes this situation is exacerbated</text><text start="7350.409" dur="3.921">by the fact that small processes may not</text><text start="7352.51" dur="5.66">be using their entire partitions</text><text start="7354.33" dur="3.84">resulting in wasted memory</text><text start="7358.63" dur="4.2">in this lecture I will introduce dynamic</text><text start="7360.85" dur="4.14">memory allocation at the system level</text><text start="7362.83" dur="3.72">this method of memory allocation</text><text start="7364.99" dur="4.14">improves the degree of multiprogramming</text><text start="7366.55" dur="5.13">that a system can provide by allocating</text><text start="7369.13" dur="6.5">memory to processes as needed instead of</text><text start="7371.68" dur="3.95">ahead of time in fixed size chunks</text><text start="7375.66" dur="5.02">fixed memory partitioning is fast and</text><text start="7378.49" dur="3.93">efficient in terms of overhead but it</text><text start="7380.68" dur="3.48">wastes space and limits the number of</text><text start="7382.42" dur="3.8">concurrent processes to the number of</text><text start="7384.16" dur="4.53">partitions that will fit into RAM</text><text start="7386.22" dur="4.6">dynamic memory allocation resolves this</text><text start="7388.69" 
dur="3.3">issue by allocating memory to processes</text><text start="7390.82" dur="3.63">as it is needed</text><text start="7391.99" dur="4.11">this mechanism increases the degree of</text><text start="7394.45" dur="3.72">multi-programming that the kernel can</text><text start="7396.1" dur="5.07">support but this improvement comes at</text><text start="7398.17" dur="4.86">a cost of greater complexity in addition</text><text start="7401.17" dur="3.66">there are trade-offs present in dynamic</text><text start="7403.03" dur="3.96">memory allocation that directly affect</text><text start="7404.83" dur="4.74">the performance of the system these</text><text start="7406.99" dur="5.04">issues include efficiency methods for</text><text start="7409.57" dur="3.75">tracking free space algorithms for</text><text start="7412.03" dur="6.33">determining where to make the next</text><text start="7413.32" dur="6.63">allocation and memory fragmentation the</text><text start="7418.36" dur="4.32">primary issue with dynamic memory</text><text start="7419.95" dur="5.01">allocation is fragmentation during</text><text start="7422.68" dur="5.82">execution processes allocate and free</text><text start="7424.96" dur="5.88">memory in chunks of varying sizes over</text><text start="7428.5" dur="4.8">time regions of free space within memory</text><text start="7430.84" dur="4.47">become non contiguous with sections of</text><text start="7433.3" dur="4.71">allocated memory in between sections of</text><text start="7435.31" dur="4.8">available memory this fragmentation</text><text start="7438.01" dur="4.59">could lead to serious performance issues</text><text start="7440.11" dur="4.86">since program execution speed will drop</text><text start="7442.6" dur="4.14">dramatically if every data structure</text><text start="7444.97" dur="5.04">must be implemented as a linked list of</text><text start="7446.74" dur="4.92">small pieces furthermore algorithms that</text><text start="7450.01" 
dur="3.72">try to perform on-the-fly memory</text><text start="7451.66" dur="4.89">defragmentation are extremely complex to</text><text start="7453.73" dur="7.11">implement and would also severely impact</text><text start="7456.55" dur="5.91">system performance since it is</text><text start="7460.84" dur="3.87">impractical to avoid or fix</text><text start="7462.46" dur="3.96">fragmentation memory fragments are a</text><text start="7464.71" dur="4.62">significant concern when space is</text><text start="7466.42" dur="4.41">dynamically allocated small fragments of</text><text start="7469.33" dur="4.53">memory are useless to programs</text><text start="7470.83" dur="4.74">particularly C and C++ programs which</text><text start="7473.86" dur="4.07">require structures and objects to be</text><text start="7475.57" dur="4.74">allocated in contiguous memory regions</text><text start="7477.93" dur="4.36">these structures are likely to be in</text><text start="7480.31" dur="3.89">various sizes that are not efficient to</text><text start="7482.29" dur="4.56">utilize for memory management purposes</text><text start="7484.2" dur="4.15">so the system will allocate a few more</text><text start="7486.85" dur="3.78">bytes than the structures actually</text><text start="7488.35" dur="4.079">require in order to improve efficiency</text><text start="7490.63" dur="4.2">of the memory tracking systems</text><text start="7492.429" dur="4.261">furthermore structures and objects will</text><text start="7494.83" dur="4.05">be allocated and freed numerous times</text><text start="7496.69" dur="5.06">during program execution further</text><text start="7498.88" dur="5.069">fragmenting the RAM over time</text><text start="7501.75" dur="4.51">fragmentation can waste large portions</text><text start="7503.949" dur="3.931">of the system memory limiting the degree</text><text start="7506.26" dur="5.66">of multi programming by making it</text><text start="7507.88" dur="6.41">impossible for new 
processes to start</text><text start="7511.92" dur="5.68">there are two types of fragmentation</text><text start="7514.29" dur="5.38">external and internal external</text><text start="7517.6" dur="5.19">fragmentation occurs when free space in</text><text start="7519.67" dur="5.16">memory is broken into small pieces over</text><text start="7522.79" dur="3.9">time as blocks of memory are allocated</text><text start="7524.83" dur="4.4">and deallocated this type of</text><text start="7526.69" dur="5.16">fragmentation tends to become worse</text><text start="7529.23" dur="4.9">external fragmentation is so bad with</text><text start="7531.85" dur="4.59">some allocation algorithms that for</text><text start="7534.13" dur="5.31">every n blocks of memory that are</text><text start="7536.44" dur="5.15">allocated to a process the system wastes</text><text start="7539.44" dur="4.95">another half n blocks in fragments</text><text start="7541.59" dur="7.69">with these algorithms up to a third of</text><text start="7544.39" dur="6.99">system memory becomes unusable the other</text><text start="7549.28" dur="4.439">type of fragmentation is called internal</text><text start="7551.38" dur="4.5">fragmentation internal fragmentation</text><text start="7553.719" dur="4.351">occurs because memory is normally</text><text start="7555.88" dur="5.1">allocated to processes in some fixed</text><text start="7558.07" dur="4.89">block size the block size is normally a</text><text start="7560.98" dur="3.33">power of two in order to make the kernel</text><text start="7562.96" dur="4.77">memory tracking structures more</text><text start="7564.31" dur="5.46">efficient processes tend to request</text><text start="7567.73" dur="5.25">pieces of memory in sizes that do not</text><text start="7569.77" dur="5.37">fit neatly into these blocks thus it is</text><text start="7572.98" dur="5.19">often the case that parts of a block are</text><text start="7575.14" dur="5.61">wasted as a result for small 
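The internal fragmentation just described, a request rounded up to a whole number of fixed size blocks, can be sketched in Python; the 512 byte block size and the request sizes are assumed values for illustration, not from the lecture.

```python
# Sketch of internal fragmentation caused by fixed block sizes.
# BLOCK_SIZE and the request values below are assumed for illustration.
BLOCK_SIZE = 512  # power-of-two block size in bytes

def blocks_needed(request):
    # Round the request up to a whole number of blocks (ceiling division).
    return -(-request // BLOCK_SIZE)

def internal_fragmentation(request):
    # Bytes handed to the process beyond what it actually asked for.
    return blocks_needed(request) * BLOCK_SIZE - request

print(internal_fragmentation(700))   # 2 blocks = 1024 bytes, 324 wasted
print(internal_fragmentation(1024))  # exact multiple of the block size, 0 wasted
```

A request that is an exact multiple of the block size wastes nothing, which matches the observation in the lecture that rounding only costs memory when requests do not fit the blocks neatly.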
memory</text><text start="7578.17" dur="5.15">requests large portions of the block may</text><text start="7580.75" dur="2.57">be wasted</text><text start="7583.51" dur="4.86">when a process requests heap memory the</text><text start="7586.39" dur="4.01">kernel must find a chunk of free space</text><text start="7588.37" dur="4.71">large enough to accommodate the request</text><text start="7590.4" dur="5.38">this chunk cannot be smaller than the</text><text start="7593.08" dur="5.3">requested size since the process expects</text><text start="7595.78" dur="4.95">to use all the space it is requesting</text><text start="7598.38" dur="4.6">since the memory is divided into blocks</text><text start="7600.73" dur="3.99">for efficiency the chunk of memory</text><text start="7602.98" dur="4.32">returned to the process is normally</text><text start="7604.72" dur="4.44">larger than the amount requested unless</text><text start="7607.3" dur="4.189">the process happens to request a whole</text><text start="7609.16" dur="5.16">number multiple of the block size I&amp;#39;m</text><text start="7611.489" dur="4.891">now going to introduce four classical</text><text start="7614.32" dur="5.85">algorithms for dynamic memory allocation</text><text start="7616.38" dur="6">best fit worst fit first fit and next</text><text start="7620.17" dur="2.21">fit</text><text start="7622.559" dur="4.14">the first classic algorithm for dynamic</text><text start="7625.079" dur="4.41">memory allocation is the best fit</text><text start="7626.699" dur="4.5">algorithm in this algorithm the kernel</text><text start="7629.489" dur="3.271">searches for the smallest chunk of free</text><text start="7631.199" dur="4.65">space that is big enough to accommodate</text><text start="7632.76" dur="5.37">the memory request although best fit</text><text start="7635.849" dur="5.04">minimizes internal fragmentation by</text><text start="7638.13" dur="5.21">avoiding over allocation external</text><text start="7640.889" 
dur="4.77">fragmentation is a major problem an</text><text start="7643.34" dur="4.06">attempt to reduce the external</text><text start="7645.659" dur="3.451">fragmentation of the best fit algorithm</text><text start="7647.4" dur="4.46">is observed in the somewhat</text><text start="7649.11" dur="5.489">counterintuitive worst fit algorithm</text><text start="7651.86" dur="4.39">using the worst fit algorithm the kernel</text><text start="7654.599" dur="4.411">finds and allocates the largest</text><text start="7656.25" dur="4.349">available chunk of free space provided</text><text start="7659.01" dur="4.47">it is large enough to accommodate the</text><text start="7660.599" dur="5.1">request in theory this allocation</text><text start="7663.48" dur="4.53">strategy leaves larger and thus</text><text start="7665.699" dur="5.4">potentially more usable chunks of free</text><text start="7668.01" dur="5.52">space available in practice however this</text><text start="7671.099" dur="5.761">algorithm still fragments badly both</text><text start="7673.53" dur="5.37">internally and externally aside from the</text><text start="7676.86" dur="4.14">fragmentation both of these algorithms</text><text start="7678.9" dur="4.049">are impractical in actual kernel</text><text start="7681" dur="4.05">implementations because they must</text><text start="7682.949" dur="4.681">perform a search of the entire free list</text><text start="7685.05" dur="6.689">to find the smallest or largest chunk of</text><text start="7687.63" dur="6.12">memory the need to search the entire free list</text><text start="7691.739" dur="4.681">is eliminated using the first fit or</text><text start="7693.75" dur="4.53">next fit algorithm in the first fit</text><text start="7696.42" dur="3.75">algorithm the kernel simply finds and</text><text start="7698.28" dur="4.1">allocates the first chunk of memory that</text><text start="7700.17" dur="5.79">is large enough to satisfy the request</text><text start="7702.38" 
dur="6.339">this approach does result in internal</text><text start="7705.96" dur="4.65">fragmentation and it also tends to</text><text start="7708.719" dur="3.871">create small fragments of free space</text><text start="7710.61" dur="6.21">that accumulate at the start of the free</text><text start="7712.59" dur="6.54">list reducing performance over time the</text><text start="7716.82" dur="4.649">next fit algorithm avoids the external</text><text start="7719.13" dur="4.199">fragment accumulation by starting the</text><text start="7721.469" dur="5.041">search for the next chunk of memory from</text><text start="7723.329" dur="5.46">the most recent allocation in practice</text><text start="7726.51" dur="4.14">only the first fit algorithm is used in</text><text start="7728.789" dur="4.711">the Linux kernel and then only for</text><text start="7730.65" dur="4.77">embedded devices this algorithm is</text><text start="7733.5" dur="4.86">called the SLOB allocator which stands</text><text start="7735.42" dur="4.29">for simple list of blocks for non</text><text start="7738.36" dur="3.39">embedded systems that have the</text><text start="7739.71" dur="4.619">computational power for a more complex</text><text start="7741.75" dur="4.05">algorithm slab allocation is used</text><text start="7744.329" dur="6.931">instead of any of these simple</text><text start="7745.8" dur="7.469">algorithms in this lecture on memory allocation I will</text><text start="7751.26" dur="4.2">introduce the power of two methods with</text><text start="7753.269" dur="3.18">the buddy system and coalescence for</text><text start="7755.46" dur="3.96">allocating memory to</text><text start="7756.449" dur="4.8">processes then I will introduce slab</text><text start="7759.42" dur="3.36">allocation which is used within the</text><text start="7761.249" dur="5.1">kernel to allocate kernel data</text><text start="7762.78" dur="5.219">structures efficiently in the previous</text><text start="7766.349" dur="3.931">lecture I 
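The classical algorithms above can be illustrated with a minimal first fit sketch in Python; the free list contents and request sizes are invented for illustration, and a real allocator would track addresses and split blocks rather than return whole chunks.

```python
# Minimal first fit sketch over a list of free chunk sizes.
# The chunk sizes here are assumed values, not from the lecture.
free_list = [100, 500, 200, 3000, 50]  # free chunk sizes in bytes

def first_fit(request):
    # Return the index of the FIRST chunk large enough for the request,
    # stopping the scan as soon as one is found; None if nothing fits.
    for i, size in enumerate(free_list):
        if size >= request:
            return i
    return None

print(first_fit(150))   # index 1: the 500 byte chunk is the first that fits
print(first_fit(4000))  # None: no chunk can satisfy the request
```

A best fit search over the same list would instead examine every chunk and choose index 2, the 200 byte chunk, since it is the smallest one that can hold 150 bytes; that exhaustive scan is exactly the cost that makes best fit and worst fit impractical in kernels.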
introduced the classic</text><text start="7767.999" dur="6.12">algorithms for memory allocation best</text><text start="7770.28" dur="5.52">fit worst fit first fit and next fit now</text><text start="7774.119" dur="3.69">I want to introduce algorithms that are</text><text start="7775.8" dur="4.009">actually used within OS kernels to</text><text start="7777.809" dur="4.531">perform memory allocations efficiently</text><text start="7779.809" dur="4.451">these algorithms are called power of two</text><text start="7782.34" dur="3.899">methods and they work by maintaining</text><text start="7784.26" dur="4.14">information about allocated and free</text><text start="7786.239" dur="5.67">blocks in a binary tree instead of a</text><text start="7788.4" dur="6.199">list at the top level memory is divided</text><text start="7791.909" dur="5.19">into large blocks called super blocks as</text><text start="7794.599" dur="4.39">processes request memory these super</text><text start="7797.099" dur="3.69">blocks are divided into smaller sub</text><text start="7798.989" dur="4.411">blocks from which the memory is</text><text start="7800.789" dur="4.801">allocated sub blocks can be further</text><text start="7803.4" dur="5.969">divided creating a hierarchy of block</text><text start="7805.59" dur="5.699">sizes algorithms based on this method</text><text start="7809.369" dur="4.62">are relatively fast and scale to</text><text start="7811.289" dur="4.741">multiple parallel CPU cores these</text><text start="7813.989" dur="4.92">algorithms also reduce external</text><text start="7816.03" dur="6.029">fragmentation by coalescing free blocks</text><text start="7818.909" dur="4.98">as I will discuss in a few moments some</text><text start="7822.059" dur="6.091">internal fragmentation does still occur</text><text start="7823.889" dur="6.42">however in this diagram we can see three</text><text start="7828.15" dur="4.949">super blocks two of which are partially</text><text start="7830.309" 
dur="4.89">in use each of the first two super</text><text start="7833.099" dur="4.65">blocks has been divided into three sub</text><text start="7835.199" dur="4.201">blocks and the third sub block of the</text><text start="7837.749" dur="5.22">first super block has been further</text><text start="7839.4" dur="5.429">divided when a process requests memory</text><text start="7842.969" dur="3.721">the kernel must perform a search of the</text><text start="7844.829" dur="4.491">tree to find an appropriately sized</text><text start="7846.69" dur="4.98">block to handle the request in</text><text start="7849.32" dur="4.21">performing the search the kernel might</text><text start="7851.67" dur="3.929">subdivide an existing block into a</text><text start="7853.53" dur="4.83">smaller block to reduce the amount of</text><text start="7855.599" dur="5.52">internal fragmentation since this is a</text><text start="7858.36" dur="4.02">search process a reasonably powerful CPU</text><text start="7861.119" dur="6.511">is required to make this operation</text><text start="7862.38" dur="7.109">efficient another improvement to memory</text><text start="7867.63" dur="4.56">management is to utilize a buddy system</text><text start="7869.489" dur="4.831">in the power of two strategy in this</text><text start="7872.19" dur="4.92">allocation system two limits are chosen</text><text start="7874.32" dur="6.179">to be powers of two the upper limit U</text><text start="7877.11" dur="5.73">and the lower limit L the super blocks</text><text start="7880.499" dur="4.62">are the blocks of size U and these</text><text start="7882.84" dur="6.27">blocks can be subdivided into blocks as</text><text start="7885.119" dur="4.951">small as L bytes there are trade-offs in</text><text start="7889.11" dur="4.77">picking the size</text><text start="7890.07" dur="6.359">to use for L a smaller size produces less</text><text start="7893.88" dur="4.739">internal fragmentation since the block</text><text 
start="7896.429" dur="5.341">size more closely matches the smallest</text><text start="7898.619" dur="5.611">request sizes from the processes however</text><text start="7901.77" dur="4.02">a smaller size for L means that there</text><text start="7904.23" dur="3.989">are more total blocks to be tracked</text><text start="7905.79" dur="5.25">which increases the size of the binary</text><text start="7908.219" dur="6.721">tree using more RAM to store the tree</text><text start="7911.04" dur="6.63">and increasing the search time on the</text><text start="7914.94" dur="4.799">other hand a larger size for L reduces the</text><text start="7917.67" dur="4.1">search time and makes the tree smaller</text><text start="7919.739" dur="6.061">but the amount of internal fragmentation</text><text start="7921.77" dur="6.159">increases in addition to the block size</text><text start="7925.8" dur="5.04">limits the buddy system also uses a</text><text start="7927.929" dur="5.101">technique called coalescence whenever a</text><text start="7930.84" dur="4.049">process frees a block the kernel checks</text><text start="7933.03" dur="5.28">to see if either neighboring block is</text><text start="7934.889" dur="6.121">free also if one or more neighbors or</text><text start="7938.31" dur="5.13">buddy blocks are free the block is</text><text start="7941.01" dur="4.589">coalesced into a larger block reducing</text><text start="7943.44" dur="4.62">external fragmentation the coalescence</text><text start="7945.599" dur="4.321">algorithm is efficient since the maximum</text><text start="7948.06" dur="4.619">number of coalescence operations that</text><text start="7949.92" dur="6.06">must be performed is equal to the base 2</text><text start="7952.679" dur="5.551">logarithm of U divided by L by</text><text start="7955.98" dur="4.469">properties of logarithms this value is</text><text start="7958.23" dur="7.29">equivalent to the base 2 log of U minus</text><text start="7960.449" dur="7.891">the base 
2 log of L thus for example a</text><text start="7965.52" dur="7.139">system with a maximum block size U of</text><text start="7968.34" dur="7.08">4096 bytes or 2 to the 12 and a minimum</text><text start="7972.659" dur="5.971">block size of 512 bytes or 2 to the 9</text><text start="7975.42" dur="6.469">will require at most 3 coalescence</text><text start="7978.63" dur="6.39">operations to recreate the super block</text><text start="7981.889" dur="6.281">returning a single 512 byte block whose</text><text start="7985.02" dur="5.04">neighboring 512 byte buddy is free will</text><text start="7988.17" dur="4.94">cause the two blocks to be coalesced</text><text start="7990.06" dur="5.88">into a single 1024 byte block if</text><text start="7993.11" dur="6.16">the neighboring 1024 byte block</text><text start="7995.94" dur="7.98">is free the two 1024 byte blocks will be</text><text start="7999.27" dur="8.46">coalesced into a 2048 byte block then if</text><text start="8003.92" dur="6.659">a buddy 2048 byte block is free the</text><text start="8007.73" dur="5.25">third coalescence will produce the 4096</text><text start="8010.579" dur="4.381">byte super block the power of two</text><text start="8012.98" dur="3.929">methods are useful for allocating memory</text><text start="8014.96" dur="4.71">to processes where some internal</text><text start="8016.909" dur="4.681">fragmentation is acceptable however</text><text start="8019.67" dur="3.75">within the kernel it is preferable to</text><text start="8021.59" dur="4.76">minimize both internal and external</text><text start="8023.42" dur="5.52">fragmentation to avoid wasting space</text><text start="8026.35" dur="4.389">this conservative approach is needed</text><text start="8028.94" dur="4.38">since the kernel is always mapped into</text><text start="8030.739" dur="4.23">main memory an efficient solution for</text><text start="8033.32" dur="3.87">allocating kernel memory is to use a</text><text start="8034.969" 
dur="4.861">slab allocation algorithm in which</text><text start="8037.19" dur="6.36">kernel memory is arranged into fixed</text><text start="8039.83" dur="6.119">size slabs each slab is divided into</text><text start="8043.55" dur="4.55">regions sized for specific types of kernel</text><text start="8045.949" dur="5.131">objects including file descriptors</text><text start="8048.1" dur="5.4">semaphores process control structures</text><text start="8051.08" dur="4.619">and other internal data structures</text><text start="8053.5" dur="3.989">initial layout of these slabs is</text><text start="8055.699" dur="5.431">performed at compile time</text><text start="8057.489" dur="5.921">at runtime several of each of the</text><text start="8061.13" dur="5.25">different slab layouts are pre allocated</text><text start="8063.41" dur="5.67">into caches whenever the kernel requires</text><text start="8066.38" dur="4.47">a new data structure space for the data</text><text start="8069.08" dur="4.86">structure is simply taken from the slab</text><text start="8070.85" dur="5.34">cache if the slab cache starts to run</text><text start="8073.94" dur="5.029">out of a certain slab layout it</text><text start="8076.19" dur="5.37">automatically provisions extras</text><text start="8078.969" dur="5.411">graphically slabs can be represented in</text><text start="8081.56" dur="4.619">a manner shown here in this example we</text><text start="8084.38" dur="4.589">have two pre-allocated copies of the</text><text start="8086.179" dur="4.651">same slab layout in which each slab can</text><text start="8088.969" dur="4.951">hold a single instance of each of five</text><text start="8090.83" dur="5.01">different kernel objects some wasted</text><text start="8093.92" dur="3.509">memory does occur with this arrangement</text><text start="8095.84" dur="3.75">since there might be a larger number of</text><text start="8097.429" dur="5.011">one type of object than of another type</text><text start="8099.59" 
dur="4.5">of object however this approach is</text><text start="8102.44" dur="5.37">generally more efficient in terms of</text><text start="8104.09" dur="6.12">kernel space utilization slab allocation</text><text start="8107.81" dur="5.34">does require more CPU power than does a</text><text start="8110.21" dur="5.52">classical method such as first fit thus</text><text start="8113.15" dur="7.86">in some embedded environments the SLOB</text><text start="8115.73" dur="7.32">allocator might be preferable if slab</text><text start="8121.01" dur="3.93">allocation is chosen over the SLOB</text><text start="8123.05" dur="4.14">allocator the Linux kernel has two</text><text start="8124.94" dur="4.83">choices of slab allocator the first</text><text start="8127.19" dur="5.31">choice is the original SLAB allocator</text><text start="8129.77" dur="5.67">which was the default allocator until</text><text start="8132.5" dur="4.949">kernel version 2.6.23 this</text><text start="8135.44" dur="5.1">allocator performed well on shared</text><text start="8137.449" dur="5.071">memory systems with few CPU cores but</text><text start="8140.54" dur="3.99">wasted considerable memory space when</text><text start="8142.52" dur="4.62">used on extremely large shared memory</text><text start="8144.53" dur="6.51">systems such as those found in graphics</text><text start="8147.14" dur="6.21">rendering farms to reduce the space</text><text start="8151.04" dur="4.5">waste on large-scale SMP systems</text><text start="8153.35" dur="4.32">Christoph Lameter at Silicon Graphics</text><text start="8155.54" dur="4.8">developed a new allocator</text><text start="8157.67" dur="4.2">called SLUB which reduces the size of</text><text start="8160.34" dur="4.98">data structures needed to track</text><text start="8161.87" dur="5.31">allocated and free objects the initial</text><text start="8165.32" dur="3.78">implementation of the SLUB allocator</text><text start="8167.18" dur="3.72">contained a 
performance bug that</text><text start="8169.1" dur="4.98">affected the results of certain memory</text><text start="8170.9" dur="4.65">benchmarking tools initially Christoph</text><text start="8174.08" dur="3.45">believed the bug was of little</text><text start="8175.55" dur="4.02">importance since the conditions required</text><text start="8177.53" dur="4.86">to trigger it were fairly uncommon in</text><text start="8179.57" dur="4.44">practice however Linus informed</text><text start="8182.39" dur="3.54">Christoph that either the problem would</text><text start="8184.01" dur="5.97">be fixed or SLUB would be dropped</text><text start="8185.93" dur="5.67">entirely from the kernel in the end it</text><text start="8189.98" dur="4.68">was determined that the bug was caused</text><text start="8191.6" dur="4.71">by adding partially used slabs to the beginning</text><text start="8194.66" dur="4.83">of a linked list instead of to the end</text><text start="8196.31" dur="5.58">of that list the fix was a change to one</text><text start="8199.49" dur="4.35">line of code and the SLUB allocator has</text><text start="8201.89" dur="5.42">been the default Linux allocator since</text><text start="8203.84" dur="3.47">2.6.23</text><text start="8210.969" dur="5.13">in this lecture I will introduce paging</text><text start="8213.849" dur="5.161">and related topics including logical</text><text start="8216.099" dur="5.93">addressing address translation and the</text><text start="8219.01" dur="3.019">translation lookaside buffer</text><text start="8223.51" dur="4.42">paging provides a mechanism for sharing</text><text start="8225.95" dur="4.83">memory among multiple user space</text><text start="8227.93" dur="4.83">processes at the same time this</text><text start="8230.78" dur="4.17">mechanism improves upon simpler</text><text start="8232.76" dur="4.56">algorithms such as static partitioning</text><text start="8234.95" dur="4.59">and direct power of two methods by</text><text 
start="8237.32" dur="5.43">allocating fixed sized pages of memory</text><text start="8239.54" dur="5.07">to processes the key to effective memory</text><text start="8242.75" dur="4.08">utilization with paging is that each</text><text start="8244.61" dur="5.25">process is given its own logical</text><text start="8246.83" dur="5.1">memory space in other words each process</text><text start="8249.86" dur="5.4">has its own view of memory with its own</text><text start="8251.93" dur="5.49">address space the addresses that the</text><text start="8255.26" dur="5.07">process sees are called logical</text><text start="8257.42" dur="5.79">addresses these logical addresses are</text><text start="8260.33" dur="5.01">divided into fixed sized pages each</text><text start="8263.21" dur="4.74">process in the system receives its own</text><text start="8265.34" dur="6.39">private set of pages with private memory</text><text start="8267.95" dur="6.48">addresses when a process accesses memory</text><text start="8271.73" dur="5.19">using one of its logical addresses the</text><text start="8274.43" dur="5.1">CPU translates the logical address into</text><text start="8276.92" dur="6.62">a physical address physical addresses</text><text start="8279.53" dur="4.01">refer to locations in system memory</text><text start="8284.62" dur="5.01">for performance reasons translation is</text><text start="8287.5" dur="5.429">done in terms of memory frames or fixed</text><text start="8289.63" dur="5.82">size regions of RAM the base frame size</text><text start="8292.929" dur="4.531">is normally four kibibytes although</text><text start="8295.45" dur="3.479">this can vary by hardware device and</text><text start="8297.46" dur="4.86">most hardware can support multiple</text><text start="8298.929" dur="5.491">different frame sizes operating systems</text><text start="8302.32" dur="3.81">normally use logical page sizes that</text><text start="8304.42" dur="5.009">correspond to supported 
hardware frame</text><text start="8306.13" dur="7.229">sizes again four kibibyte pages are a</text><text start="8309.429" dur="6.571">typical base size on x86 and x86 64</text><text start="8313.359" dur="5.16">systems the Linux kernel can support</text><text start="8316" dur="4.679">so-called huge pages which can be as</text><text start="8318.519" dur="7.38">large as one gigabyte when using the</text><text start="8320.679" dur="6.781">newest AMD and Intel CPUs the key</text><text start="8325.899" dur="3.151">advantage to paging is that it</text><text start="8327.46" dur="4.17">eliminates the issue of external</text><text start="8329.05" dur="5.129">fragmentation since the CPU is</text><text start="8331.63" dur="4.469">translating logical page based addresses</text><text start="8334.179" dur="4.531">into physical frame based addresses</text><text start="8336.099" dur="5.79">anyway there is no need for the physical</text><text start="8338.71" dur="5.099">frames to be contiguous as a result we</text><text start="8341.889" dur="3.84">can store a data structure in process</text><text start="8343.809" dur="4.681">memory using pages that are logically</text><text start="8345.729" dur="5.16">contiguous however when these logical</text><text start="8348.49" dur="4.76">pages are mapped to physical frames the</text><text start="8350.889" dur="4.5">frames may be scattered throughout RAM</text><text start="8353.25" dur="3.609">notice that the distinction between a</text><text start="8355.389" dur="4.59">page and a frame is a matter of</text><text start="8356.859" dur="5.49">terminology a page refers to a block of</text><text start="8359.979" dur="5.311">logical memory while a frame refers to a</text><text start="8362.349" dur="3.781">block of physical memory for now pretend</text><text start="8365.29" dur="3.059">that there is a one-to-one</text><text start="8366.13" dur="4.32">correspondence between logical pages and</text><text start="8368.349" dur="4.931">physical frames 
we will make things more</text><text start="8370.45" dur="5.29">complicated later</text><text start="8373.28" dur="4.319">the key to making page translation</text><text start="8375.74" dur="3.75">efficient is that the CPU contains</text><text start="8377.599" dur="4.92">special hardware called the memory</text><text start="8379.49" dur="5.67">management unit or MMU which performs</text><text start="8382.519" dur="4.681">the translation operations in this</text><text start="8385.16" dur="4.11">diagram the process accesses memory</text><text start="8387.2" dur="4.77">using logical addresses which are</text><text start="8389.27" dur="4.98">divided into pages when requests are</text><text start="8391.97" dur="4.65">made using these addresses the memory</text><text start="8394.25" dur="4.32">management unit on the CPU translates</text><text start="8396.62" dur="5.07">the logical address into a corresponding</text><text start="8398.57" dur="4.8">physical address the resulting physical</text><text start="8401.69" dur="4.68">address will point to some location in a</text><text start="8403.37" dur="5.13">physical memory frame note that</text><text start="8406.37" dur="4.71">individual memory addresses within pages</text><text start="8408.5" dur="6.48">or frames still remain contiguous which</text><text start="8411.08" dur="6.24">is important because the MMU translates</text><text start="8414.98" dur="4.41">page numbers to frame numbers leaving</text><text start="8417.32" dur="7.23">the offset to the memory location within</text><text start="8419.39" dur="7.26">the page unchanged as shown in this</text><text start="8424.55" dur="4.35">diagram we can divide a logical address</text><text start="8426.65" dur="7.74">from a process into two components the</text><text start="8428.9" dur="7.92">page number P and the offset D when the</text><text start="8434.39" dur="4.29">MMU is asked to perform a translation it</text><text start="8436.82" dur="4.26">consults a data structure called a 
page</text><text start="8438.68" dur="5.88">table which provides a mapping between</text><text start="8441.08" dur="5.76">page numbers and frame numbers using this</text><text start="8444.56" dur="4.05">information the MMU constructs the</text><text start="8446.84" dur="4.02">physical address by using the</text><text start="8448.61" dur="4.32">corresponding frame number represented</text><text start="8450.86" dur="4.89">here by the letter F in place of the</text><text start="8452.93" dur="4.89">page number once again the offset to the</text><text start="8455.75" dur="5.58">particular byte within the page or frame</text><text start="8457.82" dur="5.49">is left unchanged a particular byte in</text><text start="8461.33" dur="4.04">RAM is actually addressed using the</text><text start="8463.31" dur="5.1">frame number and offset into the frame</text><text start="8465.37" dur="6.34">however to the process this memory</text><text start="8468.41" dur="5.22">access appears to occur using a logical</text><text start="8471.71" dur="3.98">memory address which is conceptually</text><text start="8473.63" dur="4.83">divided into a page number and offset</text><text start="8475.69" dur="5.26">the offset component is not changed by</text><text start="8478.46" dur="4.41">the MMU but the page number is replaced by</text><text start="8480.95" dur="4.529">the physical frame number</text><text start="8482.87" dur="4.529">in order to perform the translation from</text><text start="8485.479" dur="4.441">page numbers to frame numbers the MMU</text><text start="8487.399" dur="4.621">must consult the page table the page</text><text start="8489.92" dur="5.15">table is a data structure that is itself</text><text start="8492.02" dur="5.76">stored in RAM in the kernel memory space</text><text start="8495.07" dur="4.75">storing the page table in RAM leads to a</text><text start="8497.78" dur="3.98">major problem since every MMU</text><text start="8499.82" dur="4.289">translation would require a 
look up</text><text start="8501.76" dur="4.87">since the lookup requires a memory</text><text start="8504.109" dur="4.29">access each process memory request would</text><text start="8506.63" dur="5.19">actually require two physical memory</text><text start="8508.399" dur="5.281">accesses this situation is especially</text><text start="8511.82" dur="5.13">troublesome because a memory access</text><text start="8513.68" dur="4.65">occurs both upon accessing data and upon</text><text start="8516.95" dur="3.87">reading the next instruction to be</text><text start="8518.33" dur="4.74">executed without some additional</text><text start="8520.82" dur="3.869">hardware to resolve this problem system</text><text start="8523.07" dur="4.619">memory performance would be effectively</text><text start="8524.689" dur="5.75">cut in two greatly reducing the overall</text><text start="8527.689" dur="2.75">performance of the system</text><text start="8530.59" dur="3.809">the solution for eliminating the double</text><text start="8532.84" dur="3.33">memory access issue is to add a</text><text start="8534.399" dur="3.991">component to the CPU called the</text><text start="8536.17" dur="5.42">translation lookaside buffer or TLB</text><text start="8538.39" dur="4.92">which stores some page to frame mappings</text><text start="8541.59" dur="3.43">some TLBs</text><text start="8543.31" dur="4.11">also provide room for address space</text><text start="8545.02" dur="5.25">identifiers which aid in implementing</text><text start="8547.42" dur="4.739">memory protection the TLB is a piece of</text><text start="8550.27" dur="4.08">associative memory meaning that it can</text><text start="8552.159" dur="4.71">perform rapid parallel searches</text><text start="8554.35" dur="4.969">resulting in constant time lookup for</text><text start="8556.869" dur="5.221">page translation this memory is</text><text start="8559.319" dur="6.34">exceptionally fast meaning that it is</text><text start="8562.09" 
dur="6.109">also quite expensive as a result TLB</text><text start="8565.659" dur="5.96">sizes are typically limited from 8 to</text><text start="8568.199" dur="3.42">4096 entries</text><text start="8573.261" dur="3.96">the addition of the TLB provides a</text><text start="8575.48" dur="4.111">potential shortcut for performing</text><text start="8577.221" dur="4.349">address translation instead of</text><text start="8579.591" dur="5.159">immediately searching the page table the</text><text start="8581.57" dur="5.191">MMU first searches the TLB if the page</text><text start="8584.75" dur="3.69">to frame mapping can be found in the TLB</text><text start="8586.761" dur="4.17">then it is used to perform the</text><text start="8588.44" dur="4.89">translation from page number P to frame</text><text start="8590.931" dur="6.78">number F this situation is called a</text><text start="8593.33" dur="6.75">TLB hit a TLB miss occurs whenever the</text><text start="8597.711" dur="5.01">page number is not present in the TLB in</text><text start="8600.08" dur="4.62">this case the MMU must search the page</text><text start="8602.721" dur="4.559">table to locate the appropriate frame</text><text start="8604.7" dur="5.25">number the CPU and operating system</text><text start="8607.28" dur="4.83">employ various policies to determine</text><text start="8609.95" dur="5.221">when to store a page to frame mapping in</text><text start="8612.11" dur="5.37">the TLB a simple policy would be to use</text><text start="8615.171" dur="4.319">a first-in first-out policy that</text><text start="8617.48" dur="4.371">replaces the earliest entry in the TLB</text><text start="8619.49" dur="5.34">with the newest entry upon a TLB miss</text><text start="8621.851" dur="7.96">other more complex and potentially</text><text start="8624.83" dur="6.841">better policies also exist page</text><text start="8629.811" dur="3.75">tables are data structures that store</text><text start="8631.671" 
dur="4.079">mappings between logical pages in</text><text start="8633.561" dur="4.799">process memory and physical frames in</text><text start="8635.75" dur="4.051">RAM these structures are used and</text><text start="8638.36" dur="4.231">managed in different ways on different</text><text start="8639.801" dur="5.609">systems often with assistance from the</text><text start="8642.591" dur="4.92">hardware at the end of this lecture I</text><text start="8645.41" dur="4.051">will discuss extended page tables which</text><text start="8647.511" dur="4.05">are useful for allowing hardware to</text><text start="8649.461" dur="4.88">support multiple simultaneous operating</text><text start="8651.561" dur="2.78">systems at once</text><text start="8655.38" dur="3.989">recall from the previous lecture that</text><text start="8657.689" dur="4.831">the page table stores mappings between</text><text start="8659.369" dur="5.641">page numbers and frame numbers whenever a</text><text start="8662.52" dur="4.94">TLB miss occurs the page table must be</text><text start="8665.01" dur="4.8">searched to find the appropriate mapping</text><text start="8667.46" dur="4.09">CPUs have differing levels of support</text><text start="8669.81" dur="4.32">for managing and searching the page</text><text start="8671.55" dur="5.97">tables automatically on most modern</text><text start="8674.13" dur="5.849">systems including x86-64 and ARM CPUs</text><text start="8677.52" dur="5.389">the page tables are managed and searched</text><text start="8679.979" dur="5.731">by the CPU automatically upon TLB miss</text><text start="8682.909" dur="5.051">this search increases the memory access</text><text start="8685.71" dur="5.13">time but no fault or interrupt is</text><text start="8687.96" dur="6.12">generated as a result the CPU does not</text><text start="8690.84" dur="5.819">have to perform a context switch among a</text><text start="8694.08" dur="4.59">few other CPUs the MIPS architecture</text><text 
start="8696.659" dur="4.981">requires the operating system to manage</text><text start="8698.67" dur="5.25">and search the page table whenever a TLB</text><text start="8701.64" dur="4.83">miss occurs the CPU triggers a fault</text><text start="8703.92" dur="4.439">which is a type of interrupt the CPU</text><text start="8706.47" dur="3.149">must make a context switch away from</text><text start="8708.359" dur="3.3">whatever task is currently being</text><text start="8709.619" dur="4.86">executed in order to execute the</text><text start="8711.659" dur="4.381">interrupt handler for the fault software</text><text start="8714.479" dur="3.781">managed page tables are becoming</text><text start="8716.04" dur="5.1">increasingly uncommon even on embedded</text><text start="8718.26" dur="5.67">systems as the popular ARM CPU supports</text><text start="8721.14" dur="4.74">hardware management the MIPS CPU is</text><text start="8723.93" dur="4.41">typically used in lower end consumer</text><text start="8725.88" dur="6.14">devices such as inexpensive e-readers</text><text start="8728.34" dur="3.68">and the least expensive tablets</text><text start="8733.431" dur="4.769">one approach to reducing the page table</text><text start="8736.101" dur="4.109">search time whenever a TLB miss occurs</text><text start="8738.2" dur="4.83">is to store the page table as a tree</text><text start="8740.21" dur="4.801">instead of a list this technique of</text><text start="8743.03" dur="5.34">hierarchical page tables divides the</text><text start="8745.011" dur="5.279">page tables into pages each logical</text><text start="8748.37" dur="4.261">address in process memory is divided</text><text start="8750.29" dur="4.231">into an outer page table entry a set of</text><text start="8752.631" dur="4.739">offsets into various levels of sub</text><text start="8754.521" dur="5.219">tables and a final offset to a specific</text><text start="8757.37" dur="3.931">byte of memory to be accessed this</text><text 
start="8759.74" dur="3.781">technique can be generalized to any</text><text start="8761.301" dur="3.96">number of levels in the hierarchy but I</text><text start="8763.521" dur="5.309">will present here a simple system that</text><text start="8765.261" dur="6.21">uses only two levels as illustrated in</text><text start="8768.83" dur="4.62">this diagram the data structure is</text><text start="8771.471" dur="4.109">arranged so that an outer page table</text><text start="8773.45" dur="5.22">provides a mapping between outer page</text><text start="8775.58" dur="5.88">numbers and page table pages once the</text><text start="8778.67" dur="4.711">proper page table page is located the</text><text start="8781.46" dur="4.051">translation from page number to frame</text><text start="8783.381" dur="4.26">number can be completed quickly since</text><text start="8785.511" dur="5.059">the page of the inner page table is</text><text start="8787.641" dur="2.929">relatively small</text><text start="8791.801" dur="4.62">the address translation mechanism used</text><text start="8794.501" dur="3.84">with hierarchical page tables is more</text><text start="8796.421" dur="4.949">complex than that used with a simple</text><text start="8798.341" dur="5.059">linear page table the logical address is</text><text start="8801.37" dur="5.16">divided into additional components in</text><text start="8803.4" dur="5.5">this example with two levels in the page</text><text start="8806.53" dur="5.21">table the logical address is divided</text><text start="8808.9" dur="5.58">into an outer page table entry number T</text><text start="8811.74" dur="4.781">which specifies the location in the</text><text start="8814.48" dur="4.71">outer page table in which to find the</text><text start="8816.521" dur="6">proper inner page table the next</text><text start="8819.19" dur="5.401">component of the address P is the offset</text><text start="8822.521" dur="4.889">into the inner page table at which the</text><text 
start="8824.591" dur="5.04">mapping can be found in this example we</text><text start="8827.41" dur="4.681">have a single page to frame mapping in</text><text start="8829.631" dur="4.439">each inner page table entry so we can</text><text start="8832.091" dur="4.769">obtain the frame number from that entry</text><text start="8834.07" dur="4.83">a real system will be more complex and</text><text start="8836.86" dur="4.79">likely will require a short linear</text><text start="8838.9" dur="5.731">search at some level in the page table</text><text start="8841.65" dur="5.021">once the frame number is determined RAM</text><text start="8844.631" dur="4.38">is accessed in exactly the same way as</text><text start="8846.671" dur="4.199">it is in simpler designs with the final</text><text start="8849.011" dur="8.37">address joining a frame number and</text><text start="8850.87" dur="8.79">offset into the physical address storing</text><text start="8857.381" dur="5.04">the page table in a tree improves access</text><text start="8859.66" dur="4.95">performance however as the total amount</text><text start="8862.421" dur="4.17">of system RAM continues to increase with</text><text start="8864.61" dur="4.08">newer and newer generations of computers</text><text start="8866.591" dur="3.199">the size of the page table also</text><text start="8868.69" dur="4.231">increases</text><text start="8869.79" dur="5.53">moreover the address spaces on 64-bit</text><text start="8872.921" dur="4.17">architectures are much larger than the</text><text start="8875.32" dur="5.01">amount of memory that the MMU actually</text><text start="8877.091" dur="5.819">supports current 64-bit systems have</text><text start="8880.33" dur="6">true hardware address sizes in the range</text><text start="8882.91" dur="5.401">of 34 to 48 bits if we were to store a</text><text start="8886.33" dur="4.08">mapping to handle every logical page</text><text start="8888.311" dur="4.11">number in such a system the 
mapping</text><text start="8890.41" dur="4.83">would be large and inefficient since the</text><text start="8892.421" dur="5.909">address space is sparse that is not</text><text start="8895.24" dur="5.37">every 64-bit logical address can map to</text><text start="8898.33" dur="4.5">a physical location in RAM since the</text><text start="8900.61" dur="5.911">physical addresses are at most 48 bits</text><text start="8902.83" dur="7.381">as a result many of the possible 64-bit</text><text start="8906.521" dur="6.03">addresses are unused a solution to this</text><text start="8910.211" dur="4.56">problem which both reduces page table</text><text start="8912.551" dur="4.8">storage size and increases search speed</text><text start="8914.771" dur="5.099">is to use a hash table or dictionary</text><text start="8917.351" dur="5.46">structure to store the outer page table</text><text start="8919.87" dur="5.881">since several addresses may hash to</text><text start="8922.811" dur="5.94">the same value each entry in the hash</text><text start="8925.751" dur="5.369">table is an inner linear page table</text><text start="8928.751" dur="7.47">which allows the hash collisions to be</text><text start="8931.12" dur="6.721">resolved through chaining translating a</text><text start="8936.221" dur="3.87">logical address to a physical address</text><text start="8937.841" dur="5.309">with a hashed page table begins by</text><text start="8940.091" dur="5.25">hashing the page number P hashing is</text><text start="8943.15" dur="4.141">accomplished using a hash function which</text><text start="8945.341" dur="4.8">may be implemented in hardware for high</text><text start="8947.291" dur="4.529">performance the hash value returned by</text><text start="8950.141" dur="4.229">the function gives the location in the</text><text start="8951.82" dur="5.101">hash table where the inner page table</text><text start="8954.37" dur="4.801">may be found a linear search of the</text><text start="8956.921" 
dur="5.22">inner page table is performed to locate</text><text start="8959.171" dur="5.97">the frame number once the frame number F</text><text start="8962.141" dur="7.889">is obtained it is joined with the offset</text><text start="8965.141" dur="7.41">D to give the hardware address some</text><text start="8970.03" dur="5.071">architectures notably PowerPC and Intel</text><text start="8972.551" dur="5.399">Itanium store their page tables</text><text start="8975.101" dur="5.07">backwards that is the system stores a</text><text start="8977.95" dur="4.74">data structure with one entry per frame</text><text start="8980.171" dur="5.1">and the entry stores the corresponding</text><text start="8982.69" dur="5.01">page number along with the process ID of</text><text start="8985.271" dur="4.53">the process owning the page this</text><text start="8987.7" dur="4.38">approach called an inverted page table</text><text start="8989.801" dur="5.609">is efficient in terms of page table</text><text start="8992.08" dur="5.25">storage size however inverted page</text><text start="8995.41" dur="4.021">tables are inefficient in terms of</text><text start="8997.33" dur="5.721">performance and these structures are not</text><text start="8999.431" dur="3.62">used on a majority of systems</text><text start="9003.68" dur="5.429">on newer x86-64 systems with</text><text start="9006.739" dur="4.771">virtualization extensions hardware</text><text start="9009.109" dur="6.151">support exists for extended page tables</text><text start="9011.51" dur="5.3">or EPT AMD and Intel each brand this</text><text start="9015.26" dur="4.38">technique with a different name</text><text start="9016.81" dur="6.61">AMD uses the term rapid virtualization</text><text start="9019.64" dur="5.58">indexing or RVI on newer CPUs they</text><text start="9023.42" dur="5.67">used to call this technique nested page</text><text start="9025.22" dur="7.53">tables or NPT Intel uses the extended</text><text start="9029.09" 
dur="6.21">page tables terminology EPT adds a level</text><text start="9032.75" dur="4.979">of paging to the system at the outer</text><text start="9035.3" dur="5.46">level each virtual machine or guest</text><text start="9037.729" dur="5.46">running on the CPU sees its own set of</text><text start="9040.76" dur="5.67">memory frames isolated from and</text><text start="9043.189" dur="5.701">independent of the actual hardware the</text><text start="9046.43" dur="4.559">CPU translates page numbers to frame</text><text start="9048.89" dur="5.13">numbers first by translating the page</text><text start="9050.989" dur="5.011">number to a guest frame number the guest</text><text start="9054.02" dur="4.08">frame number is then translated to a</text><text start="9056" dur="5.91">host frame number which is the physical</text><text start="9058.1" dur="6">frame number EPT technology is important</text><text start="9061.91" dur="4.079">for virtual machine performance since it</text><text start="9064.1" dur="5.099">allows each guest to manage its own</text><text start="9065.989" dur="5.011">memory efficiently moreover guests can</text><text start="9069.199" dur="4.141">access memory without having to switch</text><text start="9071" dur="7.8">the CPU into hypervisor mode which can</text><text start="9073.34" dur="7.32">be an expensive operation the downside</text><text start="9078.8" dur="3.75">of logical address translation with</text><text start="9080.66" dur="3.27">extended page tables is that it becomes</text><text start="9082.55" dur="3.149">conceptually more difficult to</text><text start="9083.93" dur="5.04">understand as illustrated by the</text><text start="9085.699" dur="4.921">complexity of this diagram a process</text><text start="9088.97" dur="3.84">running in a guest operating system</text><text start="9090.62" dur="4.92">makes a memory request just as it would</text><text start="9092.81" dur="4.62">if no virtualization were present this</text><text start="9095.54" 
dur="6.449">memory request is divided into a page</text><text start="9097.43" dur="6.63">number and an offset as usual the CPU</text><text start="9101.989" dur="5.79">then performs translation of this page</text><text start="9104.06" dur="6.839">number P to a frame number F also as</text><text start="9107.779" dur="4.981">usual furthermore as far as the guest</text><text start="9110.899" dur="4.5">operating system is concerned the</text><text start="9112.76" dur="4.559">translation is complete the guest OS</text><text start="9115.399" dur="4.471">sees the memory region provided by the</text><text start="9117.319" dur="4.951">host system as if that memory were</text><text start="9119.87" dur="4.529">physical memory in other words the guest</text><text start="9122.27" dur="4.74">OS has no idea that it is running in a</text><text start="9124.399" dur="4.17">virtual machine to the guest OS the</text><text start="9127.01" dur="6.389">virtual machine looks just like a</text><text start="9128.569" dur="6.991">physical system however in reality the</text><text start="9133.399" dur="4.021">guest's physical memory is actually an</text><text start="9135.56" dur="4.56">illusion provided by the host</text><text start="9137.42" dur="5.55">to make the solution work the host must</text><text start="9140.12" dur="5.31">translate the frame number F in guest</text><text start="9142.97" dur="5.73">memory to an actual physical frame</text><text start="9145.43" dur="5.7">number G this translation is performed</text><text start="9148.7" dur="6.36">by the CPU without any context or mode</text><text start="9151.13" dur="6.12">switches using the EPT table once this</text><text start="9155.06" dur="4.56">translation is performed the physical</text><text start="9157.25" dur="5.3">memory address is generated by combining</text><text start="9159.62" dur="6.42">G and D where D is the original</text><text start="9162.55" dur="4.99">unchanged offset into the page it is</text><text start="9166.04" 
dur="4.38">important to note that this entire</text><text start="9167.54" dur="5.22">process is performed by the CPU without</text><text start="9170.42" dur="6.57">switching to the host OS or hypervisor</text><text start="9172.76" dur="6.69">in this lecture I will discuss memory</text><text start="9176.99" dur="6.83">protection including segmentation and</text><text start="9179.45" dur="4.37">permission bits on page table entries</text><text start="9186.44" dur="4.68">remember that operating systems perform</text><text start="9188.84" dur="6.181">two functions abstraction and</text><text start="9191.12" dur="5.82">arbitration mechanisms for accessing</text><text start="9195.021" dur="4.679">memory provide abstractions of the</text><text start="9196.94" dur="5.34">underlying memory hardware however</text><text start="9199.7" dur="5.52">operating systems must also arbitrate</text><text start="9202.28" dur="5.071">access to RAM by ensuring that one process</text><text start="9205.22" dur="5.4">cannot access memory that does not</text><text start="9207.351" dur="5.339">belong to it without this arbitration a</text><text start="9210.62" dur="5.43">process could change memory belonging to</text><text start="9212.69" dur="5.161">another process or worse it could crash</text><text start="9216.05" dur="5.04">the system by changing memory that</text><text start="9217.851" dur="5.579">belongs to the kernel on systems that</text><text start="9221.09" dur="5.011">utilize simple memory management such as</text><text start="9223.43" dur="4.591">power-of-two methods memory access</text><text start="9226.101" dur="4.799">protections are provided by a mechanism</text><text start="9228.021" dur="5.009">called segmentation on the majority of</text><text start="9230.9" dur="4.5">modern systems which employ paging</text><text start="9233.03" dur="4.83">memory protection is implemented as part</text><text start="9235.4" dur="6.29">of the paging system and memory access</text><text 
start="9237.86" dur="3.83">permissions are stored in the page table</text><text start="9242.84" dur="5.13">on any system process memory is divided</text><text start="9245.84" dur="4.5">into logical pieces or segments at</text><text start="9247.97" dur="4.44">compile time these segments include the</text><text start="9250.34" dur="4.47">text segment a region for global</text><text start="9252.41" dur="4.8">variables a stack region for automatic</text><text start="9254.81" dur="5.129">variables and a heap for dynamically</text><text start="9257.21" dur="4.91">allocated data structures access</text><text start="9259.939" dur="4.951">permissions apply to each segment in</text><text start="9262.12" dur="6.79">particular the text segment is set to be</text><text start="9264.89" dur="5.73">read only outside a single process there</text><text start="9268.91" dur="3.42">must be a mechanism to track which</text><text start="9270.62" dur="4.95">segments of memory belong to which</text><text start="9272.33" dur="5.76">processes when a process is executing</text><text start="9275.57" dur="4.289">its segments are marked valid so that it</text><text start="9278.09" dur="4.68">can access the corresponding memory</text><text start="9279.859" dur="5.281">locations segments of memory belonging</text><text start="9282.77" dur="4.83">to other processes are marked invalid</text><text start="9285.14" dur="5.13">and any attempt to access those segments</text><text start="9287.6" dur="4.19">results in a fault or interrupt called a</text><text start="9290.27" dur="4.29">segmentation fault</text><text start="9291.79" dur="6.1">typically a segmentation fault causes</text><text start="9294.56" dur="5.25">the process to be terminated segment</text><text start="9297.89" dur="4.68">memory permissions are implemented on</text><text start="9299.81" dur="5.22">non-paging systems using a segment table</text><text start="9302.57" dur="4.26">the segment table has permission bits</text><text 
start="9305.03" dur="4.409">that can be applied to each region of</text><text start="9306.83" dur="5.37">memory when memory is accessed using</text><text start="9309.439" dur="4.261">segmentation the segment table must be</text><text start="9312.2" dur="4.73">consulted to determine whether or not</text><text start="9313.7" dur="3.23">the access is legal</text><text start="9317.5" dur="4.82">in this example a process requests</text><text start="9320.17" dur="4.53">access to memory using a logical address</text><text start="9322.32" dur="4.33">since we do not have paging on this</text><text start="9324.7" dur="4.38">system this logical address is not</text><text start="9326.65" dur="5.19">translated by a page table mechanism</text><text start="9329.08" dur="4.68">however this logical address is divided</text><text start="9331.84" dur="4.47">into a segment address and an offset</text><text start="9333.76" dur="6.15">into the segment in a manner similar to</text><text start="9336.31" dur="5.01">page translation the MMU checks the</text><text start="9339.91" dur="3.45">segment table to determine if a</text><text start="9341.32" dur="4.65">particular memory access is valid in</text><text start="9343.36" dur="4.89">this example the segment table stores up</text><text start="9345.97" dur="5.88">to three permission bits</text><text start="9348.25" dur="7.2">a valid invalid bit a read/write bit and</text><text start="9351.85" dur="5.73">an execute bit in practice most systems</text><text start="9355.45" dur="4.53">that support segmentation without paging</text><text start="9357.58" dur="5.73">normally only use two bits valid/</text><text start="9359.98" dur="5.22">invalid and read/write if the process</text><text start="9363.31" dur="4.05">tries to read from a segment that is</text><text start="9365.2" dur="4.65">marked valid the memory access is</text><text start="9367.36" dur="4.62">permitted and occurs normally the same</text><text start="9369.85" 
dur="3.84">thing happens if a process tries to</text><text start="9371.98" dur="5.64">write to a memory location that is</text><text start="9373.69" dur="6.15">marked both valid and writable however</text><text start="9377.62" dur="4.95">if a process tries to write to a segment</text><text start="9379.84" dur="5.91">marked read-only or if a process tries</text><text start="9382.57" dur="5.22">to access an invalid segment the CPU</text><text start="9385.75" dur="4.95">triggers a segmentation fault and the</text><text start="9387.79" dur="5.7">process is terminated for some invalid</text><text start="9390.7" dur="4.98">accesses on a Unix-like system this</text><text start="9393.49" dur="4.73">segmentation fault may be reported as a</text><text start="9395.68" dur="2.54">bus error</text><text start="9398.37" dur="4.859">with paging systems which comprise the</text><text start="9401.13" dur="4.59">majority of modern systems including</text><text start="9403.229" dur="4.231">mobile devices memory protection is</text><text start="9405.72" dur="4.889">accomplished by adding permission bits</text><text start="9407.46" dur="5.189">to the page table entries in general</text><text start="9410.609" dur="5.161">page table entries will have a valid</text><text start="9412.649" dur="5.25">invalid bit and a read/write bit the</text><text start="9415.77" dur="4.89">valid invalid bit is used in the same</text><text start="9417.899" dur="4.83">way as it is for segmentation pages that</text><text start="9420.66" dur="5.43">a process is allowed to access are marked</text><text start="9422.729" dur="6.871">valid other pages and any non-existent</text><text start="9426.09" dur="6.57">pages are marked invalid if a process</text><text start="9429.6" dur="5.46">attempts to access an invalid page a CPU</text><text start="9432.66" dur="4.31">fault is raised which functions like an</text><text start="9435.06" dur="5.43">interrupt to trap into the kernel a</text><text start="9436.97" 
dur="6.04">Linux kernel will send a SIGSEGV or</text><text start="9440.49" dur="4.59">SIGBUS signal to the process depending on</text><text start="9443.01" dur="4.95">the location in memory the process tried to</text><text start="9445.08" dur="4.71">access in practice the signal is</text><text start="9447.96" dur="5.1">normally not caught and the process</text><text start="9449.79" dur="5.28">terminates for historical reasons this</text><text start="9453.06" dur="5.48">event is called a segmentation fault or</text><text start="9455.07" dur="3.47">seg fault for short</text><text start="9459.18" dur="5.04">the read/write bit used to mark the text</text><text start="9462.119" dur="3.691">segment of a process can be used to</text><text start="9464.22" dur="4.38">allow pages of memory to be shared</text><text start="9465.81" dur="5.729">between processes pages that are</text><text start="9468.6" dur="4.83">re-entrant or read-only can be accessed</text><text start="9471.539" dur="5.011">by multiple instances of multiple</text><text start="9473.43" dur="5.309">programs simultaneously this capability</text><text start="9476.55" dur="3.96">is useful on modern systems since</text><text start="9478.739" dur="4.47">multiple instances of programs are</text><text start="9480.51" dur="5.13">typically run at the same time in the</text><text start="9483.209" dur="4.351">case of a web browser for example it is</text><text start="9485.64" dur="4.979">only necessary to load one copy of the</text><text start="9487.56" dur="4.74">browser program code into memory several</text><text start="9490.619" dur="4.11">copies of the browser can be run as</text><text start="9492.3" dur="5.97">several different processes sharing the</text><text start="9494.729" dur="5.701">program code and thus saving memory the</text><text start="9498.27" dur="4.979">open source Chromium browser and its</text><text start="9500.43" dur="5.58">Google Chrome derivative allow each tab</text><text start="9503.249" 
dur="4.681">to run in a separate process shared</text><text start="9506.01" dur="4.649">memory pages allow the code for the</text><text start="9507.93" dur="4.08">browser any extensions and any plugins</text><text start="9510.659" dur="4.46">to be loaded only once</text><text start="9512.01" dur="3.109">saving memory</text><text start="9515.88" dur="4.109">this diagram illustrates how two</text><text start="9517.89" dur="5.61">processes can share a single page in</text><text start="9519.989" dur="5.88">RAM each process sees a handful of valid</text><text start="9523.5" dur="5.58">frames one of which is marked read-only</text><text start="9525.869" dur="4.981">if this memory frame contains code or</text><text start="9529.08" dur="4.289">other information that can be shared</text><text start="9530.85" dur="4.23">between the processes then the two frame</text><text start="9533.369" dur="4.86">numbers will be identical within the</text><text start="9535.08" dur="4.92">separate processes each process may use</text><text start="9538.229" dur="4.411">a different page number to represent</text><text start="9540" dur="5.1">this memory location however since each</text><text start="9542.64" dur="7.95">process has its own independent logical</text><text start="9545.1" dur="8.219">view of memory incidentally this diagram</text><text start="9550.59" dur="4.62">is a conceptual diagram only it does not</text><text start="9553.319" dur="3.471">directly map to any particular data</text><text start="9555.21" dur="4.29">structure in the operating system</text><text start="9556.79" dur="6.869">instead the two tables illustrate how</text><text start="9559.5" dur="4.159">each process might see the page</text><text start="9563.95" dur="4.77">newer AMD and Intel CPUs support an</text><text start="9567.04" dur="4.32">additional permission bit for setting</text><text start="9568.72" dur="5.7">execute permissions this bit called the</text><text start="9571.36" dur="5.4">no execute or NX bit 
is actually an</text><text start="9574.42" dur="4.35">inverted permission it is set to one</text><text start="9576.76" dur="5.31">whenever execution of data found on a</text><text start="9578.77" dur="5.73">memory page is forbidden originally the</text><text start="9582.07" dur="5.19">NX bit was implemented by AMD on its</text><text start="9584.5" dur="4.41">64-bit capable processors using the</text><text start="9587.26" dur="5.04">marketing name of enhanced virus</text><text start="9588.91" dur="5.76">protection Intel followed suit and added</text><text start="9592.3" dur="5.67">this mechanism as the execute disabled</text><text start="9594.67" dur="4.98">or XD bit the concept behind the bit was</text><text start="9597.97" dur="3.57">to provide a mechanism that could be</text><text start="9599.65" dur="4.079">used to prevent execution of native</text><text start="9601.54" dur="5.04">machine instructions from memory space</text><text start="9603.729" dur="4.981">used for regular data although the</text><text start="9606.58" dur="4.17">primary beneficiary of this feature was</text><text start="9608.71" dur="4.53">a certain virus prone system that is not</text><text start="9610.75" dur="4.59">even Unix the Linux kernel does</text><text start="9613.24" dur="5.42">support the NX bit as a guard against</text><text start="9615.34" dur="5.52">buffer overflow and similar exploits in</text><text start="9618.66" dur="4.51">the example presented in the</text><text start="9620.86" dur="6.09">hypothetical page table here only the</text><text start="9623.17" dur="6.66">page with hex number 0x04A4 allows</text><text start="9626.95" dur="5.16">code execution in the event of an</text><text start="9629.83" dur="4.109">exploit attempt a malicious application</text><text start="9632.11" dur="6.06">could try to load code in another page</text><text start="9633.939" dur="6.75">perhaps 0x04A1 however since the NX</text><text start="9638.17" dur="4.74">bit is set on that page 
any attempt to</text><text start="9640.689" dur="4.591">execute the code loaded by the exploit</text><text start="9642.91" dur="4.65">will trigger a CPU fault and the process</text><text start="9645.28" dur="4.17">will be terminated this mechanism</text><text start="9647.56" dur="6.32">increases the security of the system</text><text start="9649.45" dur="4.43">against certain types of attacks</text><text start="9654.921" dur="6.3">in this lecture I will begin discussing</text><text start="9658.101" dur="5.37">virtual memory due to the complexity of</text><text start="9661.221" dur="3.99">the virtual memory subsystem the second</text><text start="9663.471" dur="6.42">part of this introduction will be given</text><text start="9665.211" dur="7.229">as a second lecture we have previously</text><text start="9669.891" dur="5.73">seen that each process in the system can be</text><text start="9672.44" dur="5.521">given its own logical memory space this</text><text start="9675.621" dur="4.44">arrangement allows logical pages to be</text><text start="9677.961" dur="3.689">mapped to physical frames without the</text><text start="9680.061" dur="4.08">need for physical frames to be</text><text start="9681.65" dur="4.441">contiguous eliminating external</text><text start="9684.141" dur="4.259">fragmentation and increasing the degree</text><text start="9686.091" dur="5.43">of multiprogramming the system can</text><text start="9688.4" dur="4.771">support we can further increase the</text><text start="9691.521" dur="3.99">degree of multiprogramming in the</text><text start="9693.171" dur="5.279">system by recognizing that processes do</text><text start="9695.511" dur="5.719">not actually use all the memory in their</text><text start="9698.45" dur="5.401">logical address spaces at any given time</text><text start="9701.23" dur="4.481">parts of the program code including</text><text start="9703.851" dur="5.179">error handlers and infrequently called</text><text start="9705.711" 
dur="5.699">functions are not utilized often</text><text start="9709.03" dur="4.87">furthermore arrays and other data</text><text start="9711.41" dur="6.24">structures are often oversized and used</text><text start="9713.9" dur="6.3">in sections instead of all at once if we</text><text start="9717.65" dur="5.311">can swap unused pages out of memory and</text><text start="9720.2" dur="5.91">onto a backing store such as a hard disk</text><text start="9722.961" dur="5.22">we can fit more processes into memory at</text><text start="9726.11" dur="5.281">once increasing our degree of</text><text start="9728.181" dur="5.94">multiprogramming furthermore we can give</text><text start="9731.391" dur="5.67">each process its own large logical</text><text start="9734.121" dur="4.77">memory space which can in fact be larger</text><text start="9737.061" dur="7.049">than the amount of physical RAM on the</text><text start="9738.891" dur="7.41">system when we add a backing store the</text><text start="9744.11" dur="5.281">general address translation process</text><text start="9746.301" dur="5.67">remains the same processes access</text><text start="9749.391" dur="4.559">memory using logical addresses which are</text><text start="9751.971" dur="5.46">translated into physical addresses by</text><text start="9753.95" dur="5.99">the MMU the page table is still utilized</text><text start="9757.431" dur="5.4">to store the page to frame mappings</text><text start="9759.94" dur="4.75">however we do add some complexity in</text><text start="9762.831" dur="4.59">that a frame could be swapped out to</text><text start="9764.69" dur="5.101">disk at the time when it is needed the</text><text start="9767.421" dur="4.92">CPU must provide a mechanism to detect</text><text start="9769.791" dur="4.71">the situation and generate a fault that</text><text start="9772.341" dur="3.93">the operating system can handle to bring</text><text start="9774.501" dur="4.01">the required page back into 
physical</text><text start="9776.271" dur="2.24">memory</text><text start="9779.499" dur="6.04">the process of moving pages or frames of</text><text start="9783.229" dur="4.59">memory back and forth between RAM and</text><text start="9785.539" dur="5.851">the backing store is known either as</text><text start="9787.819" dur="5.611">swapping or as paging historically the</text><text start="9791.39" dur="4.859">term swapping referred to the movement</text><text start="9793.43" dur="6.38">of entire logical address spaces or</text><text start="9796.249" dur="6.811">entire processes between RAM and disk</text><text start="9799.81" dur="5.559">moving single pages or frames of data</text><text start="9803.06" dur="6.12">between RAM and the disk was called</text><text start="9805.369" dur="5.91">paging in modern practice both terms are</text><text start="9809.18" dur="3.84">used interchangeably and the Linux</text><text start="9811.279" dur="5.76">kernel component that performs page</text><text start="9813.02" dur="6.66">movements is called the swapper a single</text><text start="9817.039" dur="4.591">movement of a single page frame into or</text><text start="9819.68" dur="5.58">out of physical memory is called a page</text><text start="9821.63" dur="5.819">swap historically Linux machines used a</text><text start="9825.26" dur="5.179">dedicated hard disk partition to store</text><text start="9827.449" dur="5.611">the pages that were swapped out to disk</text><text start="9830.439" dur="5.111">modern versions of Linux are just as</text><text start="9833.06" dur="5.009">efficient using a swap file which is a</text><text start="9835.55" dur="5.849">regular file stored alongside other data</text><text start="9838.069" dur="5.521">in the file system it should be noted</text><text start="9841.399" dur="5.701">that swapping is an optional feature and</text><text start="9843.59" dur="5.909">it is possible and even quite common to</text><text start="9847.1" dur="5.58">run systems 
without any backing store or</text><text start="9849.499" dur="5.67">swapping capability most embedded Linux</text><text start="9852.68" dur="5.549">systems such as Android devices do not</text><text start="9855.169" dur="5.131">use a backing store if memory cannot be</text><text start="9858.229" dur="6.081">allocated to a process on such a system</text><text start="9860.3" dur="4.01">the process typically crashes</text><text start="9864.69" dur="6.15">now page swaps are implemented by the</text><text start="9867.93" dur="5.1">operating system some assistance from</text><text start="9870.84" dur="5.19">hardware is required to determine when a</text><text start="9873.03" dur="4.92">page swap needs to be performed when</text><text start="9876.03" dur="4.02">translating a page number to a frame</text><text start="9877.95" dur="4.321">number the MMU checks to see if the</text><text start="9880.05" dur="6.24">corresponding frame is resident or</text><text start="9882.271" dur="7.259">loaded in RAM if the frame is present</text><text start="9886.29" dur="6.24">the memory access proceeds as normal if</text><text start="9889.53" dur="6.15">the frame is not present in RAM however</text><text start="9892.53" dur="5.491">the MMU generates a page fault which is</text><text start="9895.68" dur="5.341">a CPU exception that is similar in</text><text start="9898.021" dur="5.129">concept to an interrupt a specific page</text><text start="9901.021" dur="4.349">fault handling routine is registered</text><text start="9903.15" dur="4.08">with the system either as part of the</text><text start="9905.37" dur="4.13">interrupt vector table or using a</text><text start="9907.23" dur="5.55">separate structure for fault handlers a</text><text start="9909.5" dur="6.851">page fault causes this routine known as</text><text start="9912.78" dur="5.55">the swapper in Linux to be invoked it is</text><text start="9916.351" dur="3.719">then the responsibility of the swapper</text><text 
start="9918.33" dur="3.981">to locate the missing page on the</text><text start="9920.07" dur="5.52">backing store and load it into RAM</text><text start="9922.311" dur="8.529">possibly moving some other page frame to</text><text start="9925.59" dur="7.89">the backing store in the process the</text><text start="9930.84" dur="6.301">address translation process gains a few</text><text start="9933.48" dur="6.15">steps when paging is utilized a process</text><text start="9937.141" dur="4.829">makes a memory request using a logical</text><text start="9939.63" dur="5.79">address in its private address space as</text><text start="9941.97" dur="5.551">usual the MMU first checks the</text><text start="9945.42" dur="4.02">translation lookaside buffer to</text><text start="9947.521" dur="5.639">determine if the page to frame mapping</text><text start="9949.44" dur="6.121">is present in the case of a TLB miss the</text><text start="9953.16" dur="5.43">MMU must consult the page table to find</text><text start="9955.561" dur="5.519">the mapping once the mapping from page</text><text start="9958.59" dur="4.44">number to frame number is known the MMU</text><text start="9961.08" dur="5.311">must next verify that the page is</text><text start="9963.03" dur="5.52">actually loaded in physical RAM if the</text><text start="9966.391" dur="4.67">corresponding frame is available in RAM</text><text start="9968.55" dur="5.19">the memory access proceeds as normal</text><text start="9971.061" dur="5.349">however if the corresponding frame is</text><text start="9973.74" dur="4.23">not in memory the MMU generates a page</text><text start="9976.41" dur="4.8">fault which is essentially a type of</text><text start="9977.97" dur="5.22">interrupt if generated the page fault</text><text start="9981.21" dur="4.561">causes the operating system to switch</text><text start="9983.19" dur="4.56">context to the page fault handling</text><text start="9985.771" dur="3.929">routine which retrieves 
the</text><text start="9987.75" dur="4.62">corresponding memory contents from the</text><text start="9989.7" dur="5.67">backing store once this process is</text><text start="9992.37" dur="4.619">complete the OS changes the CPU context</text><text start="9995.37" dur="6.05">back to the original process</text><text start="9996.989" dur="7.051">and the memory access proceeds as normal</text><text start="10001.42" dur="5.019">in order for the MMU to be able to</text><text start="10004.04" dur="4.14">detect situations in which a requested</text><text start="10006.439" dur="5.04">memory frame is not physically present</text><text start="10008.18" dur="6.75">in RAM an extra bit must be added to the</text><text start="10011.479" dur="5.67">page table this bit is set to 1 whenever</text><text start="10014.93" dur="5.28">the contents of a logical page are</text><text start="10017.149" dur="5.401">present in a memory frame if the present</text><text start="10020.21" dur="5.279">bit is 0 the page has been swapped out</text><text start="10022.55" dur="5.429">to the backing store for efficiency</text><text start="10025.489" dur="5.16">reasons the TLB entry corresponding to a</text><text start="10027.979" dur="6.09">row in the page table must also store</text><text start="10030.649" dur="5.46">the present bit you might have noticed</text><text start="10034.069" dur="4.08">that the terminology between page and</text><text start="10036.109" dur="5.79">frame is starting to become a bit blurry</text><text start="10038.149" dur="6.3">here in general we refer to pages of</text><text start="10041.899" dur="4.8">memory being swapped out to disk even</text><text start="10044.449" dur="5.721">though the swap operation is actually</text><text start="10046.699" dur="6.721">moving physical memory frame contents</text><text start="10050.17" dur="5.74">this fuzzy terminology is a result of</text><text start="10053.42" dur="6.809">historical evolution of the virtual</text><text start="10055.91" 
dur="6.42">memory subsystem now I&amp;#39;d like to take a</text><text start="10060.229" dur="4.531">moment to discuss the nature of backing</text><text start="10062.33" dur="5.94">stores as technology is changing in this</text><text start="10064.76" dur="5.939">area historically the backing store was</text><text start="10068.27" dur="4.169">a mechanical hard disk drive and a</text><text start="10070.699" dur="3.841">number of design decisions in the</text><text start="10072.439" dur="5.821">virtual memory subsystem still use this</text><text start="10074.54" dur="6.81">assumption however many systems now</text><text start="10078.26" dur="6.63">especially embedded systems have only</text><text start="10081.35" dur="6.06">solid-state storage since each block on</text><text start="10084.89" dur="4.74">a solid-state drive can be erased and</text><text start="10087.41" dur="4.98">written only a finite number of times</text><text start="10089.63" dur="4.8">there is some question as to whether it</text><text start="10092.39" dur="5.16">is a good idea to use an SSD as a</text><text start="10094.43" dur="6.059">backing store for virtual memory many</text><text start="10097.55" dur="6.03">embedded devices do not use paging for</text><text start="10100.489" dur="5.04">this reason another issue with the</text><text start="10103.58" dur="4.199">backing store is that it is subject to</text><text start="10105.529" dur="4.62">attack via forensic disk analysis</text><text start="10107.779" dur="5.91">methods in the event the device is lost</text><text start="10110.149" dur="5.661">or stolen sensitive information such as</text><text start="10113.689" dur="4.5">cached passwords and other credentials</text><text start="10115.81" dur="4.69">might have been swapped out to the</text><text start="10118.189" dur="5.311">backing store and these pieces of</text><text start="10120.5" dur="4.979">information could be recovered one</text><text start="10123.5" dur="4.35">solution to this 
problem which is</text><text start="10125.479" dur="3.851">available as an easy to enable option in</text><text start="10127.85" dur="3.85">Mac OS X</text><text start="10129.33" dur="5.34">is to encrypt the contents of virtual</text><text start="10131.7" dur="6.27">memory the downside to this approach is</text><text start="10134.67" dur="5.43">the addition of CPU overhead on top of</text><text start="10137.97" dur="6.36">the generally slow nature of the backing</text><text start="10140.1" dur="6.6">store hardware another approach to</text><text start="10144.33" dur="4.29">avoiding the issues of write limits and</text><text start="10146.7" dur="4.53">post-mortem forensic recovery of</text><text start="10148.62" dur="5.87">sensitive memory data is to use the</text><text start="10151.23" dur="7.081">Linux compressed caching or comp cache</text><text start="10154.49" dur="6.01">mechanism as a backing store with this</text><text start="10158.311" dur="4.559">approach a section of RAM is reserved</text><text start="10160.5" dur="5.73">ahead of time to create a compressed ram</text><text start="10162.87" dur="6.51">disk or zram disk when a page is</text><text start="10166.23" dur="5.37">swapped out to the zram disk it is</text><text start="10169.38" dur="5.49">compressed on-the-fly to fit into a</text><text start="10171.6" dur="5.37">smaller amount of memory whenever a page</text><text start="10174.87" dur="5.191">needs to be swapped in from the backing</text><text start="10176.97" dur="5.15">store the page is read from the zram disk</text><text start="10180.061" dur="3.96">and decompressed</text><text start="10182.12" dur="4.36">although the compression and</text><text start="10184.021" dur="5.369">decompression steps do result in CPU</text><text start="10186.48" dur="5.16">overhead the comp cache system is still</text><text start="10189.39" dur="6.3">generally faster than using a disk or</text><text start="10191.64" dur="6.51">SSD as a backing store furthermore 
comp</text><text start="10195.69" dur="5.371">cache is as secure as RAM against</text><text start="10198.15" dur="4.92">forensic analysis particularly against</text><text start="10201.061" dur="4.289">recovering sensitive information from a</text><text start="10203.07" dur="4.97">system that has been powered off for a</text><text start="10205.35" dur="2.69">period of time</text><text start="10208.51" dur="4.08">in this lecture I will continue the</text><text start="10210.819" dur="3.681">discussion of virtual memory by</text><text start="10212.59" dur="5.03">discussing paging performance and</text><text start="10214.5" dur="8.52">introducing demand paging copy-on-write</text><text start="10217.62" dur="5.4">memory mapped files and shared libraries</text><text start="10223.2" dur="4.6">swapping pages to and from a backing</text><text start="10225.819" dur="5.311">store is a relatively slow operation</text><text start="10227.8" dur="4.86">compared to a direct access to RAM due</text><text start="10231.13" dur="3.87">to the fact that most backing store</text><text start="10232.66" dur="5.4">hardware is several orders of magnitude</text><text start="10235" dur="5.55">slower than RAM whenever a page fault</text><text start="10238.06" dur="4.53">occurs the memory access that triggers</text><text start="10240.55" dur="5.33">the page fault will require more time</text><text start="10242.59" dur="5.939">than a non faulting memory access as</text><text start="10245.88" dur="4.72">long as the number of pages swapped out</text><text start="10248.529" dur="4.171">to the backing store is relatively small</text><text start="10250.6" dur="4.53">compared to the total number of pages of</text><text start="10252.7" dur="5.13">memory in the system the performance</text><text start="10255.13" dur="5.97">cost of page swaps is amortized over all</text><text start="10257.83" dur="5.819">memory accesses as a result the average</text><text start="10261.1" dur="4.89">memory access time is 
increased by only</text><text start="10263.649" dur="5.761">a small amount relative to a system that</text><text start="10265.99" dur="5.76">does not swap out pages however if the</text><text start="10269.41" dur="5.31">fraction of memory accesses resulting in</text><text start="10271.75" dur="6">page faults becomes too high the system</text><text start="10274.72" dur="6.059">begins swapping or thrashing its memory</text><text start="10277.75" dur="5.1">on most or all memory accesses this</text><text start="10280.779" dur="3.87">situation typically occurs when memory</text><text start="10282.85" dur="4.41">becomes oversubscribed due to a</text><text start="10284.649" dur="4.651">program bug and the result is a system</text><text start="10287.26" dur="5.55">that is painfully slow to respond to</text><text start="10289.3" dur="6.42">user input in some cases on some systems</text><text start="10292.81" dur="6.799">the OS may need to be rebooted to</text><text start="10295.72" dur="3.889">recover from a swapping situation</text><text start="10299.74" dur="4.65">with any paging system it is necessary</text><text start="10302.5" dur="4.44">to decide on a virtual memory fetch</text><text start="10304.39" dur="4.77">policy which determines how data are</text><text start="10306.94" dur="4.86">loaded into memory pages for the first</text><text start="10309.16" dur="5.67">time typically when a program is started</text><text start="10311.8" dur="5.55">a lazy implementation of page fetching</text><text start="10314.83" dur="4.47">is to use demand paging which loads a</text><text start="10317.35" dur="5.4">page from the backing store into RAM</text><text start="10319.3" dur="4.82">only when it is actually required this</text><text start="10322.75" dur="3.45">approach improves the overall</text><text start="10324.12" dur="4.42">responsiveness of the system and</text><text start="10326.2" dur="4.89">increases the degree of multiprogramming</text><text start="10328.54" 
dur="4.85">at the expense of reducing the initial</text><text start="10331.09" dur="5.37">performance of newly started processes</text><text start="10333.39" dur="5.95">the main alternative to demand paging is</text><text start="10336.46" dur="5.12">prefetching which loads some pages into</text><text start="10339.34" dur="4.62">memory before they are actually needed</text><text start="10341.58" dur="4.2">prefetching may waste some memory by</text><text start="10343.96" dur="4.26">loading data that will not be used</text><text start="10345.78" dur="3.85">however it does improve the startup</text><text start="10348.22" dur="4.44">performance of many software</text><text start="10349.63" dur="5.01">applications for this reason prefetching</text><text start="10352.66" dur="5">is a fairly common feature of desktop</text><text start="10354.64" dur="3.02">operating systems</text><text start="10358.32" dur="5.52">when memory is limited or electrical</text><text start="10360.93" dur="4.5">power is severely limited a pure demand</text><text start="10363.84" dur="2.52">paging approach to fetching may be</text><text start="10365.43" dur="3.93">appropriate</text><text start="10366.36" dur="5.67">pure demand paging does not prefetch any</text><text start="10369.36" dur="5.161">pages including the text segments of</text><text start="10372.03" dur="5.16">newly started programs when a new</text><text start="10374.521" dur="5.099">program starts a page fault occurs on the</text><text start="10377.19" dur="4.91">first instruction and the first page of</text><text start="10379.62" dur="4.89">the program code is loaded into memory</text><text start="10382.1" dur="4.15">pure demand paging has the greatest</text><text start="10384.51" dur="4.17">potential to increase the degree of</text><text start="10386.25" dur="4.29">multiprogramming particularly in</text><text start="10388.68" dur="4.77">situations where physical RAM is</text><text start="10390.54" dur="5.1">extremely limited in 
addition the lack</text><text start="10393.45" dur="5.16">of prefetching can save CPU and memory</text><text start="10395.64" dur="5.81">operations thus providing a small power</text><text start="10398.61" dur="6.12">savings when operating from battery as</text><text start="10401.45" dur="5.56">such pure demand paging could be useful</text><text start="10404.73" dur="4.56">on smaller embedded systems especially</text><text start="10407.01" dur="6.87">monitoring devices and other systems</text><text start="10409.29" dur="7.11">without direct human interaction while</text><text start="10413.88" dur="5.16">the fetching policy has a high impact on</text><text start="10416.4" dur="5.16">newly started programs a copy-on-write</text><text start="10419.04" dur="4.91">approach improves performance whenever a</text><text start="10421.56" dur="5.25">process makes a copy of itself on</text><text start="10423.95" dur="4.84">unix-like systems new processes are</text><text start="10426.81" dur="6.211">created when a running process clones</text><text start="10428.79" dur="6.42">itself with the fork system call cloning</text><text start="10433.021" dur="5.189">all memory at fork time could be a slow</text><text start="10435.21" dur="5.66">operation since a process might be using</text><text start="10438.21" dur="4.95">a large number of pages copy-on-write</text><text start="10440.87" dur="4.51">addresses this performance issue by</text><text start="10443.16" dur="5.31">having the clone initially share the</text><text start="10445.38" dur="5.06">original process&amp;#39;s memory pages these</text><text start="10448.47" dur="4.44">shared pages are marked read-only</text><text start="10450.44" dur="5.59">causing the MMU to fault if either</text><text start="10452.91" dur="4.98">process tries to write to them the</text><text start="10456.03" dur="4.44">operating system handles this type of</text><text start="10457.89" dur="4.59">fault by copying the page and turning</text><text 
start="10460.47" dur="5.27">off the read-only bits on both the copy</text><text start="10462.48" dur="3.26">and the original page</text><text start="10465.99" dur="5.19">this diagram illustrates the</text><text start="10468.24" dur="5.52">copy-on-write process in the top half of</text><text start="10471.18" dur="5.28">the diagram the original process forks a</text><text start="10473.76" dur="5.46">child process which initially shares the</text><text start="10476.46" dur="5.22">original parent&amp;#39;s pages the child</text><text start="10479.22" dur="5.13">proceeds to modify page three in the</text><text start="10481.68" dur="5.43">bottom half of the diagram when this</text><text start="10484.35" dur="4.65">modification is attempted the MMU raises</text><text start="10487.11" dur="4.71">a fault that traps into the operating</text><text start="10489" dur="5.46">system the operating system makes a copy</text><text start="10491.82" dur="4.53">of the corresponding memory frame gives</text><text start="10494.46" dur="4.08">the copy to the child process and</text><text start="10496.35" dur="5.58">removes the read-only setting on both</text><text start="10498.54" dur="5.52">the original and clone page the child&amp;#39;s</text><text start="10501.93" dur="4.62">page is then updated with the modified</text><text start="10504.06" dur="6.05">value from the memory write and normal</text><text start="10506.55" dur="3.56">process execution resumes</text><text start="10510.511" dur="4.83">in addition to increasing the degree of</text><text start="10512.971" dur="4.109">multiprogramming by enabling pages of</text><text start="10515.341" dur="4.29">memory to be swapped out to a backing</text><text start="10517.08" dur="4.86">store the virtual memory subsystem can</text><text start="10519.631" dur="5.159">also be used to improve IO performance</text><text start="10521.94" dur="5.25">for processes on the system when</text><text start="10524.79" dur="5.311">programs request IO from 
persistent</text><text start="10527.19" dur="4.38">storage devices these requests generally</text><text start="10530.101" dur="3.96">take a significant amount of time to</text><text start="10531.57" dur="5.79">fulfill due to the relatively slow speed</text><text start="10534.061" dur="5.46">of the persistent storage device since</text><text start="10537.36" dur="4.83">many processes perform frequent reads</text><text start="10539.521" dur="4.799">and writes of small pieces of data the</text><text start="10542.19" dur="4.05">performance overheads caused by waiting</text><text start="10544.32" dur="4.92">for device IO to complete can become</text><text start="10546.24" dur="5.49">substantial the virtual memory subsystem</text><text start="10549.24" dur="6.181">can be made to reduce this overhead by</text><text start="10551.73" dur="5.97">memory mapping files page sized pieces</text><text start="10555.421" dur="3.989">of a file are loaded into memory where</text><text start="10557.7" dur="4.2">they can be read and written efficiently</text><text start="10559.41" dur="6.861">these pages are periodically written</text><text start="10561.9" dur="7.531">back to the persistent storage device as</text><text start="10566.271" dur="5.049">is the case with paging in general there</text><text start="10569.431" dur="4.319">is no requirement that memory mapped</text><text start="10571.32" dur="5.611">files be mapped into contiguous regions</text><text start="10573.75" dur="5.671">of physical memory logical addressing is</text><text start="10576.931" dur="5.149">used to present an ordered contiguous</text><text start="10579.421" dur="5.609">logical view of the file to the process</text><text start="10582.08" dur="5.37">however the file may be mapped out of</text><text start="10585.03" dur="4.8">order in non contiguous memory frames</text><text start="10587.45" dur="4.78">although the file will be put back in</text><text start="10589.83" dur="4.53">order when it is stored to disk the 
file</text><text start="10592.23" dur="6.51">still might be non contiguous if the</text><text start="10594.36" dur="6.151">file system is fragmented to reduce the</text><text start="10598.74" dur="4.29">total amount of memory required by a</text><text start="10600.511" dur="5.309">program and to make software development</text><text start="10603.03" dur="5.191">easier certain programming routines are</text><text start="10605.82" dur="4.441">implemented in shared libraries which</text><text start="10608.221" dur="5.009">can be used by multiple programs on the</text><text start="10610.261" dur="5.46">system when these libraries are used by</text><text start="10613.23" dur="5.25">a program they must be made available to</text><text start="10615.721" dur="5.069">the program when it runs in the</text><text start="10618.48" dur="4.951">simple case which defeats the memory</text><text start="10620.79" dur="5.731">saving potential of shared libraries the</text><text start="10623.431" dur="5.25">compiler may statically link or copy the</text><text start="10626.521" dur="4.769">library into the program executable at</text><text start="10628.681" dur="4.95">compile time a more efficient approach</text><text start="10631.29" dur="5.96">is for the loader to link the program to</text><text start="10633.631" dur="5.939">the shared library at runtime on disk</text><text start="10637.25" dur="5.141">precompiled shared libraries are stored</text><text start="10639.57" dur="4.57">in binary form as shared objects or</text><text start="10642.391" dur="4.269">.so files on Linux</text><text start="10644.14" dur="6.36">these libraries are called dynamic link</text><text start="10646.66" dur="5.819">libraries or DLLs on Windows runtime</text><text start="10650.5" dur="5.04">dynamic linking of these shared</text><text start="10652.479" dur="5.01">libraries relies on read only shared</text><text start="10655.54" dur="5.93">pages that can be used by multiple</text><text start="10657.489" 
dur="6.5">programs at the same time</text><text start="10661.47" dur="4.559">shared libraries are made available to</text><text start="10663.989" dur="4.26">processes by mapping them into the</text><text start="10666.029" dur="4.651">middle of the logical address spaces of</text><text start="10668.249" dur="5.28">each process between the stack and the</text><text start="10670.68" dur="4.95">heap these shared objects limit the</text><text start="10673.529" dur="3.84">maximum size to which the stack or</text><text start="10675.63" dur="4.979">heap can grow before running out of</text><text start="10677.369" dur="5.46">space a situation in which shared</text><text start="10680.609" dur="4.35">library mapping becomes a problem is</text><text start="10682.829" dur="5.851">when virtual machines are implemented on</text><text start="10684.959" dur="6.091">32-bit hosts the hypervisor process on</text><text start="10688.68" dur="6.09">these hosts has a maximum logical</text><text start="10691.05" dur="5.97">address space of 4 gigabytes when the</text><text start="10694.77" dur="4.349">hypervisor tries to allocate a large</text><text start="10697.02" dur="4.86">contiguous block of memory to provide</text><text start="10699.119" dur="4.53">RAM to the guest system the allocation</text><text start="10701.88" dur="5.04">could run into the shared libraries</text><text start="10703.649" dur="5.55">resulting in a failure I have seen this</text><text start="10706.92" dur="4.559">type of failure occur on a 32-bit host</text><text start="10709.199" dur="5.01">when trying to allocate a contiguous</text><text start="10711.479" dur="5.191">block of just over 1.2 gigabytes on a</text><text start="10714.209" dur="5.4">Linux system with 4 gigabytes of</text><text start="10716.67" dur="5.64">physical RAM it is for this reason that</text><text start="10719.609" dur="5.76">a 64-bit operating system is desirable</text><text start="10722.31" dur="5.879">even for physical systems with only 2 to 
4</text><text start="10725.369" dur="4.441">gigabytes of RAM</text><text start="10728.189" dur="3.96">In this lecture I will discuss page</text><text start="10729.81" dur="5.399">replacement as it is used in the virtual</text><text start="10732.149" dur="6.06">memory subsystem I will discuss global</text><text start="10735.209" dur="4.681">and local page replacement page table</text><text start="10738.209" dur="4.171">entries that support page replacement</text><text start="10739.89" dur="7.469">and a number of classical page</text><text start="10742.38" dur="6.96">replacement algorithms whenever there is</text><text start="10747.359" dur="3.811">a demand for memory that is greater than</text><text start="10749.34" dur="4.199">the actual amount of physical RAM</text><text start="10751.17" dur="4.529">installed on the system the operating</text><text start="10753.539" dur="4.351">system must determine which pages will</text><text start="10755.699" dur="5.071">be kept in memory and which pages will</text><text start="10757.89" dur="5.46">be swapped out to disk when memory frame</text><text start="10760.77" dur="4.679">contents are swapped out to disk the</text><text start="10763.35" dur="4.37">operating system needs to find a page of</text><text start="10765.449" dur="4.861">memory that is not currently in use</text><text start="10767.72" dur="4.689">furthermore in an ideal case the</text><text start="10770.31" dur="4.469">operating system should also pick a page</text><text start="10772.409" dur="4.53">that will not be used for some time</text><text start="10774.779" dur="6.18">so as to reduce the total number of page</text><text start="10776.939" dur="6.151">swaps in order for the page swapper to</text><text start="10780.959" dur="4.62">operate some additional data must be</text><text start="10783.09" dur="4.229">kept about each page including whether</text><text start="10785.579" dur="3.45">or not the page has been referenced and</text><text start="10787.319" 
dur="5.87">whether or not the page has been altered</text><text start="10789.029" dur="4.16">since it was last swapped into RAM</text><text start="10793.97" dur="5.71">decisions regarding page swaps can be</text><text start="10796.68" dur="5.16">made globally or locally with global</text><text start="10799.68" dur="4.2">replacement any page in the system is a</text><text start="10801.84" dur="4.8">potential candidate to be swapped out in</text><text start="10803.88" dur="4.949">favor of another page while simpler to</text><text start="10806.64" dur="4.319">implement global page replacement does</text><text start="10808.829" dur="5.551">allow processes to steal memory frames</text><text start="10810.959" dur="5.73">from each other on the other hand local</text><text start="10814.38" dur="5.22">replacement policies allocate a limited</text><text start="10816.689" dur="5.1">number of frames to each process when a</text><text start="10819.6" dur="4.95">process exceeds its frame allocation</text><text start="10821.789" dur="6.471">only frames belonging to that process</text><text start="10824.55" dur="3.71">are selected for replacement</text><text start="10828.499" dur="4.54">implementing a local page replacement</text><text start="10830.579" dur="5.341">algorithm is more complex than it seems</text><text start="10833.039" dur="5.94">on the surface largely due to Belady&amp;#39;s</text><text start="10835.92" dur="5.64">anomaly allocating more frames to a</text><text start="10838.979" dur="4.38">process does not necessarily reduce the</text><text start="10841.56" dur="5.37">number of page faults that occur as a</text><text start="10843.359" dur="6.061">result of that process in fact with some</text><text start="10846.93" dur="4.399">replacement algorithms increasing the</text><text start="10849.42" dur="4.47">number of available frames actually</text><text start="10851.329" dur="6.46">increases the number of page faults that</text><text start="10853.89" dur="6.21">occur 
at the opposite extreme allocation</text><text start="10857.789" dur="3.911">of too few memory frames to a process</text><text start="10860.1" dur="4.72">also increases the number</text><text start="10861.7" dur="5.31">of page faults without enough physical</text><text start="10864.82" dur="4.83">memory processes will spend more time</text><text start="10867.01" dur="5.94">page faulting or swapping than they will</text><text start="10869.65" dur="5.25">spend executing the goal with a local</text><text start="10872.95" dur="4.25">replacement algorithm is to find an</text><text start="10874.9" dur="5.1">optimal working set for each process</text><text start="10877.2" dur="4.6">this working set is the minimum number</text><text start="10880" dur="4.62">of frames that a process actually</text><text start="10881.8" dur="7.23">requires in order to execute to some</text><text start="10884.62" dur="6.45">desired level of efficiency regardless</text><text start="10889.03" dur="4.35">of whether global or local replacement</text><text start="10891.07" dur="4.26">policies are chosen the operating system</text><text start="10893.38" dur="5.22">needs a few pieces of information to</text><text start="10895.33" dur="5.25">implement page swapping correctly first</text><text start="10898.6" dur="4.17">the operating system needs to know</text><text start="10900.58" dur="3.899">whether or not a page that is currently</text><text start="10902.77" dur="5.55">in memory has been referenced by a</text><text start="10904.479" dur="6">process pages that are loaded but unused</text><text start="10908.32" dur="5.43">might be better candidates to be</text><text start="10910.479" dur="5.011">swapped out to the backing store the</text><text start="10913.75" dur="3.54">second piece of information that needs</text><text start="10915.49" dur="5.43">to be stored in the page table is the</text><text start="10917.29" dur="6.33">dirty bit this bit is set to 1 whenever</text><text start="10920.92" 
dur="4.74">a process writes to a page which lets</text><text start="10923.62" dur="4.08">the operating system know that the copy</text><text start="10925.66" dur="3.779">of the memory frame contents on the</text><text start="10927.7" dur="5.61">backing store needs to be updated</text><text start="10929.439" dur="5.641">whenever the page is swapped out keep in</text><text start="10933.31" dur="4.98">mind that on systems with hardware</text><text start="10935.08" dur="6.84">managed page tables such as the x86 and</text><text start="10938.29" dur="5.95">x86-64 platforms the MMU updates these</text><text start="10941.92" dur="4.6">bits automatically</text><text start="10944.24" dur="4.65">whenever a page fault occurs the</text><text start="10946.52" dur="4.98">operating system must locate the desired</text><text start="10948.89" dur="4.92">page on the backing store then it must</text><text start="10951.5" dur="5.16">find a free frame in RAM into which the</text><text start="10953.81" dur="4.74">page can be loaded if no memory frames</text><text start="10956.66" dur="3.93">are free the operating system must</text><text start="10958.55" dur="5.7">select a victim frame to be swapped out</text><text start="10960.59" dur="5.43">to the backing store the algorithm that</text><text start="10964.25" dur="3.18">is run to determine which frame will</text><text start="10966.02" dur="4.65">be the victim is called a page</text><text start="10967.43" dur="5.25">replacement algorithm page replacement</text><text start="10970.67" dur="3.66">algorithms ideally should minimize the</text><text start="10972.68" dur="3.78">total number of page faults in the</text><text start="10974.33" dur="5.07">running system in order to maximize</text><text start="10976.46" dur="4.769">system performance let&amp;#39;s take a look at</text><text start="10979.4" dur="6.39">several classic page replacement</text><text start="10981.229" dur="6.511">algorithms the first classical page</text><text 
start="10985.79" dur="5.28">replacement algorithm we will consider</text><text start="10987.74" dur="5.88">is the random algorithm whenever a page</text><text start="10991.07" dur="5.159">swap is required this algorithm simply</text><text start="10993.62" dur="5.04">picks a victim frame at random in</text><text start="10996.229" dur="4.051">practice this random selection often</text><text start="10998.66" dur="4.08">picks a page that will be needed in the</text><text start="11000.28" dur="5.64">near future leading to another page</text><text start="11002.74" dur="5.16">fault in a short time period as such it</text><text start="11005.92" dur="5.64">is not effective for minimizing page</text><text start="11007.9" dur="6.12">faults another ineffective algorithm is</text><text start="11011.56" dur="4.41">to select the oldest page or the page</text><text start="11014.02" dur="5.25">that has been in memory for the longest</text><text start="11015.97" dur="5.88">period of time unfortunately this page</text><text start="11019.27" dur="5.16">could be frequently accessed so if it</text><text start="11021.85" dur="4.17">is swapped out another page fault could</text><text start="11024.43" dur="5.43">be triggered in a short period of time</text><text start="11026.02" dur="5.97">to bring it back into memory somewhat</text><text start="11029.86" dur="3.6">counter-intuitively selecting the frame</text><text start="11031.99" dur="4.71">that has been accessed the least</text><text start="11033.46" dur="5.25">frequently is also ineffective a page</text><text start="11036.7" dur="4.77">that is used relatively infrequently</text><text start="11038.71" dur="4.59">might still be used regularly which</text><text start="11041.47" dur="6.27">would lead to another page fault to</text><text start="11043.3" dur="6.33">bring this frame back into RAM the most</text><text start="11047.74" dur="3.81">frequently used algorithm picks</text><text start="11049.63" dur="4.71">whichever frame is being 
used the most</text><text start="11051.55" dur="5.04">and selects that frame to be swapped out</text><text start="11054.34" dur="4.86">to the backing store this is a</text><text start="11056.59" dur="5.13">completely stupid idea since this page</text><text start="11059.2" dur="5.88">is likely to be accessed again shortly</text><text start="11061.72" dur="5.43">after it is swapped out a good algorithm</text><text start="11065.08" dur="5.399">for choosing victim frames is the least</text><text start="11067.15" dur="6.06">recently used or LRU algorithm this</text><text start="11070.479" dur="5.221">algorithm selects the victim frame that</text><text start="11073.21" dur="4.04">has not been accessed for the longest</text><text start="11075.7" dur="3.72">period of time</text><text start="11077.25" dur="4.391">unfortunately with current hardware</text><text start="11079.42" dur="5.641">there is no good way to track the last</text><text start="11081.641" dur="5.219">memory access time tracking every access</text><text start="11085.061" dur="4.319">in software would be a terrible idea</text><text start="11086.86" dur="4.94">since such a scheme would require an</text><text start="11089.38" dur="5.67">interrupt on every memory access</text><text start="11091.8" dur="6.28">thus it is impractical to implement LRU</text><text start="11095.05" dur="4.53">directly most implemented page</text><text start="11098.08" dur="5.57">replacement algorithms are</text><text start="11099.58" dur="4.07">approximations of LRU however</text><text start="11103.891" dur="5.229">theoretically the optimal algorithm or</text><text start="11106.69" dur="6.27">opt is the best page replacement</text><text start="11109.12" dur="6">algorithm to use with this algorithm the</text><text start="11112.96" dur="4.11">operating system picks a frame that will</text><text start="11115.12" dur="5.13">not be accessed for the longest period</text><text start="11117.07" dur="5.491">of time as the victim delaying a 
future</text><text start="11120.25" dur="5.03">page fault related to the corresponding</text><text start="11122.561" dur="5.25">page for as long as possible a</text><text start="11125.28" dur="4.27">mathematical proof exists showing that</text><text start="11127.811" dur="3.919">opt is the best possible page</text><text start="11129.55" dur="5.1">replacement algorithm</text><text start="11131.73" dur="4.93">unfortunately opt is also impossible to</text><text start="11134.65" dur="4.29">implement since it must be able to</text><text start="11136.66" dur="6.03">predict all memory accesses ahead of</text><text start="11138.94" dur="6.48">time as such we are left with LRU</text><text start="11142.69" dur="6.451">approximation algorithms such as the not</text><text start="11145.42" dur="5.82">used recently or NUR algorithm</text><text start="11149.141" dur="4.679">NUR tracks frame accesses using a</text><text start="11151.24" dur="5.821">combination of the reference bit dirty</text><text start="11153.82" dur="5.16">bit and/or an age counter this algorithm</text><text start="11157.061" dur="5.21">produces reasonable performance in</text><text start="11158.98" dur="3.291">actual implementations</text><text start="11163.65" dur="4.17">in this lecture which is presented in</text><text start="11165.9" dur="6.121">two parts I will begin discussing</text><text start="11167.82" dur="6.571">processes I will introduce the process</text><text start="11172.021" dur="5.219">model discuss the type of information</text><text start="11174.391" dur="5.609">associated with the process give an</text><text start="11177.24" dur="6.061">overview of process state and introduce</text><text start="11180" dur="5.49">the concept of process forking as is</text><text start="11183.301" dur="7.559">usually the case my presentation is</text><text start="11185.49" dur="8.941">focused on unix-like systems let&amp;#39;s begin</text><text start="11190.86" dur="5.76">by defining what a process 
is a process</text><text start="11194.431" dur="5.79">is an instance of a computer program in</text><text start="11196.62" dur="6.061">execution when we ask a computer system to</text><text start="11200.221" dur="4.889">run a program the code for that program</text><text start="11202.681" dur="5.299">is loaded from disk into memory and</text><text start="11205.11" dur="6">executed as a process in the system on</text><text start="11207.98" dur="7.451">some platforms processes might be called</text><text start="11211.11" dur="6.691">jobs or tasks on a modern system a</text><text start="11215.431" dur="5.639">process consists of one or more threads</text><text start="11217.801" dur="6.059">of execution in other words a process</text><text start="11221.07" dur="4.981">can execute one instruction at a time or</text><text start="11223.86" dur="6.72">it can execute several instructions at</text><text start="11226.051" dur="6.119">the same time on the CPU each process on</text><text start="11230.58" dur="4.74">the system receives its own private</text><text start="11232.17" dur="6.06">allocation of resources each process</text><text start="11235.32" dur="4.71">also has access to its own data and the</text><text start="11238.23" dur="3.901">operating system maintains statistics</text><text start="11240.03" dur="7.441">about each process in order to make</text><text start="11242.131" dur="7.789">effective scheduling decisions in memory</text><text start="11247.471" dur="5.189">a process is divided into segments</text><text start="11249.92" dur="6.221">program code and other read-only data</text><text start="11252.66" dur="5.191">are placed into the text segment global</text><text start="11256.141" dur="3.689">variables in a program have their own</text><text start="11257.851" dur="5.429">data segment that allows both reading</text><text start="11259.83" dur="5.641">and writing automatic variables or</text><text start="11263.28" dur="4.351">local variables in functions 
are</text><text start="11265.471" dur="5.13">allocated at compile time and placed on</text><text start="11267.631" dur="4.92">the stack data structures explicitly</text><text start="11270.601" dur="5.549">allocated at runtime are placed on the</text><text start="11272.551" dur="5.04">heap as memory is used by a process the</text><text start="11276.15" dur="4.111">stack and the heap grow toward each</text><text start="11277.591" dur="5.01">other if a process makes use of shared</text><text start="11280.261" dur="4.229">libraries these libraries are mapped</text><text start="11282.601" dur="6.78">into process memory between the stack</text><text start="11284.49" dur="7.26">and the heap in order to track processes</text><text start="11289.381" dur="4.859">correctly and allow multiple processes</text><text start="11291.75" dur="4.351">to share the same system the operating</text><text start="11294.24" dur="2.56">system must track some information that</text><text start="11296.101" dur="3.519">is associated</text><text start="11296.8" dur="4.53">with each process this information</text><text start="11299.62" dur="4.38">includes the memory that the process is</text><text start="11301.33" dur="4.98">using as well as the current location in</text><text start="11304" dur="5.189">the process code that is executing known</text><text start="11306.31" dur="4.83">as the process program counter the</text><text start="11309.189" dur="5.311">operating system must also track other</text><text start="11311.14" dur="5.549">resources in use by a process including</text><text start="11314.5" dur="6.29">which files are currently open and any</text><text start="11316.689" dur="6.781">network connections the process is using</text><text start="11320.79" dur="5.08">in addition to the information generated</text><text start="11323.47" dur="4.05">by the process itself the operating</text><text start="11325.87" dur="4.65">system must keep scheduling information</text><text start="11327.52" 
dur="5.129">and statistics about each process this</text><text start="11330.52" dur="4.919">information includes a unique identifier</text><text start="11332.649" dur="4.83">or process ID that can be used to</text><text start="11335.439" dur="4.861">distinguish processes from each other in</text><text start="11337.479" dur="5.521">order to arbitrate access to system</text><text start="11340.3" dur="4.5">resources the operating system must also</text><text start="11343" dur="3.87">store information about the owner of a</text><text start="11344.8" dur="4.889">process so that permissions can be</text><text start="11346.87" dur="4.71">enforced correctly to facilitate</text><text start="11349.689" dur="3.991">scheduling decisions the operating</text><text start="11351.58" dur="4.59">system collects various statistics about</text><text start="11353.68" dur="4.53">process execution such as the amount of</text><text start="11356.17" dur="6.149">CPU time consumed and the amount of</text><text start="11358.21" dur="5.88">memory used during the lifetime of a</text><text start="11362.319" dur="4.771">process the process moves between</text><text start="11364.09" dur="5.33">several states when a process is first</text><text start="11367.09" dur="4.71">created it is initially in the new state</text><text start="11369.42" dur="4.87">once creation is complete and the</text><text start="11371.8" dur="4.32">process is ready to run it transitions</text><text start="11374.29" dur="4.71">to the ready state where it waits to be</text><text start="11376.12" dur="5.04">assigned a CPU core when the</text><text start="11379" dur="4.2">scheduler selects a ready process to run</text><text start="11381.16" dur="5.31">that process is moved to the running</text><text start="11383.2" dur="5.43">state and is given CPU resources during</text><text start="11386.47" dur="5.06">execution a process might request</text><text start="11388.63" dur="5.37">external resources such as disk i/o</text><text 
start="11391.53" dur="4.84">since these resources take time to</text><text start="11394" dur="4.14">provide the process is moved out of the</text><text start="11396.37" dur="4.05">running state and into the waiting state</text><text start="11398.14" dur="5.94">so that the CPU core can be given to</text><text start="11400.42" dur="5.22">another process finally when a process</text><text start="11404.08" dur="3.72">is finished it is placed in the</text><text start="11405.64" dur="4.469">terminated state so that the operating</text><text start="11407.8" dur="3.75">system can perform cleanup tasks before</text><text start="11410.109" dur="6.631">destroying the process instance</text><text start="11411.55" dur="6.96">completely in this diagram we can see</text><text start="11416.74" dur="5.43">how processes may transition between</text><text start="11418.51" dur="5.309">states at creation time a process is</text><text start="11422.17" dur="3.72">placed into the new state while the</text><text start="11423.819" dur="4.12">operating system allocates initial</text><text start="11425.89" dur="4.989">memory and other resources</text><text start="11427.939" dur="4.68">once creation is complete the process is</text><text start="11430.879" dur="4.92">submitted to the system and placed in</text><text start="11432.619" dur="5.34">the ready state whenever a CPU core is</text><text start="11435.799" dur="4.23">available to execute a process it is</text><text start="11437.959" dur="5.73">dispatched to the running state where it</text><text start="11440.029" dur="5.76">executes execution of a process can be</text><text start="11443.689" dur="4.621">interrupted for a variety of reasons if</text><text start="11445.789" dur="4.17">a hardware interrupt occurs the</text><text start="11448.31" dur="4.259">operating system might have to move the</text><text start="11449.959" dur="4.65">process off the CPU core in order to</text><text start="11452.569" dur="5.31">service the interrupt 
returning the</text><text start="11454.609" dur="5.67">process to the ready state or the</text><text start="11457.879" dur="4.5">process might make an i/o request in</text><text start="11460.279" dur="4.02">which case the process is moved to the</text><text start="11462.379" dur="4.471">waiting state while the system waits on</text><text start="11464.299" dur="6.06">the relatively slow IO device to provide</text><text start="11466.85" dur="5.399">the requested data once IO is complete</text><text start="11470.359" dur="3.781">the process is moved back to the ready</text><text start="11472.249" dur="5.51">state so that it can be scheduled to run</text><text start="11474.14" dur="6.809">again whenever a CPU core becomes free</text><text start="11477.759" dur="7.66">upon exiting the process is moved to the</text><text start="11480.949" dur="6.54">terminated state for cleanup the</text><text start="11485.419" dur="4.71">mechanism for process creation is</text><text start="11487.489" dur="4.56">platform dependent I will be</text><text start="11490.129" dur="4.41">introducing process creation on a</text><text start="11492.049" dur="6.511">unix-like platform such as Linux or Mac</text><text start="11494.539" dur="6">OS X on these platforms all processes</text><text start="11498.56" dur="4.409">descend from a single parent process</text><text start="11500.539" dur="6.481">that is created by the kernel at boot</text><text start="11502.969" dur="6.18">time on Linux this first process is</text><text start="11507.02" dur="5.339">called init which is the common UNIX</text><text start="11509.149" dur="5.61">name for the first created process Apple</text><text start="11512.359" dur="6.48">decided to call this process launchd on</text><text start="11514.759" dur="7.591">Mac OS X by convention the initial</text><text start="11518.839" dur="5.79">process always has a process ID of 1 the</text><text start="11522.35" dur="4.469">initial process must also remain alive</text><text 
start="11524.629" dur="4.32">for the entire time the system is up and</text><text start="11526.819" dur="5.52">running otherwise the whole computer</text><text start="11528.949" dur="5.46">crashes with a kernel panic the init</text><text start="11532.339" dur="5.16">or launchd process is started by the</text><text start="11534.409" dur="5.401">kernel at boot time every other process</text><text start="11537.499" dur="6.75">on the system is a child of this special</text><text start="11539.81" dur="7.619">process child processes on UNIX are</text><text start="11544.249" dur="5.521">created by forking a parent process the</text><text start="11547.429" dur="4.68">parent process makes a system call named</text><text start="11549.77" dur="5.429">fork which makes a copy of the parent</text><text start="11552.109" dur="5.161">process this copy which is initially a</text><text start="11555.199" dur="3.391">clone of the parent is called the child</text><text start="11557.27" dur="4.08">process</text><text start="11558.59" dur="5.519">it is up to the parent process to</text><text start="11561.35" dur="5.67">determine what if any resources it will</text><text start="11564.109" dur="5.37">share with the child process by default</text><text start="11567.02" dur="4.62">the parent process shares any open file</text><text start="11569.479" dur="4.441">descriptors network connections and</text><text start="11571.64" dur="5.639">other resources apart from the CPU and</text><text start="11573.92" dur="5.67">memory with the child however the</text><text start="11577.279" dur="4.83">program code can close or reassign</text><text start="11579.59" dur="5.899">resources in the child making the child</text><text start="11582.109" dur="6.12">completely independent of the parent</text><text start="11585.489" dur="4.811">once the child process is forked the</text><text start="11588.229" dur="4.05">child becomes an independent instance of</text><text start="11590.3" dur="4.649">the program 
which can be scheduled to</text><text start="11592.279" dur="5.311">run in parallel with the parent however</text><text start="11594.949" dur="4.981">the parent process can be coded to wait</text><text start="11597.59" dur="5.34">on the child process to finish executing</text><text start="11599.93" dur="5.04">before the parent proceeds furthermore</text><text start="11602.93" dur="5.639">the parent process is able to terminate</text><text start="11604.97" dur="5.309">the child process at any time on some</text><text start="11608.569" dur="3.51">systems termination of the parent</text><text start="11610.279" dur="4.95">process will terminate all child</text><text start="11612.079" dur="6.12">processes automatically on other systems</text><text start="11615.229" dur="6.121">child processes become orphan processes</text><text start="11618.199" dur="5.67">whenever the parent terminates unix-like</text><text start="11621.35" dur="6.21">systems including Linux have the ability</text><text start="11623.869" dur="5.971">to support both models any process</text><text start="11627.56" dur="4.5">including a child process has the</text><text start="11629.84" dur="4.859">ability to load a different program into</text><text start="11632.06" dur="4.65">its memory space this loading is</text><text start="11634.699" dur="4.891">accomplished via the exec system call</text><text start="11636.71" dur="5.04">which replaces the entire program code</text><text start="11639.59" dur="5.34">of the process with the program code</text><text start="11641.75" dur="5.34">from a different program new programs on</text><text start="11644.93" dur="4.679">unix-like systems are started by forking</text><text start="11647.09" dur="7.649">an existing program then exec&amp;#39;ing the</text><text start="11649.609" dur="7.17">new program in the child process when</text><text start="11654.739" dur="3.84">multiple processes are executing on the</text><text start="11656.779" dur="3.691">same system they have the 
ability to</text><text start="11658.579" dur="3.981">execute independently or share</text><text start="11660.47" dur="4.259">information between themselves an</text><text start="11662.56" dur="3.969">independent process is completely</text><text start="11664.729" dur="4.68">separate from other processes in the</text><text start="11666.529" dur="5.311">system its execution is not affected by</text><text start="11669.409" dur="4.83">other processes and it cannot affect</text><text start="11671.84" dur="3.899">other processes as long as the operating</text><text start="11674.239" dur="5.311">system is designed and implemented</text><text start="11675.739" dur="5.941">correctly alternatively processes could</text><text start="11679.55" dur="4.529">share information between themselves and</text><text start="11681.68" dur="4.229">thereby affect each other when this</text><text start="11684.079" dur="5.101">occurs we say that the processes are</text><text start="11685.909" dur="5.611">cooperating cooperating processes may be</text><text start="11689.18" dur="3.841">used for a variety of reasons including</text><text start="11691.52" dur="3.661">information sharing</text><text start="11693.021" dur="4.889">implementing high-performance parallel</text><text start="11695.181" dur="5.639">computation increasing the modularity of</text><text start="11697.91" dur="4.261">a program implementation or simply for</text><text start="11700.82" dur="5.25">convenience when implementing certain</text><text start="11702.171" dur="6.21">designs in part 2 of this lecture I will</text><text start="11706.07" dur="6.38">provide additional detail about process</text><text start="11708.381" dur="4.069">forking and executing new programs</text><text start="11713.949" dur="4.5">in this lecture I will continue the</text><text start="11716.14" dur="5.309">discussion of processes by introducing</text><text start="11718.449" dur="5.611">the fork exec and wait system calls</text><text start="11721.449" 
dur="6.74">I will provide examples of using these</text><text start="11724.06" dur="4.129">system calls in both C and Python</text><text start="11728.939" dur="5.65">processes are created on unix-like</text><text start="11731.379" dur="6.24">systems such as Linux by using the fork</text><text start="11734.589" dur="6.12">system call this call is available as a</text><text start="11737.619" dur="7.2">C function by including the unistd.h</text><text start="11740.709" dur="9.351">header file in Python access to fork is</text><text start="11744.819" dur="8.76">provided by importing the os module in</text><text start="11750.06" dur="6.399">this C example I fork a child process</text><text start="11753.579" dur="6.481">that simply prints a message my code</text><text start="11756.459" dur="6.721">begins by including the stdio.h and</text><text start="11760.06" dur="6.96">unistd.h header files for the printf and</text><text start="11763.18" dur="6.119">fork functions respectively inside my</text><text start="11767.02" dur="5.609">main function I declare a variable named</text><text start="11769.299" dur="5.19">pid of integer type I then print a</text><text start="11772.629" dur="5.73">message stating that I have not yet</text><text start="11774.489" dur="6">forked the parent process I call the</text><text start="11778.359" dur="3.93">fork function without arguments and I</text><text start="11780.489" dur="4.861">assign its return value to the pid</text><text start="11782.289" dur="5.28">variable upon completion of the fork</text><text start="11785.35" dur="5.519">function I will have two copies of my</text><text start="11787.569" dur="6.72">program running at the same time in the</text><text start="11790.869" dur="5.13">first copy where I called fork the value</text><text start="11794.289" dur="5.13">of the pid variable will be set to the</text><text start="11795.999" dur="5.76">process ID of the child process in my</text><text start="11799.419" dur="6.721">child 
process the value of the pid</text><text start="11801.759" dur="6.6">variable will be set to zero my child</text><text start="11806.14" dur="5.279">process prints a message stating that it</text><text start="11808.359" dur="5.06">is the child my parent process prints a</text><text start="11811.419" dur="5.07">message stating that it is the parent</text><text start="11813.419" dur="5.23">both messages will be printed since the</text><text start="11816.489" dur="6">value of the pid variable will be zero</text><text start="11818.649" dur="5.73">only in the child process however the</text><text start="11822.489" dur="4.5">order in which the two messages are</text><text start="11824.379" dur="7.891">printed is not guaranteed and may vary</text><text start="11826.989" dur="7.56">between program runs the equivalent</text><text start="11832.27" dur="4.319">Python code is a bit shorter but looks</text><text start="11834.549" dur="4.381">rather similar owing to the fact that</text><text start="11836.589" dur="6.091">Python is exposing the underlying C</text><text start="11838.93" dur="7.739">library to us to use the fork function I</text><text start="11842.68" dur="4.86">need to import the os module I then set</text><text start="11846.669" dur="3.391">the pid variable</text><text start="11847.54" dur="6">by calling os.fork in a manner</text><text start="11850.06" dur="5.669">similar to the corresponding C code upon</text><text start="11853.54" dur="5.55">receiving this call the entire Python</text><text start="11855.729" dur="5.701">interpreter process will be cloned the</text><text start="11859.09" dur="4.2">pid variable inside my script in the</text><text start="11861.43" dur="4.44">parent interpreter process will contain</text><text start="11863.29" dur="5.279">the process ID of the child Python</text><text start="11865.87" dur="4.62">interpreter process in the child</text><text start="11868.569" dur="5.571">interpreter the pid variable in my</text><text 
start="11870.49" dur="3.65">Python script will have the value 0</text><text start="11874.529" dur="4.75">since the value of pid is 0 in the child</text><text start="11877.359" dur="4.861">process the child message will be</text><text start="11879.279" dur="5.761">printed in the parent process where the</text><text start="11882.22" dur="5.849">value of pid is not 0 the parent message</text><text start="11885.04" dur="4.95">will be printed both messages will</text><text start="11888.069" dur="3.571">appear in the output of the script but</text><text start="11889.99" dur="7.11">the order of the messages is not</text><text start="11891.64" dur="7.41">guaranteed and may change while we</text><text start="11897.1" dur="4.32">sometimes fork processes in order to</text><text start="11899.05" dur="3.96">create parallel programs a more common</text><text start="11901.42" dur="4.649">use of forking is to start another</text><text start="11903.01" dur="5.91">program we can replace a currently</text><text start="11906.069" dur="5.551">running process with another program by</text><text start="11908.92" dur="4.38">using the exec system call which is made</text><text start="11911.62" dur="5.94">available through a variety of functions</text><text start="11913.3" dur="6.33">in C and Python one important thing to</text><text start="11917.56" dur="3.9">remember about the exec system call is</text><text start="11919.63" dur="3.84">that it completely replaces the</text><text start="11921.46" dur="5.55">currently running program with the new</text><text start="11923.47" dur="6.809">program a common error is to try to put</text><text start="11927.01" dur="5.73">program code after a call to exec in an</text><text start="11930.279" dur="5.12">attempt to perform some other operation</text><text start="11932.74" dur="6.329">after the external program is finished</text><text start="11935.399" dur="6.821">however any such code placed after the</text><text start="11939.069" dur="5.191">call to
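The fork pattern described above can be sketched in Python as follows (a minimal sketch; the messages and the call to os.waitpid, which reaps the child and is covered later in this lecture, are illustrative rather than the exact code from the lecture slides):

```python
import os

# fork returns twice: the child process ID in the parent, and 0 in the child
pid = os.fork()

if pid == 0:
    # Child branch: pid is 0 here
    print("I am the child")
    os._exit(0)  # exit the child without running the rest of the script
else:
    # Parent branch: pid holds the child process ID
    os.waitpid(pid, 0)  # reap the child so it does not linger as a zombie
    print("I am the parent of child", pid)
```

Because this sketch makes the parent wait before printing, the child message reliably appears first here; without the wait, the ordering would be unpredictable, as noted above.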
exec will never execute since</text><text start="11942.22" dur="7.95">that code gets replaced along with the rest of</text><text start="11944.26" dur="8.07">the original program now in practice the</text><text start="11950.17" dur="4.68">exec system call is implemented as a</text><text start="11952.33" dur="5.46">collection of functions not as a single</text><text start="11954.85" dur="4.83">function these functions differ in</text><text start="11957.79" dur="5.18">whether or not they take arguments as a</text><text start="11959.68" dur="6.54">fixed set of parameters or as an array</text><text start="11962.97" dur="5.65">they also vary by allowing the operating</text><text start="11966.22" dur="5.07">system to search the system path to find</text><text start="11968.62" dur="4.77">the new program versus requiring the</text><text start="11971.29" dur="5.399">absolute path to the new program to be</text><text start="11973.39" dur="5.52">provided furthermore some of the exec</text><text start="11976.689" dur="4.641">functions allow environment variables to</text><text start="11978.91" dur="4.62">be set for the new program</text><text start="11981.33" dur="4.69">regardless of the version of exec that</text><text start="11983.53" dur="4.74">is chosen a standard convention on</text><text start="11986.02" dur="4.709">unix-like systems is that the first</text><text start="11988.27" dur="5.16">argument to a newly loaded program is</text><text start="11990.729" dur="6.181">the name of the program itself exactly</text><text start="11993.43" dur="5.58">as the user entered it this convention</text><text start="11996.91" dur="4.489">allows a single program to be known in</text><text start="11999.01" dur="4.41">the system by a variety of names</text><text start="12001.399" dur="4.391">possibly allowing it to implement</text><text start="12003.42" dur="5.63">different behaviors depending upon the</text><text start="12005.79" dur="3.26">name by which it was invoked</text><text 
start="12009.49" dur="5.611">as I mentioned there are quite a few</text><text start="12011.38" dur="6">variations in exec functions in the C</text><text start="12015.101" dur="5.429">programming language versions with a</text><text start="12017.38" dur="5.37">lowercase L in the name take a fixed</text><text start="12020.53" dur="4.41">number of arguments where the final</text><text start="12022.75" dur="5.91">argument is always a null pointer of</text><text start="12024.94" dur="5.55">character type alternatively versions with</text><text start="12028.66" dur="6.12">a lowercase V in the name take a</text><text start="12030.49" dur="6.66">vector or array of arguments if a</text><text start="12034.78" dur="4.111">lowercase P appears in the name the</text><text start="12037.15" dur="5.25">operating system will search the system</text><text start="12038.891" dur="5.849">path to find the new program versions with</text><text start="12042.4" dur="6.84">a lowercase E allow environment</text><text start="12044.74" dur="6.901">variables to be modified the</text><text start="12049.24" dur="5.31">corresponding Python versions of exec</text><text start="12051.641" dur="4.71">available in the OS module follow the</text><text start="12054.55" dur="4.86">same conventions with respect to the</text><text start="12056.351" dur="5.46">lettering however it is not necessary to</text><text start="12059.41" dur="7.111">pass a null pointer at the end of the L</text><text start="12061.811" dur="7.109">versions now let&amp;#39;s take a look at a</text><text start="12066.521" dur="4.469">simple C application that forks a child</text><text start="12068.92" dur="4.88">that execs the /bin/echo</text><text start="12070.99" dur="5.13">program installed on the system</text><text start="12073.8" dur="5.23">nothing has changed about the fork</text><text start="12076.12" dur="5.311">operation the pid value returned to the</text><text start="12079.03" dur="4.2">child is always zero while the 
value</text><text start="12081.431" dur="5.639">returned to the parent is always non</text><text start="12083.23" dur="7.14">zero however in the child we are now</text><text start="12087.07" dur="7.111">using execl to load the /bin/echo</text><text start="12090.37" dur="6.03">program notice that /bin/echo</text><text start="12094.181" dur="4.559">is both the name of the program and the</text><text start="12096.4" dur="4.5">first argument to the program the</text><text start="12098.74" dur="6.18">message to be printed hello</text><text start="12100.9" dur="6.3">follows finally we must terminate the</text><text start="12104.92" dur="4.23">parameter list with a null pointer which</text><text start="12107.2" dur="6.63">needs to be cast to a character pointer</text><text start="12109.15" dur="6.781">type when the child calls execl its</text><text start="12113.83" dur="4.351">copy of the program code is completely</text><text start="12115.931" dur="6.119">replaced by the code for /bin/echo</text><text start="12118.181" dur="5.579">thus the child never prints the</text><text start="12122.05" dur="5.221">final message in the program</text><text start="12123.76" dur="6.08">the message is thus printed only once by</text><text start="12127.271" dur="2.569">the parent</text><text start="12131.07" dur="5.4">the Python version of this code is</text><text start="12133.74" dur="5.04">shorter but otherwise similar there is</text><text start="12136.47" dur="6.21">no need nor is there any practical way</text><text start="12138.78" dur="7.5">to pass a null pointer to os.execl at</text><text start="12142.68" dur="5.67">the end of the parameter list as is the</text><text start="12146.28" dur="4.08">case with the C version of the code only</text><text start="12148.35" dur="4.41">the parent prints the message at the</text><text start="12150.36" dur="4.26">bottom the copy of the Python</text><text start="12152.76" dur="4.62">interpreter and script 
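The fork and exec pattern being discussed can be sketched in Python as follows (a sketch that assumes a /bin/echo program exists, as on typical Linux systems):

```python
import os

pid = os.fork()

if pid == 0:
    # Child: replace the Python interpreter with /bin/echo.
    # By convention the first argument is the program name itself;
    # unlike the C execl, no trailing null pointer is passed.
    os.execl("/bin/echo", "/bin/echo", "hello")
    # This line is never reached: execl replaced the program image.
else:
    os.waitpid(pid, 0)  # reap the child after it finishes
    print("only the parent reaches this line")
```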
that was created</text><text start="12154.62" dur="5.55">by fork is completely replaced by the</text><text start="12157.38" dur="7.08">call to os.execl with the code for the</text><text start="12160.17" dur="6.66">/bin/echo program thus the</text><text start="12164.46" dur="4.561">child is no longer executing the Python</text><text start="12166.83" dur="7.8">interpreter and the final line of code</text><text start="12169.021" dur="7.469">is never run in the child we have seen</text><text start="12174.63" dur="5.01">that we can execute another program</text><text start="12176.49" dur="5.19">using exec however what if we would like</text><text start="12179.64" dur="5.28">to take some other action after the</text><text start="12181.68" dur="5.25">other program executes in that case we</text><text start="12184.92" dur="4.08">can use a version of the wait system</text><text start="12186.93" dur="5.64">call in the parent to wait for the child</text><text start="12189" dur="5.55">to terminate before moving to example</text><text start="12192.57" dur="5.31">code for wait take a moment to note that</text><text start="12194.55" dur="6.2">in some cases the parent process may</text><text start="12197.88" dur="5.58">terminate before the child process on a</text><text start="12200.75" dur="4.72">Linux system termination of the parent</text><text start="12203.46" dur="6.87">before the child creates an orphan child</text><text start="12205.47" dur="7.01">process some programs are intentionally</text><text start="12210.33" dur="5.16">designed to create orphan processes</text><text start="12212.48" dur="5.47">these programs are engineered to run in</text><text start="12215.49" dur="4.83">the background performing various system</text><text start="12217.95" dur="6.51">services and these are known as daemons</text><text start="12220.32" dur="6.42">or service processes by convention the</text><text start="12224.46" dur="6.54">word daemon uses its archaic spelling</text><text 
start="12226.74" dur="6.72">here with an AE in place of the E in the</text><text start="12231" dur="5.46">modern spelling however the</text><text start="12233.46" dur="4.02">pronunciation is the same &amp;#39;day-mon&amp;#39; is</text><text start="12236.46" dur="4.55">incorrect</text><text start="12237.48" dur="7.05">even though many people mispronounce it</text><text start="12241.01" dur="5.74">now the C code illustrating the act of</text><text start="12244.53" dur="5.61">waiting on a child process to terminate</text><text start="12246.75" dur="5.37">barely fits on one slide in order to get</text><text start="12250.14" dur="4.381">access to the waitpid function on</text><text start="12252.12" dur="6.96">Linux I need to include the</text><text start="12254.521" dur="7.409">sys/wait.h header file in this example</text><text start="12259.08" dur="5.101">I fork a child process that sleeps for</text><text start="12261.93" dur="4.86">five seconds before exiting</text><text start="12264.181" dur="5.309">in my parent code I announce that the</text><text start="12266.79" dur="4.91">parent is waiting then I call waitpid</text><text start="12269.49" dur="5.43">to wait for the child to terminate in</text><text start="12271.7" dur="6.34">this simple example I need to pass three</text><text start="12274.92" dur="5.25">arguments to waitpid the process ID of</text><text start="12278.04" dur="5.271">the child which fork returned to the</text><text start="12280.17" dur="7.74">parent above followed by a null pointer</text><text start="12283.311" dur="7.359">followed by the number 0 the waitpid</text><text start="12287.91" dur="6.42">function blocks or stops and waits until</text><text start="12290.67" dur="5.881">the child process terminates once</text><text start="12294.33" dur="5.67">termination occurs the parent prints</text><text start="12296.551" dur="5.909">another message the Python version of</text><text start="12300" dur="5.101">this example is considerably shorter 
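A Python sketch of this waiting pattern, mirroring the C example just described (the sleep is shortened to one second here for illustration):

```python
import os
import time

pid = os.fork()

if pid == 0:
    # Child: sleep briefly, then terminate normally with status 0.
    time.sleep(1)
    os._exit(0)

# Parent: announce, then block until the child terminates.
print("parent waiting")
waited_pid, status = os.waitpid(pid, 0)
print("child has terminated")
```

Here os.waitpid returns both the process ID of the terminated child and an encoded status value that the os.WIFEXITED and os.WEXITSTATUS helpers can unpack.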
but</text><text start="12302.46" dur="6.24">once again it uses essentially the same</text><text start="12305.101" dur="5.7">calls in addition to the OS module I</text><text start="12308.7" dur="5.641">need to import the time module to get</text><text start="12310.801" dur="5.189">access to the sleep function the major</text><text start="12314.341" dur="3.75">difference in this code from the</text><text start="12315.99" dur="4.26">corresponding C code is that I do not</text><text start="12318.091" dur="6.389">pass a null pointer as the second</text><text start="12320.25" dur="6.36">argument to os.waitpid I skip</text><text start="12324.48" dur="4.25">that argument and simply pass a 0</text><text start="12326.61" dur="5.031">instead</text><text start="12328.73" dur="5.46">continuing our discussion of process management I will</text><text start="12331.641" dur="7.29">discuss process contexts context</text><text start="12334.19" dur="6.84">switches and process scheduling each</text><text start="12338.931" dur="3.96">process running on a system has a</text><text start="12341.03" dur="4.83">certain amount of information associated</text><text start="12342.891" dur="4.979">with it a minimal set of state</text><text start="12345.86" dur="4.35">information that allows a process to be</text><text start="12347.87" dur="6.72">stopped and later restarted is called</text><text start="12350.21" dur="6.54">process context process context includes</text><text start="12354.59" dur="4.95">the current contents of CPU registers</text><text start="12356.75" dur="5.73">the current program counter value for</text><text start="12359.54" dur="7.891">the process and the contents of RAM the</text><text start="12362.48" dur="7.56">process is using switching between</text><text start="12367.431" dur="3.659">processes on a system is often called a</text><text start="12370.04" dur="3.54">context switch</text><text start="12371.09" dur="6.27">although process switch is a more</text><text start="12373.58" 
dur="6.41">precise term the operating system can</text><text start="12377.36" dur="5.67">perform a context switch from a process</text><text start="12379.99" dur="5.5">into a section of the kernel and then</text><text start="12383.03" dur="5.6">back to the same process without</text><text start="12385.49" dur="5.64">actually performing a process switch</text><text start="12388.63" dur="5.77">context switches require at least one</text><text start="12391.13" dur="5.46">mode switch to perform since the CPU</text><text start="12394.4" dur="6.18">must enter supervisor mode to enter the</text><text start="12396.59" dur="5.67">kernel relatively speaking context</text><text start="12400.58" dur="5.01">switches are a fairly expensive</text><text start="12402.26" dur="6.71">operation and frequent context switching</text><text start="12405.59" dur="5.85">will reduce system performance</text><text start="12408.97" dur="4.9">whenever the operating system needs to</text><text start="12411.44" dur="4.831">switch from one process to another it</text><text start="12413.87" dur="5.46">must first make a context switch into</text><text start="12416.271" dur="5.16">the kernel the kernel then saves the</text><text start="12419.33" dur="4.68">state of the previously running process</text><text start="12421.431" dur="5.67">and restores the state of the process to</text><text start="12424.01" dur="5.04">which it is switching a second context</text><text start="12427.101" dur="7.279">switch is then required to start the</text><text start="12429.05" dur="7.83">newly restored process on UNIX systems</text><text start="12434.38" dur="6.52">processes are created via the fork</text><text start="12436.88" dur="6.301">system call following creation the CPU</text><text start="12440.9" dur="6.09">scheduler determines when and where the</text><text start="12443.181" dur="6.149">process will be run once the CPU core is</text><text start="12446.99" dur="7.441">selected the dispatcher starts 
the</text><text start="12449.33" dur="8.07">process whenever a process makes an i/o</text><text start="12454.431" dur="5.369">request a system call into the kernel is</text><text start="12457.4" dur="4.741">made which removes the process from</text><text start="12459.8" dur="5.821">execution while waiting for the i/o to</text><text start="12462.141" dur="6.15">complete the process yields any CPU</text><text start="12465.621" dur="4.619">cores it is currently using so that</text><text start="12468.291" dur="4.5">another process can use those resources</text><text start="12470.24" dur="7.08">while the first process waits on the</text><text start="12472.791" dur="6.81">relatively slow i/o device operations</text><text start="12477.32" dur="4.59">that cause the process to yield the CPU</text><text start="12479.601" dur="5.96">cores and wait for some external event</text><text start="12481.91" dur="6.511">to occur are called blocking operations</text><text start="12485.561" dur="5.799">reading data from a file is an example</text><text start="12488.421" dur="5.1">of such an operation the process calls</text><text start="12491.36" dur="4.321">the read function and the read function</text><text start="12493.521" dur="5.31">does not return until it has read some</text><text start="12495.681" dur="6.12">data the process may have been moved off</text><text start="12498.831" dur="5.609">the CPU and then restored and returned</text><text start="12501.801" dur="8.79">to the CPU while waiting for the read</text><text start="12504.44" dur="9.21">function to return some processes are</text><text start="12510.591" dur="6">CPU bound which means they perform large</text><text start="12513.65" dur="6.33">computations with few i/o requests or</text><text start="12516.591" dur="5.13">other blocking operations in order to</text><text start="12519.98" dur="4.5">enable the system to service other</text><text start="12521.721" dur="5.79">processes and effectively implement</text><text 
start="12524.48" dur="5.84">multiprogramming the CPU scheduler must</text><text start="12527.511" dur="5.58">preempt these types of processes</text><text start="12530.32" dur="5.471">preemption involuntarily saves the</text><text start="12533.091" dur="5.49">process state removes the process from</text><text start="12535.791" dur="6.569">execution and allows another process to</text><text start="12538.581" dur="6.39">run interrupts from hardware devices may</text><text start="12542.36" dur="5.79">also preempt running processes in favor</text><text start="12544.971" dur="5.429">of kernel interrupt handlers without</text><text start="12548.15" dur="6.471">this type of preemption the system would</text><text start="12550.4" dur="4.221">appear to be unresponsive to user input</text><text start="12554.65" dur="5.561">now in order to store process state</text><text start="12557.841" dur="4.319">information the operating system must</text><text start="12560.211" dur="4.83">maintain data structures about each</text><text start="12562.16" dur="7.5">process these structures are called</text><text start="12565.041" dur="6.96">process control blocks or PCBs PCBs</text><text start="12569.66" dur="4.83">contain fields where information about a</text><text start="12572.001" dur="6.12">process can be saved whenever a process</text><text start="12574.49" dur="6.12">is moved off the CPU once again this</text><text start="12578.121" dur="4.59">information includes the contents of CPU</text><text start="12580.61" dur="6.3">registers and current program counter</text><text start="12582.711" dur="6.54">value the operating system stores</text><text start="12586.91" dur="6.921">process control blocks in linked list</text><text start="12589.251" dur="4.58">structures within kernel memory space</text><text start="12594.94" dur="5.73">here is a process control block in</text><text start="12597.49" dur="5.7">greater detail some of the information</text><text start="12600.67" 
dur="6.36">fields include process state the unique</text><text start="12603.19" dur="8.28">ID for the process the process program</text><text start="12607.03" dur="6.45">counter CPU register contents memory</text><text start="12611.47" dur="6.18">limits for regions of memory used by the</text><text start="12613.48" dur="8.041">process open file descriptors and other</text><text start="12617.65" dur="5.4">data when performing a process switch it</text><text start="12621.521" dur="5.669">is critical that the operating system</text><text start="12623.05" dur="6.24">saves the process&amp;#39;s CPU state at a</text><text start="12627.19" dur="4.17">theoretical minimum this state</text><text start="12629.29" dur="6.51">information includes the program counter</text><text start="12631.36" dur="6.93">and CPU register contents in practice</text><text start="12635.8" dur="7.53">more information will be saved during</text><text start="12638.29" dur="7.65">each process switch this diagram</text><text start="12643.33" dur="4.92">presents a simplified view of process</text><text start="12645.94" dur="4.951">switching where only the program counter</text><text start="12648.25" dur="5.851">and register contents are saved and</text><text start="12650.891" dur="6.179">restored here the operating system</text><text start="12654.101" dur="6.809">switches from the process with ID 1 to</text><text start="12657.07" dur="5.94">the process with ID 2 the first step in</text><text start="12660.91" dur="4.74">the process switch is to perform a mode</text><text start="12663.01" dur="4.5">switch into kernel mode along with the</text><text start="12665.65" dur="3.98">context switch to the section of the</text><text start="12667.51" dur="5.16">kernel that handles process switching</text><text start="12669.63" dur="5.71">that component of the kernel saves the</text><text start="12672.67" dur="5.48">program counter and CPU registers into</text><text start="12675.34" dur="6.66">the process control block 
for process 1</text><text start="12678.15" dur="7.03">also the process state for process 1 is</text><text start="12682" dur="5.79">set to ready indicating that the process</text><text start="12685.18" dur="6.06">is ready to run again once the CPU core</text><text start="12687.79" dur="6.3">becomes available the process switching</text><text start="12691.24" dur="5.64">code then restores the CPU register</text><text start="12694.09" dur="5.19">contents and program counter value from</text><text start="12696.88" dur="6.33">the process control block for process</text><text start="12699.28" dur="6.991">two the state of process 2 is changed</text><text start="12703.21" dur="5.12">from ready to running the CPU privilege</text><text start="12706.271" dur="5.269">level is returned to user mode and</text><text start="12708.33" dur="8.981">the context switch to process two completes</text><text start="12711.54" dur="7.66">process two now begins executing during</text><text start="12717.311" dur="4.169">the lifetime of a process the</text><text start="12719.2" dur="4.47">corresponding process control block is</text><text start="12721.48" dur="3.89">moved between various queues in the</text><text start="12723.67" dur="4.22">operating system</text><text start="12725.37" dur="4.98">each queue is a linked list of process</text><text start="12727.89" dur="7.68">control blocks and multiple linked lists</text><text start="12730.35" dur="8.04">overlap all PCBs are at all times in the</text><text start="12735.57" dur="5.24">job queue which is a linked list of all</text><text start="12738.39" dur="5.4">process control blocks in the system</text><text start="12740.81" dur="5.53">process control blocks corresponding to</text><text start="12743.79" dur="5.76">processes in the ready state are linked</text><text start="12746.34" dur="5.849">into the ready list processes waiting</text><text start="12749.55" dur="7.61">for device IO have their PCBs linked</text><text start="12752.189" dur="8.281">into 
various different device queues in</text><text start="12757.16" dur="7">this diagram we can see the PCBs for</text><text start="12760.47" dur="6.99">five different processes all PCBs are</text><text start="12764.16" dur="7.56">members of the job list with list links</text><text start="12767.46" dur="7.05">depicted by the green arrows three jobs</text><text start="12771.72" dur="6.15">are in the ready list and two jobs are</text><text start="12774.51" dur="5.25">in a device queue waiting on IO the</text><text start="12777.87" dur="5.21">linked lists within each queue are</text><text start="12779.76" dur="6.09">represented by the dark blue arrows</text><text start="12783.08" dur="5.319">notice that the two linked lists for the</text><text start="12785.85" dur="6.06">queues overlap the linked lists for the</text><text start="12788.399" dur="5.761">job list careful management of linked</text><text start="12791.91" dur="7.29">lists is a major requirement of</text><text start="12794.16" dur="7.05">operating system kernel code now I&amp;#39;d</text><text start="12799.2" dur="3.9">like to shift focus for a moment to</text><text start="12801.21" dur="4.85">mention scheduling since it is closely</text><text start="12803.1" dur="5.82">related to the linked list management in</text><text start="12806.06" dur="5.919">operating systems theory we typically</text><text start="12808.92" dur="6.39">divide scheduling into three types job</text><text start="12811.979" dur="7.5">scheduling medium-term scheduling and CPU</text><text start="12815.31" dur="6.75">scheduling job or long term scheduling</text><text start="12819.479" dur="6.181">refers to the selection of processes to</text><text start="12822.06" dur="6.06">place into the ready state this type of</text><text start="12825.66" dur="5.55">scheduler has a long interval typically</text><text start="12828.03" dur="4.92">on the order of seconds to minutes one</text><text start="12831.21" dur="4.17">of the clearest examples of 
job</text><text start="12832.95" dur="4.56">scheduling on modern systems occurs on</text><text start="12835.38" dur="5.34">high-performance computational clusters</text><text start="12837.51" dur="5.13">on which users submit jobs that may</text><text start="12840.72" dur="6.69">require hours to be scheduled and</text><text start="12842.64" dur="7.23">executed medium-term scheduling refers to the</text><text start="12847.41" dur="5.19">swapping of inactive processes out to</text><text start="12849.87" dur="4.141">disk and restoring swapped processes</text><text start="12852.6" dur="4.321">from disk</text><text start="12854.011" dur="7.139">many of these tasks are now part of the</text><text start="12856.921" dur="7.35">virtual memory subsystem CPU scheduling</text><text start="12861.15" dur="4.981">or short-term scheduling refers to the</text><text start="12864.271" dur="5.58">selection of processes from the ready</text><text start="12866.131" dur="5.729">list to run on CPU cores this type of</text><text start="12869.851" dur="8.219">scheduling will be the subject of future</text><text start="12871.86" dur="7.741">lectures one important operating system</text><text start="12878.07" dur="4.05">component related to short-term</text><text start="12879.601" dur="4.679">scheduling is the dispatcher which</text><text start="12882.12" dur="4.141">receives a process control block from</text><text start="12884.28" dur="3.931">the short-term scheduler and restores</text><text start="12886.261" dur="4.59">the context of the process</text><text start="12888.211" dur="5.369">the dispatcher also completes the</text><text start="12890.851" dur="4.979">context switch to the process by</text><text start="12893.58" dur="4.471">performing a mode switch into user mode</text><text start="12895.83" dur="4.92">and jumping to the instruction address</text><text start="12898.051" dur="5.179">specified by the newly restored program</text><text start="12900.75" 
dur="2.48">counter</text></transcript>