Hello, and welcome to today's lecture on storage technology. This is the second lecture on this topic; in the last lecture we discussed magnetic disks, optical disks and magnetic tapes as storage devices. Today I shall focus on flash memory, the storage technology that has led to solid state disks. I shall discuss solid state disks, and then, to improve reliability and availability, we shall see how multiple disks can be used together, leading to what is known as RAID, a redundant array of independent disks. There are seven levels, RAID 0 through RAID 6; I shall discuss each of them, and I shall conclude the lecture by considering which one should be used in which situation. I have already discussed flash memory in the context of main memory. We have seen that flash memory is a form of electrically erasable programmable ROM (EEPROM); the only difference from conventional EEPROM is that flash memory allows an entire block to be erased or written in a single operation. As you know, it is based on floating-gate avalanche-injection MOS (FAMOS) technology. We have seen how electrons are trapped in a floating gate, as shown in this diagram; as electrons accumulate in the floating gate the threshold voltage shifts, and that is how reading and writing are done. For flash memory, reads can be done at the speed of dynamic RAM, that is, in the range of nanoseconds. Writing, on the other hand, is rather slow, taking on the order of milliseconds.
Write is a complex operation because electrons must be made to accumulate in the floating gate, which is why writing takes much longer. Flash memory has become very popular in recent years, particularly in embedded systems such as portable MP3 players, because of its low power consumption and fast reads. As I mentioned, write times are much slower than DRAM, but embedded applications like MP3 players are read-mostly: the device is primarily read and only rarely written. Flash is costlier than dynamic RAM, about six times so, and its price per megabit is about 6,000 times that of magnetic disk. However, prices have been falling, and that has led to the development of solid state disks. Most system designers, as you know, dream of replacing slow mechanical storage: magnetic storage is based on a platter driven by a motor, and as a consequence it is slow and not very reliable. Replacing this slow mechanical storage with fast, non-volatile memory is the dream of most system designers, and inexpensive solid state disks based on flash memory technology, and eventually storage class memory which has evolved from it, are bringing this dream closer to reality. Storage class memory has the important properties of being non-volatile, having short access time and low cost per bit, and being solid state: it has no moving parts. We have already discussed the use of flash memory in the context of main memory. The requirement for main memory is that it be fast, even if it is expensive and volatile.
The requirement for storage, on the other hand, is that it be cheap and non-volatile, even if slow. Storage class memory satisfies, more or less, both sets of requirements, and that is why solid state disks blur the distinction between memory and storage; in other words, they can serve both purposes. With this background, let us focus on how we can improve reliability and availability by using multiple disks together. The innovation that improves both the performance and the dependability of storage systems is the use of multiple disks together. This is not a new concept: whenever we want to improve availability we use multiple devices, so that if one fails another can serve the purpose, and for reliability we can add some redundancy across the disks. That is the basic idea behind using multiple disks. It gives you not only reliability and dependability but also better performance, because throughput can be increased by having multiple disks serve several pieces of a single data set in parallel. The distribution of data is done by a technique known as striping: different blocks of the same data set are placed on different disks. The apparent problem is that adding components to a system normally decreases dependability, because the number of things that can fail increases.
With n components instead of one, the possibility that some component fails obviously increases, so reliability in that sense decreases: roughly, the reliability of the system becomes 1/n of that of a single device. However, this holds only under the assumption that all devices must be operational for the system to work, and we do not use multiple disks in that manner, as we shall see. Better availability is achieved as follows: if a disk fails, then while it is being repaired, data can be obtained from the other disks. This is possible either because the data is fully replicated on other disks, so you simply read the duplicate copy, or because the data can be reconstructed from the data on the remaining, non-failed disks; by incorporating redundancy, the survivors contain enough information to regenerate what was lost. We shall see how this reconstruction is done. One important issue remains: what if another disk fails while the first is being repaired? A single fault can easily be tolerated, in the sense that availability and reliability are maintained through a single disk failure; but what happens if a second disk fails while the first is being regenerated from the others? Let us see whether this is a real concern by looking at the relevant figures.
MTTF, the mean time to failure, is a very important parameter here, and for disks this figure is quite large: the MTTF of a disk is tens of years, which means a particular disk does not fail every other day. Failure is uncommon and rare; it happens perhaps once in many years. The MTTR, the mean time to repair, on the other hand, is very low: the MTTR of a disk can be as low as a few hours, meaning that when a disk does fail, within a few hours its contents can be regenerated onto a new disk put in place of the faulty one. Since the mean time to failure is very long and the mean time to repair is only a few hours, the probability of a second disk failing during a repair is extremely low; statistically, two disks failing together will essentially not happen in practice. Now, there are two technical questions behind keeping MTTR small. First, how do you detect that a disk has failed? With n disks in the array, the question arises which one has failed and how to find it out. Fortunately, a failed disk announces itself: all disks return error information, so a flag is generated by the failing disk itself, and identifying the faulty disk poses no problem. Second, how do we make the MTTR as low as possible, just a few hours? That can be done in two ways.
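The claim that a double failure is extremely unlikely can be checked with a back-of-the-envelope calculation. The numbers below (a 10-year MTTF and an 8-hour MTTR) are illustrative assumptions in the spirit of the figures quoted above, not values from any datasheet, and the mean-time-to-data-loss formula is the standard first-order estimate for a mirrored pair:

```python
# Illustrative reliability arithmetic (assumed MTTF/MTTR values, not measured).
HOURS_PER_YEAR = 24 * 365

mttf_hours = 10 * HOURS_PER_YEAR   # assume a disk fails once in 10 years
mttr_hours = 8                     # assume repair/rebuild completes in 8 hours

# Probability that a given second disk fails during one repair window:
# failure rate ~ 1/MTTF, window length = MTTR.
p_second_failure = mttr_hours / mttf_hours

# First-order mean time to data loss for a mirrored pair: MTTF^2 / (2 * MTTR).
mtdl_years = mttf_hours ** 2 / (2 * mttr_hours) / HOURS_PER_YEAR

print(f"P(second failure during repair) ~ {p_second_failure:.2e}")
print(f"Mean time to data loss ~ {mtdl_years:,.0f} years")
```

Even with these conservative numbers, the chance of a second failure inside a repair window is below one in ten thousand, which is why single-fault tolerance is considered sufficient in most installations.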
One concept is known as hot spares: unused disks that can quickly replace a failed disk. A few disks are already mounted in the rack but are not being used; it is not that a spare disk has been kept away in a cupboard, the disk is already in the system. As soon as a disk fails, a hot spare can be made operational very quickly. The other technique is hot swapping: when a disk fails, it can be taken out and a new, good disk put in its place without shutting down the computer at all. These two techniques are used to keep the MTTR very small. Now we shall discuss the technique used to improve reliability and availability, known as a redundant array of inexpensive disks, RAID for short. The I in RAID is sometimes read as "independent" instead of "inexpensive"; either may be used, it does not matter. RAID is the standard way of using multiple disks to increase throughput and/or reliability, and it uses a RAID controller to make things transparent to the user or program. With plain hot spares or hot swapping, some manual intervention may be required, but with RAID there is no need for it: the controller, the electronics present in the system, automatically takes care of everything and makes it transparent to the user or to the program that is running.
Redundancy can deal with one or more failures; as we shall see, either a single failure or multiple failures can be tolerated. Each sector of a disk records check information that allows the disk to determine whether the sector has an error: some error-detecting code, a parity bit or a checksum or whatever it may be, is stored with each sector and is used to detect errors when reading it. When such an error occurs, we know that the disk has become faulty; the disk flags an error, and we turn elsewhere for the correct data. How that is done we shall see shortly. The first level is RAID 0. Although we use the word "redundant", in RAID level 0 there is no redundancy at all. The main thing done in RAID 0 is data striping across multiple disks: it uses a fixed stripe size, and this is transparent to users and programs. Suppose you have 8 disks; one stripe, usually called a block, is stored on each disk in turn. We have already seen that the minimum unit of data that can be accessed from a hard disk is a sector; a block in this context is a sector or, if we go for a bigger stripe size, multiple sectors, which is why we use the term block rather than sector. So block 0 is placed on the first disk, block 1 on the second, and so on, with block 8 wrapping around to the first disk again.
In this way data is distributed across the 8 disks. You may ask what purpose this serves. First, it gives the illusion of a single larger disk: since the arrangement is transparent, users feel that the capacity is 8 times that of one disk. Second, it gives higher throughput. As we know, reading from a disk involves several components of time: seek time, rotational latency and transfer time. Here the accesses proceed in parallel across the disks, and as a consequence you get high performance. RAID 0 is used routinely for high-performance computing applications like rendering and scientific computing. Ironically, although the term RAID is used, it has no redundancy; it simply provides high data transfer capability. It is "just a bunch of disks", which is the terminology sometimes used. This is RAID level 0, and the diagram shows how it is done: the user has the illusion of a single logical disk of large capacity, and an array management controller distributes the data across the physical disks. In this particular diagram there are four disks, disk 0 through disk 3; stripe 0 goes to disk 0, stripe 1 to disk 1, stripe 2 to disk 2, and so on, with the distribution done by the array management controller while the user sees one logical disk space. On a read, the array management controller fetches the stripes from the different disks and delivers the data to the user or the running program. That is RAID level 0 and how it really works. Now let us come to RAID level 1.
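The round-robin placement just described can be sketched as a small address-mapping function. The function name and the disk/stripe numbering are my own for illustration; a real controller does this mapping in hardware:

```python
# Sketch of RAID 0 striping: logical blocks are placed round-robin across disks
# with a fixed stripe size of one block, as described above.
def raid0_map(logical_block: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, block_within_disk) for a logical block number."""
    return logical_block % num_disks, logical_block // num_disks

# With 4 disks: blocks 0..3 land on disks 0..3, then block 4 wraps to disk 0.
for b in range(6):
    disk, offset = raid0_map(b, 4)
    print(f"logical block {b} -> disk {disk}, stripe {offset}")
```

Because consecutive logical blocks land on different disks, a large sequential read touches all disks at once, which is exactly where the throughput gain comes from.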
RAID 1 creates an exact copy of a set of data on two or more disks, which is called mirroring or shadowing. To improve reliability and availability, you mirror the data: if you store block B0 on one disk, you also store block B0 on the other. In the simple two-disk case, since you are mirroring, although you have two disks the usable capacity is that of a single disk. This is a very simple form of redundancy and was widely used in the early years: whenever data is written to one disk, it is also written to the other. Reading could in principle be done from either disk, but usually data is read from the main disk, not the mirror; only when the main disk fails is data read from the mirror disk, and in the meantime the data is regenerated onto a replacement for the failed disk. The system automatically gets data from the mirror while regeneration takes place on the other disk; this is how RAID level 1 works. As you can see, the redundancy here is maximal: you require double the number of disks. Because of this large disk requirement, RAID 1 gives higher availability but is expensive: to store x gigabytes of data, one needs to purchase 2x gigabytes of storage.
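The write-to-both, read-from-primary behaviour can be sketched in a few lines. The class below uses hypothetical in-memory dictionaries as "disks" purely for illustration; a real RAID 1 controller of course works at the device level:

```python
# Minimal sketch of RAID 1 mirroring with two in-memory "disks" (illustrative).
class Raid1:
    def __init__(self):
        self.disks = [dict(), dict()]   # primary copy and mirror copy
        self.failed = [False, False]

    def write(self, block: int, data: bytes):
        for d in self.disks:            # every write goes to both copies
            d[block] = data

    def read(self, block: int) -> bytes:
        # Serve from the first healthy copy; fall back to the mirror on failure.
        for i, d in enumerate(self.disks):
            if not self.failed[i]:
                return d[block]
        raise IOError("both copies have failed")

r = Raid1()
r.write(0, b"B0")
r.failed[0] = True      # the primary fails; the mirror still serves the data
print(r.read(0))        # b'B0'
```

Note that the failure of one copy is completely invisible to the caller of `read`, which is the transparency property the lecture attributes to the RAID controller.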
One optimization is possible: read from the disk whose arm has the shortest travel distance to the desired sector. As you have seen, seek time depends on the position of the arm, and since two disks store the same data, you can identify which arm is nearer to the referenced track and serve the access from that disk. A read request can thus be served by either disk, and recovery from failure is very simple, as I have already explained. That is RAID level 1. Coming to RAID level 2, let me start by saying that RAID 2 is not used in practice, but since it was defined in the original RAID document, we shall discuss it briefly for the sake of academic interest. It is expensive both in terms of disks and in terms of the controller. Why is it expensive? Because it uses the conventional technique of error-correcting codes (ECC) for regenerating data: as you can see, for 4 data disks you require 3 additional disks to store the redundant check information, so that data can be regenerated whenever one of the disks fails. I am not going into the details of error-correcting codes, which you may have studied, but the redundancy requirement is quite heavy, 3 additional disks for every 4 data disks, and that is why RAID 2 is not used. Moreover, most disks provide ECC by default nowadays, but not at the array level; it is done within the disk itself.
That is why error correction is not provided in this array-wide way; instead, error-correcting codes are used within each disk, because disks are a rather unreliable medium. Coming to RAID 3: here data is bit-interleaved across several disks, and a separate disk maintains parity information for each set of bits. You are using the concept of a parity bit for error detection and correction; actually, error detection is done by the disk itself, as I have already told you, since a failed disk raises a flag, so which disk has failed is known. From the remaining disks you can then regenerate the lost data using the parity bit; that is the basic idea of RAID level 3. For example, with 9 disks you store 8 bits of data on 8 disks, bit 0 on disk 0, bit 1 on disk 1, up to bit 7 on disk 7, and disk 8 stores the parity bit. For any read, all 9 disks must be accessed; this is a very important point to notice. Whether reading or writing, you must access all the disks: on a read, all 9 disks (or all 5, if you are using 5 disks) are read, because after reading the bits you must check whether they were read correctly by comparing against the parity bit; and on a write, all 9 disks must be accessed because the parity has to be recalculated. This gives you high throughput for a single request, since you read from multiple disks in parallel, and the redundancy overhead is much smaller, only 12.5 percent with 9 disks: one extra drive for every 8 data drives, and 1/8 is 12.5 percent.
Since bit interleaving distributes the data bit-wise, all reads and writes go to all disks, and the time to recover from a failure is long: if a disk fails, you must read all the other disks and recompute parity to regenerate its contents. Still, the wasted space is only 1/n, as I have already mentioned, compared with 1/2 in RAID level 1, and since failures are rare, RAID 3 is better than RAID 1 in most situations (and certainly better than RAID 2, where the redundancy is far heavier). The idea of using a parity bit for regenerating data is simple. The parity bit is defined as the XOR of the individual data bits on the disks, and from it one can reconstruct lost data. For example, with 5 disks, 4 for data and 1 for parity, let us assume the initial data bits are 0 1 1 0. The parity bit is easily calculated: using even parity, the parity of 0 1 1 0 is 0, so you store 0 on the parity disk. Now suppose we have lost a bit, say disk 1, which held the second data bit (value 1), has failed. You can reconstruct the data from the other disks together with the parity disk: take the XOR of the 3 remaining bits and the parity bit, that is 0 XOR 1 XOR 0 XOR 0, which gives 1, and indeed 1 was the bit that was present there. So you can easily reconstruct the lost data.
Suppose a different bit had been lost instead; in exactly the same way you XOR the surviving bits with the parity bit and recover the value, say 0, that was present there. In this way, bit-interleaved parity can easily be used to regenerate data whenever a particular disk fails. The same can be done at the block level. Suppose you have four 8-bit data blocks d1, d2, d3 and d4 stored on 4 different disks. You compute the parity p = d1 XOR d2 XOR d3 XOR d4, which is 11010000 for the data shown here. Now suppose that d3 has failed; you can regenerate its contents in the same way: d1 XOR d2 XOR d4 XOR p gives the data that was present on disk d3. So reconstruction works at the bit level and at the block level alike, and that is the reason why RAID level 4 moved from bit interleaving to block-interleaved parity. The idea is the same as RAID 3, but improved: in RAID 3 every read and write goes to all disks, because data is interleaved at the bit level, so to read any meaningful data you must access all the disks. That is not necessary with block interleaving: a read of a single block can be served from a single disk, and only large reads spanning several blocks need several disks. That is the idea used in RAID level 4. Data is striped in block fashion, so a small read involves only one disk, and an application can issue many small independent reads to multiple disks; this allows multiple reads for multiple applications at once.
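Both reconstruction examples above, the bit-level one and the block-level one, can be verified in a few lines. The helper function name is my own; the values echo the small examples from the lecture:

```python
from functools import reduce

# XOR equal-length blocks together byte by byte (even parity), as described above.
def xor_parity(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Bit-level example from the lecture: data bits 0 1 1 0, even parity 0.
bits = [0, 1, 1, 0]
parity = bits[0] ^ bits[1] ^ bits[2] ^ bits[3]          # -> 0
# Disk 1 (holding the bit value 1) fails; XOR the survivors with the parity:
recovered = bits[0] ^ bits[2] ^ bits[3] ^ parity        # -> 1
assert recovered == bits[1]

# Block-level example with four illustrative one-byte blocks and their parity.
d = [b"\x12", b"\x34", b"\x56", b"\x78"]
p = xor_parity(d)
# The disk holding d[2] fails: XOR the surviving blocks with the parity block.
rebuilt = xor_parity([d[0], d[1], d[3], p])
print(rebuilt == d[2])      # True
```

The same `xor_parity` routine serves for both computing the parity on a write and rebuilding a lost block on a failure; only the set of inputs changes.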
In a multiprogrammed environment, multiple users can read from multiple disks at the block level; that is why RAID 4 is more desirable than the bit-level scheme of RAID 3. However, the time needed to read a given volume of data becomes longer: block interleaving reduces the throughput of a single request, since only one disk drive serves it, so transferring all the data from a single disk takes longer. What improves is task-level parallelism, since the other disk drives are free to service other requests, which, as I have said, is helpful in a multiprogramming environment. The slide shows how the data is stored: blocks 0, 1, 2 and 3 on four disks with their block-level parity on a fifth, then blocks 4, 5, 6 and 7 on the four disks with the corresponding parity, and so on. That is RAID level 4. Now consider writes: when writing data, one must read disks and recompute the parity block, and this is a significant overhead. Suppose you have 4 data disks d0, d1, d2, d3 and a parity disk p, and you have new data d0' to write to disk 0. The straightforward approach is to compute the new parity from all the data: you read d1, d2 and d3, XOR them with the new d0', write the new parity to the parity disk, and write d0' to its disk. With 4 data disks, writing to one of them thus requires 3 reads followed by 2 writes.
The number of reads required is large; can it be reduced? Yes, there is a better scheme. You do not have to read all the other disks: to write d0' you read only the old d0 and the old parity. You compute the new parity as the XOR of the old parity, the old data d0 and the new data d0', and then write the new data and the new parity. In this case the number of reads is only 2. Compare: with 8 data disks, the previous scheme required reading 7 data disks, but with this enhanced scheme only 2 reads are required even with 8 disks, and it can be proved that you get exactly the same new parity as the naive scheme produces. So you require far fewer reads, and that is what is done in RAID level 4: a write touches only 2 disks, leaving the other n - 2 disks free for other operations, and the more disks there are, the larger the saving, as the 8-disk example shows. That is RAID level 4. Coming to RAID level 5: in RAID 4 we saw that all the parity blocks were stored on a single disk.
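The claim that the two parity-update schemes agree can be checked directly. The one-byte "blocks" and their values below are arbitrary, chosen only to exercise both paths:

```python
# The two ways to update parity on a RAID 4/5 small write, as described above.
def xor(*blocks):
    out = 0
    for b in blocks:
        out ^= b
    return out

d = [0x0A, 0x0B, 0x0C, 0x0D]       # contents of four data disks (illustrative)
p = xor(*d)                         # current parity block
new_d0 = 0xFF                       # new data destined for disk 0

# Naive scheme: read the other n-1 data disks, recompute parity from scratch.
p_naive = xor(new_d0, d[1], d[2], d[3])

# Optimized scheme: read only the old data and the old parity (2 reads):
#   new_parity = old_parity XOR old_data XOR new_data
p_fast = xor(p, d[0], new_d0)

print(p_naive == p_fast)   # True: both schemes yield the same parity
```

The equality holds because XORing the old data into the old parity cancels its contribution exactly, leaving room for the new data's contribution; this is the identity the lecture alludes to with "it can be proved that you get the same parity".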
That means every write must access that one parity disk. Suppose you are writing block B0 and block B5: you have to access disk D0 and disk D1, and you also have to access the parity disk twice, to modify both P0-3 and P4-7. The single parity disk thus becomes a bottleneck. This bottleneck is overcome in RAID level 5, where the parity blocks are distributed across the disks, with the rule that the parity for a stripe is never stored on the drive that holds one of that stripe's data blocks. For example, P0-3 is stored on disk 4, P4-7 on disk 3, P8-11 on disk 2, and so on, rotating. Now suppose you are writing B0 and B5: writing B0 means accessing disk D0 and the disk holding P0-3, while writing B5 means accessing disk D1 and the disk holding P4-7, and since these are four different disks, the two writes can proceed in parallel. The bottleneck arising in RAID level 4 is thus removed. The main advantage is that multiple concurrent writes are allowed, as I have explained, and that is why RAID level 5 is very popular and widely used in commercial systems. Our discussion would not be complete without considering RAID level 6. RAID 6 extends RAID 5 by adding an additional parity block. Why do we need an additional parity block? To take care of multiple failures. From RAID level 1 to RAID level 5 the schemes can tolerate a single disk failure, but there are situations where you must handle multiple failures; for example, a user, instead of replacing the faulty disk, pulls out a good disk by mistake, which leads to a double failure.
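The rotating parity placement described above can be captured in one function. Real controllers use several different rotation layouts, so take this as a sketch of one common convention (parity starting on the last disk and moving back one position per stripe, matching the P0-3 / P4-7 / P8-11 example above), not as *the* RAID 5 layout:

```python
# One common RAID 5 parity rotation: the parity disk moves back one position
# per stripe, starting from the last disk (layout conventions vary by controller).
def raid5_parity_disk(stripe: int, num_disks: int) -> int:
    """Disk index holding the parity block of the given stripe."""
    return (num_disks - 1 - stripe) % num_disks

# With 5 disks: stripe 0 -> disk 4, stripe 1 -> disk 3, stripe 2 -> disk 2, ...
for s in range(6):
    print(f"stripe {s}: parity on disk {raid5_parity_disk(s, 5)}")
```

Because the parity for consecutive stripes lands on different disks, two small writes to different stripes generally touch four distinct disks and can proceed concurrently, which is exactly the bottleneck removal discussed above.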
It is difficult to regenerate data when multiple failures occur, and to take care of that situation RAID level 6 is used, where two kinds of parity are maintained. One is row-oriented parity, the usual row parity: one disk stores the row parity, so the parity for the blocks on disks 0, 1, 2 and 3 in a row is stored on disk 4. In addition, you store another parity, known as diagonal parity. How is the diagonal parity calculated? It is computed over the elements along each diagonal; for example, diagonal 0 involves blocks of disks 0, 2, 3, 4 and 5, while diagonal 1 involves blocks of disks 0, 1, 3, 4 and 5, each diagonal excluding one disk. In this way the parity is calculated for the diagonal elements as numbered in the figure; note that with p disks, only p - 1 diagonal parities are computed. Now, suppose multiple disks fail, say disk 1 and disk 3 fail together. The simple row parity will not allow you to regenerate data on both, so in that situation you resort to the diagonal parity. If disks 1 and 3 fail, we find that diagonal 0 does not involve disk 1, so you can regenerate the disk 3 block on that diagonal using diagonal 0's surviving elements, because one of the faulty disks is excluded from it. Similarly, you can use, say, diagonal 2 to regenerate a block of disk 1. Alternating in this way between diagonal and row parities, you can reconstruct all the data even when two disks fail; that is the basic idea of RAID level 6.
If you want to tolerate more than two faults, you have to keep increasing the number of parity disks: to tolerate three faults you have to add yet another disk. But as we have already seen, the mean time to failure of a disk is very long, so although RAID level 6 has been proposed, it is rarely used; RAID level 5 satisfies most requirements, because a single failure is what occurs occasionally and a double failure is very rare. To recover from two failures you have to go for RAID level 6. Its drawback is that it spends twice as much storage on parity as RAID level 5, and more parity data needs to be updated on every write: you have to update not only the row parity but also the diagonal parity, and as you can see, generating the row parity and the diagonal parity for a block involves accessing several disks. That is why writing is complex and takes time, and that is the reason why RAID level 6, although it was proposed after RAID 5, is not commonly used. So now the question is: which one should be used in which situation? RAID level 1 is good when you need high fault tolerance with low overhead, but it wastes a lot of storage.
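The storage-overhead side of this tradeoff can be put into numbers with a quick sketch. This is a hypothetical helper, not a standard formula from any tool; the fractions simply follow the overheads discussed in the lecture — mirroring halves the capacity, RAID 3/4/5 give up one disk's worth of space to parity, and RAID 6 gives up two:

```python
def usable_fraction(level: int, n: int) -> float:
    """Fraction of raw capacity left for data in an n-disk array."""
    if level == 0:
        return 1.0            # striping only: no redundancy at all
    if level == 1:
        return 0.5            # mirroring: every block stored twice
    if level in (3, 4, 5):
        return (n - 1) / n    # one disk's worth of parity
    if level == 6:
        return (n - 2) / n    # row parity plus diagonal parity
    raise ValueError("level not modeled in this sketch")

for lvl in (0, 1, 5, 6):
    print(f"RAID {lvl}: {usable_fraction(lvl, 6):.0%} usable of 6 disks")
```

For a six-disk array this prints 100%, 50%, 83% and 67% respectively, which makes the comparison below concrete: RAID 5 pays a modest capacity price for single-failure tolerance, while RAID 1 pays the most.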
So, as we have seen, mirroring or shadowing can be used when you want very high fault tolerance, and it is used when parity is deemed insufficient: the other techniques are based on parity bits, but RAID level 1 is based on mirroring or shadowing instead, which is why it is the one chosen when parity protection is deemed insufficient. It is used for small databases or individual users, since RAID 1 controllers are cheaper, and it is the simplest. RAID level 2 is not commonly used, as I have already mentioned; it is based on error-correcting codes and is hardly used at all. RAID level 3 suits applications with large files that require high transfer performance; it uses bit-oriented interleaving, so basically these are applications that want RAID 0 striping, except that RAID 0 comes without fault tolerance. Then, coming to RAID level 4: it is a sort of compromise between RAID level 3 and RAID level 5, and it is not commonly used. RAID level 5, as I have already mentioned, is the most popular; it is seen as the best tradeoff between fault tolerance, storage space overhead and parallel transfers. It is, however, not good for write-intensive applications, because as we have seen, every write involves more overhead; in that case it is better to use RAID level 1. And RAID level 6, as I have already mentioned, is not really popular: it has a higher cost than RAID level 5, and the two-failure case is rare for disks, which is why RAID level 6 is not used. So this gives you a summary of the redundancy that is required. RAID level 0 has no redundancy. RAID level 1, where you use mirroring or shadowing, has the largest redundancy. RAID level 2 also has high redundancy: for 4 data disks you will require 3 additional redundant disks.
On the other hand, RAID levels 3, 4 and 5 all require a single additional disk to store the parity bits, and out of the three we have seen that RAID level 5 is the choice of the day because of its many good features. RAID level 6 requires two additional disks, one for the row parity and another for the diagonal parity; that one extra disk beyond RAID 5 is what buys tolerance of a double fault. However, as noted, it is rarely used. So this summarizes the various RAID levels; commercially, in disk storage systems, you will find that RAID level 5 in particular is commonly used. With this we have come to the end of storage technology, and in the next lecture we shall discuss some commercially available processors. Thank you.