Hello everybody! In this video, we'll be creating a software RAID on a Linux platform. In my previous videos, I have already shown you what RAID is, the types of such disk arrays, and how to build them. On our channel and blog, you will find solutions to all kinds of problems, from installing or configuring an operating system to fixing bugs and errors or optimizing mobile gadgets. Our specialists will answer any questions you ask in the comments under our videos and articles.

Alright, if you want to create a RAID system, you need at least a few hard drives. When building a RAID from several physical drives, pay special attention to using storage devices of the same capacity; ideally, they should also be of the same type and model. One more thing to decide before you start is which technology to use. There are three main options to choose from: hardware RAID, hybrid RAID (a combination of hardware and software elements), and software RAID. The first two require expensive RAID controllers and share one critical downside: if the RAID controller breaks down, restoring the array is only possible by replacing it with the same model, which, as I said, is going to cost you quite a lot. Much more than you would pay to replace one failed hard drive in a typical software RAID system.

In this video, we'll be building a software RAID on Ubuntu 20.04. To create and manage a RAID system, you need a special utility, mdadm, which can be installed with Synaptic or with the command sudo apt install mdadm. During the installation, you will be asked for settings required to manage existing or future arrays. To save yourself the trouble of going deeper into this topic, leave all settings at their default values. Check that the hard drives are present: cat /proc/partitions. Here are the drives.
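The checks above can be sketched as a short, non-destructive script. The install command and the /proc/partitions listing are the ones from the video; the presence check around mdadm is my own addition so the script is safe to run as an ordinary user.

```shell
# Check whether the mdadm utility is already installed; if not, show the
# install command used in the video (Debian/Ubuntu package name).
if command -v mdadm >/dev/null 2>&1; then
  MDADM_STATUS="mdadm is installed"
else
  MDADM_STATUS="mdadm not found - install it with: sudo apt install mdadm"
fi
echo "$MDADM_STATUS"

# The kernel's view of disks and partitions (read-only, safe to run):
cat /proc/partitions
```

lsblk gives the same information in a friendlier tree view, if you prefer it.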
One of them, marked as sda, is where the operating system is installed, and the others, sdb, sdc and sdd, are the ones that will make up the RAID system. Before the array is built, these drives need to be prepared: create a partition on each of them.

Type sudo fdisk /dev/sdb. Use the P command to view partitions. Create a new partition by typing N. Specify the partition type: primary (press P) or extended (press E). Partition number: 1. Leave the rest of the settings at their default values by pressing Enter twice. Now you can see that a new partition has been created; type P, and here is the partition I created, sdb1. To write the changes, press W and then Enter. The partition table has been modified and the kernel re-reads it.

Repeat the same steps for the other two drives. sudo fdisk /dev/sdc: check that the drive is empty by pressing P, create a new partition by typing N, specify the primary partition type with P, partition number 1, leave the other items at their defaults by pressing Enter twice, check that it was created successfully with P, and write the changes with W. Now the third drive: sudo fdisk /dev/sdd. Press P to check it, N to create a new partition, P to set it as primary, partition number 1, Enter, Enter, then P to check it and W to write the changes.

After all these actions, it is recommended to run sudo partprobe so the kernel re-reads the partition tables; this is especially relevant for Red Hat and CentOS. For Ubuntu it is not obligatory, as it is done automatically. Once more, check that the system has registered the partitions: cat /proc/partitions. All right, you can see everything was successful: the devices sdb1, sdc1 and sdd1 have appeared.

Now we can start creating the array: sudo mdadm --create /dev/md0 -a yes -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1, where -l 5 is the array level and -n 3 is the number of drives. Press Enter, and that's all: the RAID system has been created. Let's review the drives: cat /proc/partitions.
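The creation command can be assembled like this. The device names are the ones from the video; adjust them to your system. The sketch prints the command instead of executing it, because creating an array requires root and destroys any data on the member partitions.

```shell
# Member partitions prepared with fdisk (hypothetical names; adjust).
MEMBERS="/dev/sdb1 /dev/sdc1 /dev/sdd1"
LEVEL=5                               # RAID level (-l)
COUNT=$(echo $MEMBERS | wc -w)        # number of members (-n)

# -a yes auto-creates the /dev/md0 device node if it is missing.
CREATE_CMD="sudo mdadm --create /dev/md0 -a yes -l $LEVEL -n $COUNT $MEMBERS"
echo "$CREATE_CMD"                    # remove this echo to actually run it
```

Counting the members instead of hard-coding 3 keeps the -n value consistent if you later add or remove a partition from the list.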
At the bottom of the list, you can see the new device, md0. The last step is to format it with a file system: sudo mkfs.ext2 /dev/md0. The process begins: the table is saved, the superblocks are written; wait until it is completed. Ready? After that, let's mount the partition: sudo mount /dev/md0 /mnt, then df -h. Here is the mounted partition of 4 gigabytes.

To make sure it works properly, let's create a file there: dd if=/dev/zero of=/mnt/file bs=4096 count=100000. Here is the file we created; its size is 410 megabytes. Let's check the directory: go to /mnt with cd /mnt and type ls to display what's inside. Here is our 410-megabyte file, created as root. Check the capacity of our drives with df -h. As you can see, 415 megabytes are used.

You can check the current state of the RAID system with the file /proc/mdstat: type cat /proc/mdstat. Sometimes certain failures make the array inactive even though there are no drive errors; the drives are simply marked as inactive. There is nothing to worry about, though: just stop the array (sudo mdadm --stop /dev/md0) and then reassemble it (sudo mdadm --assemble --scan). If a serious failure struck your array, for example, too many drives crashed, the array will also get the inactive status. In this case, you can't make it operable again by reassembling; what is more, trying can even harm your drives. So be attentive, and if any issues arise, check the condition of the array and of all its components first.

In the end, don't forget to mount the file system, because this won't be done automatically when you restart the array. If your array is listed in /etc/fstab, you can usually complete the mount operation with the command sudo mount -a.

Sometimes there may be critical failures that become irreparable, for example, when two drives within a RAID 5 array crash. This makes the entire array inoperable, and at first sight it seems there is no way to recover the system.
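Where do dd's numbers come from? The file size is simply the block size multiplied by the block count, which dd then reports rounded to decimal megabytes. A quick arithmetic check, using the values from the video:

```shell
# dd if=/dev/zero of=/mnt/file bs=4096 count=100000 writes bs*count bytes.
BS=4096        # bytes per block
COUNT=100000   # number of blocks
BYTES=$((BS * COUNT))
MB=$(( (BYTES + 500000) / 1000000 ))   # rounded decimal MB, as dd prints it
echo "$BYTES bytes = about $MB MB"
```

That matches the roughly 410 megabytes shown by ls and df -h (df counts slightly more because of file-system overhead).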
But if this disaster happened to your array, first of all, check the status of all the RAID members with the command sudo mdadm -E /dev/sdc1, substituting each element of the RAID system in turn instead of sdc1. Pay special attention to the last block of output in each case. The second step should be checking the SMART values of all drives and running surface tests. It is important to make sure that the drives are physically sound and that there are no read errors. You can test the drives with a special disk utility available in Linux. Now, try reassembling the array: sudo mdadm --assemble --scan.

Summing up, software RAID on Linux lets you create disk arrays of several levels, using both physical drives and logical partitions. The functionality you get with this subsystem is quite sufficient to organize a data storage system that is good enough in terms of both reliability and performance.

And that is all for now. Hopefully, this video was useful. Remember to click the Like button and subscribe to our channel. Push the bell button to receive notifications and never miss new videos. Leave comments to ask questions. Thank you for watching. Good luck.
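The recovery checks above can be sketched as a loop over the members. These commands require root and real devices, so the sketch only prints them; drop the echo to execute them on your system. The device names are the hypothetical ones used throughout the video.

```shell
# Examine every member of the array, then attempt a reassembly.
MEMBERS="/dev/sdb1 /dev/sdc1 /dev/sdd1"
for dev in $MEMBERS; do
  echo "sudo mdadm -E $dev"          # -E / --examine: per-member metadata
done
echo "sudo mdadm --assemble --scan"  # reassemble from scanned superblocks
```

Running the examine step on every member before any reassembly attempt is what tells you whether the metadata on the surviving drives still agrees.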