In this tutorial, we will go through the mdadm configuration of RAID 5 using 3 disks in Linux. I assume that you have 3 disks, /dev/sda, /dev/sdb and /dev/sdc, which you want to use in RAID 5. Each disk is partitioned into a single partition that spans the whole disk: /dev/sda1, /dev/sdb1 and /dev/sdc1.
We can now go through the step-by-step procedure to add these 3 disks into RAID 5 using mdadm commands.
1. Change the partition type to RAID type
You need to use the fdisk command to change the partition type of the participating disks. Type "t" to change a partition's type, then enter the hex code "fd" (Linux raid autodetect) to mark it for RAID.
# fdisk /dev/sda

The number of cylinders for this disk is set to 8355.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
These steps need to be repeated for the other disks, /dev/sdb and /dev/sdc.
2. Creating the RAID Group
Now, we need to add these 3 disks into a RAID group. This can be achieved using the command 'mdadm'.
The syntax for creating a RAID set is:
mdadm --create md-device --level=Y --raid-devices=Z devices
--level = the RAID level; options include linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, etc.
--raid-devices = the number of disks participating in the array
So, for our case we can create the RAID group “md0” as follows.
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
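After the create command returns, the kernel performs an initial sync of the array in the background. A quick, read-only way to check the array state and sync progress is to read /proc/mdstat; the small guard below is only a convenience, since the file exists only when the md driver is loaded:

```shell
# Show kernel RAID status; /proc/mdstat exists only when the md driver is loaded
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "md driver not loaded"
fi
```

For full details of a specific array, run "mdadm --detail /dev/md0", which reports the state, level and the role of each member disk.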
3. Format the RAID set md0
As with normal partitions, you have to create a filesystem on the RAID set before using it. Since we will mount it as ext3 below, we format it accordingly.

# mkfs.ext3 /dev/md0
4. Configuring mdadm.conf
This file contains the configuration for managing software RAID with mdadm. Some common tasks, such as assembling all arrays, can be simplified by describing the devices and arrays in this configuration file.
We can create this file using the mdadm command as follows.

# mdadm --detail --scan > /etc/mdadm.conf
# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 UUID=3aaa0122:29827cfa:5331ad66:ca767371
ARRAY : The ARRAY lines identify actual arrays. The second word on the line should be the name of the device where the array is normally assembled, such as /dev/md0. Subsequent words identify the array, or identify the array as a member of a group.
UUID : The value should be a 128-bit UUID in hexadecimal, with punctuation interspersed if desired. This must match the UUID stored in the superblock.
Metadata : The metadata format of the array. This is mainly recognized for compatibility with the output of mdadm -Es.
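For reference, a slightly fuller mdadm.conf might look like the sketch below. The DEVICE and MAILADDR lines are illustrative additions (both are standard mdadm.conf keywords); the ARRAY line is the one produced by the scan above.

```
# Scan these partitions for array members
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1
# Array definition produced by "mdadm --detail --scan"
ARRAY /dev/md0 metadata=1.2 UUID=3aaa0122:29827cfa:5331ad66:ca767371
# Mail failure notifications to this address
MAILADDR root
```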
5. Mount the RAID set
You can mount the newly created RAID group as normal partitions as follows.
a. Create the mount point
# mkdir /mnt/raid
b. Then add the following entry to /etc/fstab
/dev/md0 /mnt/raid ext3 defaults 1 2
c. Mount the RAID group
# mount /dev/md0 /mnt/raid
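To confirm the mount succeeded, a small check such as the following can be used. mountpoint is part of util-linux; /mnt/raid is the mount point created above.

```shell
# Report disk usage if /dev/md0 is mounted at /mnt/raid, otherwise say so
if mountpoint -q /mnt/raid 2>/dev/null; then
    df -h /mnt/raid
else
    echo "/mnt/raid is not mounted"
fi
```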
With that, the software RAID 5 configuration on our Linux machine is complete.
Finally, let's review the available RAID types:
1. Linear Mode RAID
In this RAID mode, more than one disk is added to the group, but data is written to the second disk only after the first disk is full. The only advantage of linear mode is a large filesystem; it provides neither data redundancy nor a performance gain.
2. RAID 0 (Striping)
In RAID 0, data is distributed evenly across the disks. This improves access speed, but there is no data redundancy, so the crash of a single disk causes data loss. The disks can be of unequal sizes.
3. RAID 1 (Mirroring)
In RAID 1, the disk data is duplicated onto a second disk. When one disk crashes, the second continues to function. After the bad disk is replaced, the data is automatically copied onto the newly added disk. Hence, RAID 1 provides data redundancy without a speed/performance enhancement. A limitation of RAID 1 is that the total RAID size equals that of the smallest disk in the RAID set; unlike RAID 0, the extra space on the larger device isn't used.
4. RAID 4
In RAID 4, striping (as in RAID 0) is combined with parity to provide both data redundancy and a performance improvement. It requires at least 3 disks: data is striped across the first two disks and parity (an error check) is written to the third disk. If any one data disk crashes, its data can be rebuilt from the parity. The limitation of RAID 4 is that every write to a data disk requires an update of the parity disk, which can become a bottleneck.
5. RAID 5
RAID 5 is an improvement over RAID 4: the parity data is striped across all the disks instead of living on a dedicated disk. Like RAID 4, however, it can survive the failure of only a single disk.
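The parity idea behind RAID 4 and RAID 5 can be sketched in a few lines of shell. This is an illustration of the principle, not of mdadm itself: the parity block is the bitwise XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```shell
# Illustrative RAID 4/5 parity sketch: parity = XOR of the data blocks
d1=170                      # data block on disk 1 (0xAA)
d2=204                      # data block on disk 2 (0xCC)
parity=$(( d1 ^ d2 ))       # parity block on disk 3

# Simulate losing disk 1 and rebuilding its block from the survivors
rebuilt=$(( parity ^ d2 ))
echo "$rebuilt"             # prints 170, the lost block
```

This is also why, with N equal-size disks, RAID 5 gives (N - 1) disks of usable capacity: one disk's worth of space is consumed by parity.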
It is always recommended to dedicate the whole disk to RAID rather than only some of its partitions, because if the disk fails, only the data in the RAID-member partitions can be recovered; data in any non-RAID partitions on that disk is lost.