RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It is a mechanism for combining two or more disks into a single logical device to provide fault tolerance and improved performance.
The different architectures in which these capabilities are provided are called RAID levels. The standard levels range from 0 to 6, with nested levels such as RAID 10 built on top of them.
This tutorial reviews the different RAID levels and shows how to configure RAID 1 using mdadm on Linux.
Before we start creating a Linux RAID 1 array, let's look at the different RAID levels:
RAID 0 (Disk Striping)
RAID 0 splits the data across two or more disks. The disks operate in parallel, which improves performance, but there is no redundancy: if one disk fails, the whole array is lost.
RAID 1 (Mirror)
In RAID 1, two disks hold identical data. Data is written simultaneously to both disks, so the two are mirror images of each other. RAID 1 provides fault tolerance: if one disk stops working, the data can be recovered from the copy on the other drive.
RAID 2 (Bit level striping with dedicated Hamming code parity)
The data is striped at the bit level rather than the byte or block level.
RAID 3 (Byte level striping with dedicated parity)
The striping is done at the byte level: data is stored so that sequential bytes land on different disks, with parity kept on a dedicated disk.
RAID 4 (Block level striping with dedicated parity)
It provides block level striping like RAID 0, with parity stored on a dedicated disk.
RAID 5 (Block level striping with distributed parity)
This is similar to RAID 4, but the parity information is distributed across all the disks instead of being kept on a dedicated one.
RAID 6 (Block level striping with double distributed parity)
As in RAID 5, data is striped at the block level and the parity information is distributed, but there are two parity blocks per stripe, so the array can survive the loss of any two disks.
RAID 10 (RAID 1 + 0)
Data is striped across mirrored pairs: RAID 1 mirrors are created first, and data is striped (RAID 0) over them. It requires at least four drives: two mirrored pairs.
Mdadm Create RAID 1
Disk mirroring can also be implemented in software. The standard tool for creating software RAID on Linux is mdadm.
The mdadm utility can be used to create, manage, and monitor MD (multi-disk) arrays for software RAID or multipath I/O. First of all, make sure mdadm is present in the Linux box. If it is not installed, install the package.
In Red Hat based Linux distributions (RHEL/Fedora/CentOS), which use the RPM packaging system:
# yum install mdadm
In Debian based distributions such as Ubuntu and Linux Mint:
$ sudo apt-get install mdadm
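Either way, a quick sketch of how to confirm the binary is actually on the PATH before proceeding (`command -v` exits non-zero when the program is not found):

```shell
# Check whether mdadm is available before running any array commands
if command -v mdadm >/dev/null 2>&1; then
    echo "mdadm is installed"
else
    echo "mdadm is missing - install it first"
fi
```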
Disk partitions for RAID Mirroring
We need two partitions of identical size to create RAID 1 with mdadm. If the two partitions are unequal, the usable size of the array will be that of the smaller one.
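A tiny illustration of that rule: the usable capacity of a RAID 1 pair is bounded by its smaller member. (`usable_size` is a hypothetical helper written for this example, not part of mdadm; sizes are in bytes.)

```shell
# Hypothetical helper: the usable capacity of a mirror is the
# smaller of its two member sizes
usable_size() {
    if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi
}

# Example: a 400 MB member paired with a 200 MB member
usable_size 411231744 205815808   # → 205815808
```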
So now we create two partitions of type Linux raid autodetect with the fdisk command:
# fdisk /dev/sda

The number of cylinders for this disk is set to 2088.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         650     5116702+  83  Linux
/dev/sda3             651        2088    11550735    5  Extended

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (651-2088, default 651):
Using default value 651
Last cylinder or +size or +sizeM or +sizeK (651-2088, default 2088): +400M

Command (m for help): t
Partition number (1-5): 5
Hex code (type L to list codes): fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (701-2088, default 701):
Using default value 701
Last cylinder or +size or +sizeM or +sizeK (701-2088, default 2088): +400M

Command (m for help): t
Partition number (1-6): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         650     5116702+  83  Linux
/dev/sda3             651        2088    11550735    5  Extended
/dev/sda5             651         700      401593+  fd  Linux raid autodetect
/dev/sda6             701         750      401593+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
Here we are implementing software RAID on two partitions of the same disk, which is fine for learning but does not protect against a failure of that disk. If you have two identical disks, the procedure is the same: repeat the steps of creating a partition and assigning it the fd type on each disk.
Create Mirrored Array
Now that our devices are ready, we can proceed to create the RAID mirror.
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sda6
mdadm: array /dev/md1 started.
The file /proc/mdstat contains the status of all md (RAID) devices. We can view the current state of the array by reading this file.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda6[1] sda5[0]
      401472 blocks [2/2] [UU]
      [===================>.]  resync = 96.1% (387712/401472) finish=0.0min speed=77542K/sec

unused devices: <none>
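Because /proc/mdstat is plain text, the resync progress can be extracted in a script. A minimal sketch, run here against a saved copy of the output above rather than the live file:

```shell
# Sample of the /proc/mdstat contents shown above
mdstat='md1 : active raid1 sda6[1] sda5[0]
      401472 blocks [2/2] [UU]
      [===================>.]  resync = 96.1% (387712/401472) finish=0.0min'

# Pull out just the resync percentage
echo "$mdstat" | sed -n 's/.*resync = \([0-9.]*%\).*/\1/p'   # → 96.1%
```

On a live system you would pipe `cat /proc/mdstat` into the same sed expression.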
Now that the RAID device is ready, we need to format it.
Create a filesystem on RAID device
We will format the RAID device with the ext3 filesystem using the mkfs.ext3 command. If you wish to use ext2 or ext4 instead, use the mkfs.ext2 or mkfs.ext4 command respectively.
# mkfs.ext3 /dev/md1
mke2fs 1.39 (29-May-2006)
warning: 63 blocks unused.

Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
100744 inodes, 401409 blocks
20073 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
49 block groups
8192 blocks per group, 8192 fragments per group
2056 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Mount the device
Now our device is ready to be used. To actually use it, we need to mount it. We create a directory as the mount point, and later add an entry to the /etc/fstab file so the device is mounted automatically at boot.
# mkdir /raid-mirror
# mount /dev/md1 /raid-mirror
We can create files on the new device:
# touch /raid-mirror/file1.txt
Now we modify /etc/fstab so the device is mounted automatically at boot. We can append the entry with the echo command:
# echo "/dev/md1 /raid-mirror ext3 defaults 0 0" >> /etc/fstab
You can now unmount the device and run the command 'mount -a' to check that everything is OK. This command reads all the entries in the /etc/fstab file and mounts any filesystems that are not already mounted.
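For a non-interactive check, the mount listing can also be filtered for the new device; a sketch (`grep -q` exits 0 only when the pattern matches, so the message prints only if the array is mounted):

```shell
# Print a confirmation only if /dev/md1 appears in the mount table
mount | grep -q '/dev/md1 on /raid-mirror' && echo "md1 mounted"
```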
# umount /raid-mirror/
# mount -a
# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md1 on /raid-mirror type ext3 (rw)
The output of mount command shows that the device /dev/md1 is successfully mounted on the directory '/raid-mirror'.
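One last step worth doing on many distributions: record the array in the mdadm configuration file so it is assembled under the same /dev/md1 name at every boot. The path of the file varies (/etc/mdadm.conf on Red Hat based systems, /etc/mdadm/mdadm.conf on Debian/Ubuntu), so adjust accordingly:

```shell
# Append an ARRAY line describing /dev/md1 to the mdadm config file
# (path assumed to be /etc/mdadm.conf; use /etc/mdadm/mdadm.conf on Debian)
mdadm --detail --scan >> /etc/mdadm.conf
```

The appended line identifies the array by its UUID, so it keeps working even if the underlying partition names change.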
In this tutorial, we learned how to create a Linux RAID 1 (mirror) array using mdadm. I hope you enjoyed reading this; please leave your suggestions in the comment section below.