mdadm is a Linux utility used to manage software RAID devices. The name is derived from the md (multiple device) device nodes it administers, and it replaced the earlier utility mdctl. The original name was "Mirror Disk", but it was changed as the functionality increased. It is free software, licensed under version 2 or later of the GNU General Public License, and is maintained by Neil Brown of SUSE, who holds the copyright.

Functionality

Types of physical device

mdadm can handle anything which presents to the kernel as a block device. This encompasses whole disks (/dev/sda), partitions (/dev/sda1) and USB flash drives.

RAID Configurations

Main article: Standard RAID levels
Non-RAID Configurations
Types of MD device

The original (standard) form was /dev/mdn, where n is a number between 0 and 99. More recent kernels have supported the use of names such as /dev/md/Home. Under kernel 2.4 and earlier these two were the only options, and both are non-partitionable.

From kernel 2.6 a new type of MD device was introduced, the partitionable array. The device names were modified by changing md to md_d, and the partitions were identified by adding pn; thus /dev/md/md_d2p3 for example. From kernel 2.6.28 non-partitionable arrays can be partitioned as well, the partitions being referred to in the same way as for partitionable arrays, for example /dev/md/md1p2.

Booting

Since support for MD is found in the kernel, there is an issue with using it before the kernel is running: MD support will not be present if the boot loader is either (e)LILO or GRUB legacy, and it may not be present for GRUB 2. To circumvent this problem, the /boot filesystem must be used either without MD support or with RAID 1 only. In the latter case the system boots by treating the RAID 1 device as a normal filesystem; once the system is running, it can be remounted as MD and the second disk added to it. This results in a catch-up resync, but /boot filesystems ought to be small.

Quick reference

Create an array

A RAID 1 (mirror) array is created from two partitions. If the partitions differ in size, the array is the size of the smaller partition. You can create a RAID 1 array with more than two devices, which gives you multiple copies of the data. Whilst there is little extra safety in this, it makes sense when you are creating a RAID 5 array for most of your disk space and using RAID 1 only for a small /boot partition; using the same partitioning for all member drives keeps things simple.

A RAID 5 volume is created from three or more partitions. If the partitions used in your RAID array are not the same size, mdadm will use the size of the smallest one. If you receive an error such as "mdadm: RUN_ARRAY failed: Invalid argument", make sure your kernel supports (either via a module or by being compiled in directly) the RAID mode you are trying to use; most modern kernels do, but you never know...

It is also possible to create a degraded mirror, with one half missing, by replacing a drive name with "missing"; the other half of the mirror is added to the set afterwards. This is useful when you are adding a disk to a computer which currently isn't mirrored: the new drive is partitioned, a degraded mirror is created on its partitions, and the existing data is copied across. The commands for these cases are sketched below.
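A minimal sketch of the create commands described above. The device names (/dev/md0, /dev/sda1, /dev/sdb1, /dev/sdc1) are placeholders; substitute your own partitions.

    # RAID 1 (mirror) from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # RAID 1 with three devices, i.e. three copies of the data
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

    # RAID 5 from three partitions
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

    # Degraded mirror with one half missing
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

    # Add the other half of the mirror to the set later
    mdadm --add /dev/md0 /dev/sda1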
The computer is then booted from the secondary drive (or a rescue disk), the now idle original disk can be repartitioned if required (there is no need to format it), and then the primary drive's submirrors are added. Note that the partition types should be changed to 0xFD with fdisk to indicate that they are RAID member devices.

Replace failing disk

First mark the failing disk as failed, then remove it from the array. After that it is safe to power down and physically replace the disk. Create partitions on the new disk as needed, add the new disk to the array, and then check /proc/mdstat to follow the resync of the device.

Recording the array

View the status of the multi-disk array md0 and append its details to the configuration file so that it is recognised the next time you boot. You may wish to keep a safe copy of /proc/mdstat; the information will allow you to restart the array manually if mdadm fails to do so.

Growing an array by adding devices

Add the new device to the array, then grow the array to use its space. In some configurations you may not be able to grow the array until you have removed the internal bitmap; you can add the bitmap back again after the array has been grown.

Growing an array by upgrading devices

An array may be upgraded by replacing the devices one by one, either as a planned upgrade or ad hoc as a result of replacing failed devices. Allow each new drive to resync. If replacing all the devices, repeat the above for each device, allowing the array to resync between repetitions. Finally, grow the array to use the maximum space available and then grow the filesystem(s) on the RAID array to use the new space.

Deleting an array

Stop the array and then remove the RAID metadata from each member device.

Convert an existing partition to RAID 5

Assume that the existing data is on /dev/sda1. A sketch of the commands for these operations follows.
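A minimal sketch of the commands for the operations above, assuming example device names (/dev/md0, /dev/md1, /dev/sdb1, /dev/sdc1 and so on) and an ext4 filesystem; adapt the names and paths to your own layout.

    # Replace a failing disk
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1
    # ...power down, swap the disk, partition it (type 0xFD), then:
    mdadm --manage /dev/md0 --add /dev/sdb1
    cat /proc/mdstat                                  # watch the resync

    # Recording the array (path may be /etc/mdadm.conf on some distributions)
    mdadm --detail /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # Growing an array by adding a device
    mdadm --add /dev/md0 /dev/sdc1
    mdadm --grow /dev/md0 --raid-devices=4
    # if the grow is refused because of the internal bitmap:
    mdadm --grow /dev/md0 --bitmap=none
    mdadm --grow /dev/md0 --raid-devices=4
    mdadm --grow /dev/md0 --bitmap=internal

    # Growing an array by upgrading devices (after the last member is replaced)
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0                                # grow an ext2/3/4 filesystem

    # Deleting an array
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sdb1                 # repeat for each member

    # Convert an existing partition to RAID 5: create a degraded array with
    # one slot 'missing', copy the data across, then add the old partition
    mdadm --create /dev/md1 --level=5 --raid-devices=3 missing /dev/sdb1 /dev/sdc1
    mkfs.ext4 /dev/md1
    mount /dev/md1 /mnt
    cp -a /data/. /mnt/                               # /data: where /dev/sda1 is mounted
    mdadm --add /dev/md1 /dev/sda1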
Known problems

A common error when creating RAID devices is that the dmraid driver has taken control of all the devices that are to be used in the new RAID device. Error messages like this will occur:

    mdadm: Cannot open /dev/sdb1: Device or resource busy

Typically, the solution to this problem is to add the "nodmraid" kernel parameter to the boot loader configuration. Another way this error can present itself is if the device mapper has claimed the drives. Issue 'dmsetup table' to see if the drive in question is listed; 'dmsetup remove <drive id>' will remove the drive from the device mapper and the "Device or resource busy" error will go away as well.

RAID already running

First check whether the device is already in use in another array by looking at /proc/mdstat. You will probably have to stop the array with 'mdadm --stop'. Check the /etc/mdadm/mdadm.conf file (and restart the system if possible). Then you should be able to delete the superblock of the device with 'mdadm --zero-superblock', after which it should no longer be busy. Sometimes dmraid "owns" the devices and will not let them go; there is a solution.[1]

Tweaking the kernel

To solve this problem, you need to build a new initrd without the dmraid driver; the original example did this on a system with the "2.6.18-8.1.6.el5" kernel. After this, the system has to be rebooted with the new initrd; edit your /boot/grub/grub.conf to achieve this.

Alternatively, if you have a self-customised kernel compiled from a distribution like Gentoo (the default option in Gentoo) which doesn't use an initrd, check the kernel .config file in /usr/src/linux for the device-mapper line CONFIG_BLK_DEV_DM. If it is set to "y", you might have to disable that option, recompile the kernel, put it in /boot and finally edit the GRUB configuration file in /boot/grub. PLEASE be careful NOT to disable CONFIG_BLK_DEV_MD (note the MD instead of DM), which is essential for RAID to work at all!

If both methods have not helped you, booting from a live CD probably will. The approach is to start the degraded RAID 1 mirror array, add a spare disk to it and let it sync; creating a new array shouldn't be more difficult, because the underlying problem is the same "Device or resource busy" error. It may be easier to try to assemble the devices automatically with 'mdadm --assemble --scan'. Remember to change the corresponding md* and hd* values to the corresponding ones from your system. You can monitor the sync progress in /proc/mdstat; when the sync is done you can reboot into your Linux installation normally.

Zeroing the superblock

Another way to prevent the kernel from autostarting the RAID is to remove all the previous RAID-related information from the disks before proceeding with the creation, by running 'mdadm --zero-superblock' on each member device, and then issuing the usual create command.

Recovering from a loss of RAID superblock

There are superblocks on the drives themselves and on the RAID (apparently). If you have a power failure or a hardware failure that does not include the drives themselves, cannot get the RAID to recover in any other way, and wish to recover the data, proceed as follows.

Get a list of the devices in the RAID in question with 'mdadm --detail'; the RaidDevice order (sdc1, sdd1, sdf1, sde1) and the Chunk Size are critical. Record all of your RAID member parameters with 'mdadm --examine' and look carefully at the Update Time. If you have RAID members attached to the motherboard and others attached to a RAID card, and the card fails but leaves enough members to keep the RAID alive, you want to make a note of that. Look at the Array State and Update Time of each member.
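A minimal sketch of recording those parameters, assuming the array is /dev/md3 and the members are the example devices sdc1-sdf1, sdk1 and sdl1; adjust the names for your system.

    # List the devices in the array; note the RaidDevice order and Chunk Size
    mdadm --detail /dev/md3

    # Record the parameters of every member and keep the copy somewhere safe
    mdadm --examine /dev/sd[cdefkl]1 > raid.status

    # Compare Update Time and Array State across the members
    mdadm --examine /dev/sd[cdefkl]1 | grep -E 'Update Time|Array State|/dev/sd'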
In the example, devices sdc1, sdd1, sde1 and sdf1 are the last members in the array and will rebuild correctly; sdk1 and sdl1 left the array (in my case due to a RAID card failure). Also note the RaidDevice number of each member, starting with 0: the RAID needs to be rebuilt in the same order. The chunk size is also important.

Zero the drive superblocks, then reassemble the RAID by re-creating it with the members listed in their original order and with the original chunk size, giving 'missing' in place of the members that are gone. 'missing' tells the create command to rebuild the RAID in a degraded state; sdk1 and sdl1 can be added later.

Edit /etc/mdadm.conf and add an ARRAY line with a UUID. First get the UUID for your RAID with 'mdadm --detail', then add something similar to the file, making sure there is no # in front of the active line you are adding, and save the file (<Ctrl-o> in nano). Last, mark the array as possibly dirty so that it resyncs, and monitor the rebuild through /proc/mdstat. All your data should be recovered! You can also use Netdata to monitor your mdadm disk arrays.[2]

Increasing RAID ReSync Performance

In order to increase the resync speed, we can use a write-intent bitmap, which mdadm will use to mark which areas of the array may be out of sync. Add the bitmap with the grow option, as in the example following the recovery sketch below.
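The recovery steps above might look like the following sketch. The array name (/dev/md3), RAID level, chunk size and placeholder UUID are assumptions added for illustration; take the real level, chunk size and member order from the --examine output recorded earlier.

    # Wipe the stale superblocks on the surviving members
    mdadm --stop /dev/md3
    mdadm --zero-superblock /dev/sd[cdef]1

    # Re-create the array with the members in their original RaidDevice order
    # and the original chunk size; 'missing' holds the slots of sdk1 and sdl1
    # so the array comes up degraded (level and chunk size are placeholders)
    mdadm --create /dev/md3 --level=6 --chunk=64 --raid-devices=6 \
          /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sde1 missing missing

    # Get the UUID and add an ARRAY line to /etc/mdadm.conf (placeholder UUID)
    mdadm --detail /dev/md3 | grep UUID
    echo 'ARRAY /dev/md3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx' >> /etc/mdadm.conf

    # Watch the rebuild
    watch cat /proc/mdstat

The bitmap itself is added with the grow option, for example (using /dev/md2, the device named in the verification step that follows):

    mdadm --grow /dev/md2 --bitmap=internal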
Then verify that the bitmap was added to the md2 device by checking /proc/mdstat or the output of 'mdadm --detail /dev/md2'. You can also adjust the Linux kernel resync speed limits by editing the files /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max, or by setting the same values with the sysctl utility.

Increasing RAID5 Performance

To help RAID 5 read/write performance, setting the read-ahead and stripe cache size[3][4] for the array provides noticeable speed improvements.

Note: this tip assumes sufficient RAM is available to the system; insufficient RAM can lead to data loss or corruption. The write performance (stripe cache size) and read performance (read-ahead) settings are sketched below, together with the resync speed limits. These changes must be redone after any reboot (add them to an init script so they are set on start-up).
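A minimal sketch of these tuning knobs, assuming the RAID 5 array is /dev/md0; the numeric values are illustrative starting points, not recommendations from the original text.

    # Resync speed limits (KiB/s per device): either write the /proc files...
    echo 50000  > /proc/sys/dev/raid/speed_limit_min
    echo 200000 > /proc/sys/dev/raid/speed_limit_max
    # ...or set the same values with sysctl
    sysctl -w dev.raid.speed_limit_min=50000
    sysctl -w dev.raid.speed_limit_max=200000

    # Write performance: enlarge the stripe cache (in pages per device; uses RAM)
    echo 8192 > /sys/block/md0/md/stripe_cache_size

    # Read performance: increase the read-ahead (in 512-byte sectors)
    blockdev --setra 65536 /dev/md0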
Monitoring Status

cat /proc/mdstat

Review the status of the arrays with cat /proc/mdstat; the output differs depending on what is configured. With no MD devices configured, it looks like the example below.
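A hedged illustration of what /proc/mdstat typically contains when no arrays are assembled (the Personalities line lists whichever RAID personalities are loaded, so it may not be empty on your system):

    $ cat /proc/mdstat
    Personalities :
    unused devices: <none>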