Chapter 5, Section 2: RAID (4.16)

mdadm options:
   -C / --create: create a new array
   -A / --assemble: assemble (activate) an existing array
   -D / --detail: print the details of an array device
   -s / --scan: scan the configuration file or /proc/mdstat for missing array information
   -f / --fail: mark a device in the array as faulty
   -a / --add: add a device to the array (manage mode)
   -a yes / --auto=yes: when creating an array, automatically create the array device file;
          without this parameter you must first create the RAID device node with mknod,
          so using "-a yes" is recommended
   -v / --verbose: display detailed information
   -r / --remove: remove a device from the array
   -l / --level=: set the level (mode) of the array; the supported modes are linear, raid0,
                              raid1, raid4, raid5, raid6, raid10, multipath, faulty, and container
   -n / --raid-devices=: specify the number of array members (partitions/disks), i.e. the number of active disks
   -x / --spare-devices=: specify the number of hot spares in the array; the number of active disks,
                             plus the number of spare disks should equal the total number of disks in the array
   -c / --chunk=: set the chunk size of the array, in KB
   -G / --grow: change the size or shape of the array (e.g. convert an active disk to a hot spare, or a hot spare to an active disk)


The disks participating in the creation of an array can be named with shell expansion: {}, []
   for example: /dev/sd{b,c}1 or /dev/sd[b,c]1
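Both forms expand to the same member list; a quick shell check (illustrative only):

          echo /dev/sd{b,c}1     # brace expansion -> /dev/sdb1 /dev/sdc1
          ls /dev/sd[b,c]1       # bracket glob, matches the existing device files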


1. Create RAID0
(1) Create new partitions: two hard disks are needed, /dev/sdb and /dev/sdc
    fdisk /dev/sdb (and likewise /dev/sdc) --> n, create a new partition (a single primary partition using all available space) --> p, view the partition table
(2) Modify the partition type: the default type is 83 (Linux);
     change it to the RAID type: t (change the type) --> l, list the type codes --> fd (Linux raid autodetect) --> p, view the partitions
(3) Save the partition table: w
(4) View the result: fdisk -l /dev/sdb /dev/sdc (multiple disks can be viewed at the same time)
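As a non-interactive alternative to the fdisk dialog above, the same layout can be
scripted with parted (a sketch, assuming parted is installed and the disks are empty):

          # one primary partition spanning each disk, flagged for RAID autodetection
          parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100% set 1 raid on
          parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100% set 1 raid on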

(5) Start creating RAID0:
           mdadm -C /dev/md0 -a yes -l0 -n2 /dev/sd{b,c}1
(6) Check the raid0 status:
           1) cat /proc/mdstat
           2) mdadm -D /dev/md0: print the detailed information of the array /dev/md0
(7) Create the RAID configuration file /etc/mdadm.conf (create the file and enter the array information so the array can be assembled automatically):
                  # echo DEVICE /dev/sd{b,c}1 >> /etc/mdadm.conf
                  # mdadm -Ds >> /etc/mdadm.conf
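The file then holds a DEVICE line plus the ARRAY line printed by mdadm -Ds, roughly
like the following (the UUID and name are placeholders, not real output):

          DEVICE /dev/sdb1 /dev/sdc1
          ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx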
(8) Format the disk array:
            mkfs.ext4 /dev/md0
(9) Create a mount point and mount:
            mkdir /mnt/raid0
            mount /dev/md0 /mnt/raid0
          Write the mount to /etc/fstab:
To have the RAID device mounted automatically at the next boot, write the mount information into the /etc/fstab file (a sample entry follows).
Then reboot and verify with df -h.
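A minimal /etc/fstab entry for the array created above might look like this (assuming
the ext4 filesystem from step (8)):

          /dev/md0    /mnt/raid0    ext4    defaults    0 0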

2. Create RAID1
(1) Create new partitions and change the partition type to fd (Linux raid autodetect), as in section 1
(2) Start creating RAID1:
           mdadm -C /dev/md1 -a yes -l1 -n2 /dev/sd[d,e]1
(3) Check the status of raid1:
           1) cat /proc/mdstat
           2) mdadm -D /dev/md1: print the details of the array /dev/md1       
(4) Add raid1 to the RAID configuration file /etc/mdadm.conf and tidy it up (mdadm -Ds prints every active array, so delete any duplicated ARRAY lines):
               # echo DEVICE /dev/sd{d,e}1 >> /etc/mdadm.conf
               # mdadm -Ds >> /etc/mdadm.conf
(5) Format the disk array
               mkfs.ext4 /dev/md1
(6) Create a mount point and mount:
                 mkdir /mnt/raid1
                 mount /dev/md1 /mnt/raid1/
         Write the mount information to the /etc/fstab file, then reboot and verify with df -h.
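Since md device numbering can change between boots, an /etc/fstab entry keyed on the
filesystem UUID is more robust (a sketch; the UUID is a placeholder reported by blkid):

          # blkid /dev/md1   -> UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
          UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid1  ext4  defaults  0 0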

3. Create RAID5
    Four hard disks are used: three as active disks and one as a hot spare.
(1) Create new partitions and change the partition type to fd (Linux raid autodetect), as in section 1
(2) Create RAID5:
          mdadm -C /dev/md5 -a yes -l5 -n3 -x1 /dev/sd[f,g,h,i]1
(3) Check the raid5 status:
           1) cat /proc/mdstat
           2) mdadm -D /dev/md5: print the details of the array /dev/md5       
(4) Add raid5 to the RAID configuration file /etc/mdadm.conf (again removing any duplicated ARRAY lines):
               # echo DEVICE /dev/sd{f,g,h,i}1 >> /etc/mdadm.conf
               # mdadm -Ds >> /etc/mdadm.conf
(5) Format the disk array
               mkfs.ext4 /dev/md5
(6) Create a mount point and mount:
                 mkdir /mnt/raid5
                 mount /dev/md5 /mnt/raid5/
 Write the mount information into the /etc/fstab file, then reboot and verify with df -h.
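Optionally, the mkfs.ext4 call in step (5) can be aligned to the RAID5 geometry; with
the default 512 KB chunk and 4 KB blocks, stride = 512/4 = 128 and stripe-width =
128 * 2 data disks = 256 (a sketch; adjust to the chunk size reported by mdadm -D):

          # stride = chunk size / block size, stripe-width = stride * (active disks - 1)
          mkfs.ext4 -E stride=128,stripe-width=256 /dev/md5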

4. RAID maintenance
(1) Start (assemble) the array:
              mdadm -As /dev/md0
                     -A: assemble the existing array
                     -s: based on the /etc/mdadm.conf configuration file
         If /etc/mdadm.conf is missing (or was never created), specify the member devices explicitly:
              mdadm -A /dev/md0 /dev/sd[bc]1
(2) Stop the array:
              mdadm -S /dev/md0
(3) Show the array details:
             mdadm -D /dev/md0
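If the configuration file was lost, the ARRAY lines can be regenerated by scanning the
on-disk superblocks (a sketch using mdadm's examine-scan mode):

          # -E / --examine reads per-device RAID metadata; -s scans all devices
          mdadm -Es >> /etc/mdadm.conf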

5. Simulate disk damage in raid5 to demonstrate soft RAID maintenance
(1) Write test data into the /mnt/raid5 mount directory (make the file reasonably large, using dd):
            cd /mnt/raid5
           dd if=/dev/zero of=test_raid5.failed bs=100M count=10
                     (this creates the roughly 1 GB file test_raid5.failed)
(2) Simulate disk damage, using /dev/sdh1 as the failed disk:
            mdadm /dev/md5 -f /dev/sdh1
(3) Check the rebuild status (check right away, otherwise the rebuild may already have finished).
       When a disk is marked as faulty, the hot spare automatically replaces the faulty disk, and the
    array is rebuilt in a short time. The current array status can be seen
    by viewing the /proc/mdstat file:
          cat /proc/mdstat
          mdadm -D /dev/md5
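To follow the rebuild in real time instead of re-running cat by hand (illustrative):

          # refresh the rebuild progress display every second
          watch -n 1 cat /proc/mdstat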
(4) Check that the previously written test data still exists (it should be intact, not lost):
            the file test_raid5.failed should still be present and readable
(5) Check the array status after rebuilding (should be back to normal)
            cat /proc/mdstat
(6) Remove the damaged disk
             mdadm /dev/md5 -r /dev/sdh1
(7) Add a new hot spare disk
          Add the hard disk that previously simulated the failure back into raid5:
               mdadm /dev/md5 -a /dev/sdh1
(8) View the raid5 array status:
            mdadm -D /dev/md5    (/dev/sdh1 has now become a hot spare)

6. Add a storage hard disk to the RAID
(1) Add a new hard disk (adding a hot spare disk)
          mdadm /dev/md5 -a /dev/sdj1
(2) Convert the hot spare disk to an active disk:
           mdadm -G /dev/md5 -n4
(3) Expand the file system (i.e. refresh the total capacity to include the new disk):
          df -h
          resize2fs /dev/md5
          df -h
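Growing to four active disks triggers a reshape, and it is safer to let it finish before
resizing the filesystem; a minimal sketch, assuming the ext4 array created above:

           mdadm -G /dev/md5 -n4                                  # convert the spare into an active member
           while grep -q reshape /proc/mdstat; do sleep 10; done  # wait for the reshape to complete
           resize2fs /dev/md5                                     # grow ext4 to fill the enlarged array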
(4) Modify the configuration file /etc/mdadm.conf
(5) Reboot

7. Deleting a software disk array
(1) Stop the RAID (a complete teardown sketch follows this section):
           mdadm --stop /dev/md0
           mdadm --misc --zero-superblock /dev/sdb1 (/dev/sdb1 is a member of /dev/md0)
           mdadm --misc --zero-superblock /dev/sdc1 (/dev/sdc1 is a member of /dev/md0)
           (repeat --zero-superblock for every member device of the array)
(2) To prevent the system from trying to start the raid at boot,
          edit /etc/mdadm.conf with vim and delete the device-related information
(3) To reduce the number of disks in a RAID,
          use the grow mode of mdadm
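Putting step (1) together, a minimal teardown sketch (the filesystem must be unmounted
before the array can be stopped; device names assume the RAID0 example from section 1):

           umount /mnt/raid0                                # unmount the filesystem first
           mdadm --stop /dev/md0                            # stop the array
           mdadm --zero-superblock /dev/sdb1 /dev/sdc1      # wipe RAID metadata from the members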
