Posted by wsjz_01 on 2018-5-19 07:08:14

Linux command: mdadm

  Overview of the mdadm command:

mdadm is the Linux command for creating and managing software RAID arrays; it is a mode-based command.
Because servers today usually ship with hardware RAID cards, which are themselves inexpensive, and because software RAID has inherent drawbacks -- it cannot be used for the boot partition and it is implemented on the CPU, eating into CPU capacity -- it is rarely appropriate in production.
For learning purposes, however, it is a good way to simulate RAID and understand RAID principles and management.


1. Command format:
mdadm [mode] <raid-device> [options] <component-devices>
2. Command function:
      Create, manage, and monitor Linux software RAID (md) arrays.
  

        3. Command options (mdadm is a mode-based command):
      3.1. Create mode: -C (--create)
      Mode-specific options:

        -l #: RAID level (linear, raid0, raid1, raid4, raid5, raid6, raid10)
        -n #: number of active member devices
        -a {yes|no}: whether to create the device file automatically
        -c #: chunk size, a power of 2, default 64K (chunk 64K / block 4K = stride 16; see example 9)
        -x #: number of spare (hot-standby) devices
       e.g. mdadm -C /dev/md0 -a yes -l 5 -n 3 /dev/sdb{5,6,7}
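      If you have no spare disk partitions to practice on, the same create syntax also works against loop devices backed by plain files. A minimal sketch (the /tmp/md-disk*.img paths and loop device numbers are only illustrative):
       # dd if=/dev/zero of=/tmp/md-disk0.img bs=1M count=128
       # dd if=/dev/zero of=/tmp/md-disk1.img bs=1M count=128
       # losetup /dev/loop0 /tmp/md-disk0.img
       # losetup /dev/loop1 /tmp/md-disk1.img
       # mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/loop{0,1}
       # cat /proc/mdstat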
  

      3.2. Manage mode
      -a (--add): add a device to the array
      -r (--remove): remove a device from the array
      -f (--fail): mark a device as faulty
       e.g. mdadm /dev/md2 --fail /dev/sda7
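      A typical disk replacement chains the three manage-mode actions together; a sketch with illustrative device names (/dev/sdb9 here is hypothetical):
       # mdadm /dev/md0 -f /dev/sdb6     *** mark the member faulty
       # mdadm /dev/md0 -r /dev/sdb6     *** hot-remove it from the array
       # mdadm /dev/md0 -a /dev/sdb9     *** add the replacement partition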
  

      3.3. Show detailed information about a RAID array

      -D (--detail)   e.g. mdadm -D /dev/md2
  

      3.4. Stop an array:
      -S (--stop)    e.g. mdadm -S /dev/md0
  

  
      3.5. Monitor mode: -F (--follow or --monitor)
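      For example, monitor mode can run as a daemon that mails root when an array degrades (a sketch; mail delivery assumes a working local MTA):
       # mdadm --monitor --scan --daemonise --mail=root --delay=300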

      3.6. Grow mode: -G (--grow): change the array's capacity or the number of devices in it
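      For example, after adding a fourth member you could reshape a 3-disk RAID5 to 4 disks and then enlarge the filesystem on it (a sketch only; a reshape runs for a long time and the data should be backed up first):
       # mdadm /dev/md0 -a /dev/sdb8
       # mdadm -G /dev/md0 -n 4
       # resize2fs /dev/md0     *** grow the ext3 filesystem to fill the larger array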

      3.7. Assemble mode: -A (--assemble)
     -A: reassemble a previously defined array and bring it back into service.
     Automatic assembly only works after saving the array definition: mdadm -D --scan > /etc/mdadm.conf
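     Once the array has been stopped it can also be reassembled explicitly from its member devices; a sketch:
      # mdadm -A /dev/md0 /dev/sdb{5,6,7}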
  

  
4. Usage examples:
Example 1. Use three logical partitions to create a RAID0 array (RAID5, hot spares, and array removal follow in the later examples).
  # fdisk /dev/sdb
  Command (m for help): p

Disk /dev/sdb: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         244     1959898+  83  Linux
/dev/sdb2             245         488     1959930   83  Linux
/dev/sdb3             489         732     1959930   83  Linux
/dev/sdb4             733        2088    10892070    5  Extended
/dev/sdb5             733         982     2008093+  fd  Linux raid autodetect
/dev/sdb6             983        1232     2008093+  83  Linux
/dev/sdb7            1233        1482     2008093+  83  Linux
/dev/sdb8            1483        1732     2008093+  83  Linux
  

  Command (m for help): t
Partition number (1-8): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-8): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-8): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         244     1959898+  83  Linux
/dev/sdb2             245         488     1959930   83  Linux
/dev/sdb3             489         732     1959930   83  Linux
/dev/sdb4             733        2088    10892070    5  Extended
/dev/sdb5             733         982     2008093+  fd  Linux raid autodetect
/dev/sdb6             983        1232     2008093+  fd  Linux raid autodetect
/dev/sdb7            1233        1482     2008093+  fd  Linux raid autodetect
/dev/sdb8            1483        1732     2008093+  fd  Linux raid autodetect
  Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
# partprobe /dev/sdb
  # cat /proc/partitions
major minor  #blocks  name
   8    16   16777216 sdb
   8    17    1959898 sdb1
   8    18    1959930 sdb2
   8    19    1959930 sdb3
   8    20          0 sdb4
   8    21    2008093 sdb5
   8    22    2008093 sdb6
   8    23    2008093 sdb7
   8    24    2008093 sdb8
  # mdadm -C /dev/md0 -a yes -l 0 -n 3 /dev/sdb{5,6,7}
mdadm: /dev/sdb5 appears to contain an ext2fs file system
    size=3919616K  mtime=Tue Apr 19 15:45:33 2016
Continue creating array? (y/n) y            *** confirm creation of the striped (-l 0) array md0
  mdadm: array /dev/md0 started.
# cat /proc/mdstat
Personalities :
md0 : active raid0 sdb7 sdb6 sdb5
      6024000 blocks 64k chunks      
unused devices: <none>


  # mke2fs -j /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
753664 inodes, 1506000 blocks
75300 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1543503872
46 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736
  Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
  This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
  

  # fdisk -l
  Disk /dev/md0: 6168 MB, 6168576000 bytes
2 heads, 4 sectors/track, 1506000 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
  Disk /dev/md0 doesn't contain a valid partition table
  

  # mount /dev/md0   /mnt/
# ls   /mnt/
lost+found
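  To have the filesystem mounted automatically at boot you can add it to /etc/fstab (a sketch; the mount point /mnt/md0 is arbitrary, and using the UUID reported by blkid is safer on systems that renumber md devices at boot):
  # blkid /dev/md0
  # mkdir -p /mnt/md0
  # echo '/dev/md0  /mnt/md0  ext3  defaults  0 0' >> /etc/fstab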
  

   Example 2. Remove the array that is in use
  # umount /mnt/
# mdadm -S /dev/md0
mdadm: stopped /dev/md0
# rm /dev/md0
rm: remove block special file `/dev/md0'? y
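  The member partitions still carry md superblocks at this point, which is why mdadm warns that they "appear to be part of a raid array" in the next example. To reuse them as completely clean devices you can wipe those superblocks first (a sketch):
  # mdadm --zero-superblock /dev/sdb{5,6,7}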
  

   Example 3. Create a RAID5 (-l 5) array as md0
  # mdadm -C /dev/md0 -a yes -l 5 -n 3 /dev/sdb{5,6,7}
mdadm: /dev/sdb5 appears to contain an ext2fs file system
    size=6024000K  mtime=Wed Nov  2 13:58:51 2016
mdadm: /dev/sdb5 appears to be part of a raid array:
    level=raid0 devices=3 ctime=Wed Nov  2 13:38:28 2016
mdadm: /dev/sdb6 appears to be part of a raid array:
    level=raid0 devices=3 ctime=Wed Nov  2 13:38:28 2016
mdadm: /dev/sdb7 appears to be part of a raid array:
    level=raid0 devices=3 ctime=Wed Nov  2 13:38:28 2016
Continue creating array? y
  mdadm: array /dev/md0 started.
  

  # cat /proc/mdstat          *** while the RAID is still being built
  Personalities :    
md0 : active raid5 sdb7 sdb6 sdb5
      4016000 blocks level 5, 64k chunk, algorithm 2
      [==>..................]  recovery = 10.8% (217256/2008000) finish=0.8min speed=36209K/sec


  # cat /proc/mdstat               *** after the RAID build has completed
Personalities :
md0 : active raid5 sdb7 sdb6 sdb5
      4016000 blocks level 5, 64k chunk, algorithm 2
  

   实例4.查看/dev/md0的RAID信息
  # mdadm -D /dev/md0

  /dev/md0:
      Version : 0.90
Creation Time : Wed Nov  2 14:10:43 2016
   Raid Level : raid5
   Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
   Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Nov  2 14:11:46 2016
          State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

         Layout : left-symmetric
   Chunk Size : 64K

          UUID : 4c9da4ca:a6715458:70702cb1:47eb89e2
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       21      0      active sync   /dev/sdb5
       1       8       22      1      active sync   /dev/sdb6
       2       8       23      2      active sync   /dev/sdb7
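  -D describes the assembled array as a whole; to inspect the md superblock stored on a single member device, use -E (--examine), e.g.:
  # mdadm -E /dev/sdb5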
  

   Example 5. Add a hot spare to the RAID5 array
  # mdadm /dev/md0 -a /dev/sdb8

  # mdadm -D /dev/md0
  Raid Level : raid5
  Number   Major   Minor   RaidDevice State
       0       8       21      0      active sync   /dev/sdb5
       1       8       22      1      active sync   /dev/sdb6
       2       8       23      2      active sync   /dev/sdb7

       3       8       24      -      spare   /dev/sdb8
  

Example 6. Simulate a failure of /dev/sdb6 in the RAID5 array
  # mdadm /dev/md0 -f /dev/sdb6
mdadm: set /dev/sdb6 faulty in /dev/md0
# mdadm -D /dev/md0
  Number   Major   Minor   RaidDevice State
       0       8       21      0      active sync   /dev/sdb5
       3       8       24      1      spare rebuilding   /dev/sdb8
       2       8       23      2      active sync   /dev/sdb7

       4       8       22      -      faulty spare   /dev/sdb6
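  While /dev/sdb8 is rebuilding, the progress can be followed live (watch comes from the procps package; quit with Ctrl-C):
  # watch -n 1 cat /proc/mdstat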
  # mdadm /dev/md0 -r /dev/sdb6    (--remove)
mdadm: hot removed /dev/sdb6
  # mdadm -D /dev/md0
  Number   Major   Minor   RaidDevice State
       0       8       21      0      active sync   /dev/sdb5
       1       8       24      1      active sync   /dev/sdb8
       2       8       23      2      active sync   /dev/sdb7
  

Example 7. Create the RAID5 array with a hot spare from the start
  # mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sdb{5,6,7,8}
mdadm: /dev/sdb5 appears to contain an ext2fs file system
    size=6024000K  mtime=Wed Nov  2 13:58:51 2016

  mdadm: /dev/sdb5 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  2 14:10:43 2016
mdadm: /dev/sdb6 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  2 14:10:43 2016
mdadm: /dev/sdb7 appears to contain an ext2fs file system
    size=6024000K  mtime=Wed Nov  2 13:58:51 2016
mdadm: /dev/sdb7 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  2 14:10:43 2016
mdadm: /dev/sdb8 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  2 14:10:43 2016
Continue creating array? y
mdadm: array /dev/md0 started.
# cat /proc/mdstat
Personalities :
md0 : active raid5 sdb7 sdb8(S) sdb6 sdb5
      4016000 blocks level 5, 64k chunk, algorithm 2
      [>....................]  recovery = 4.7% (95608/2008000) finish=0.9min speed=31869K/sec
  

Example 8. Save the current RAID information to the config file for automatic assembly
  # mdadm -D --scan > /etc/mdadm.conf
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=d97ca386:1349206d:89aa5d1e:96d23bd4
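  With this file in place, a stopped array can later be brought back without naming its members (note that Debian-based systems keep the file at /etc/mdadm/mdadm.conf instead):
  # mdadm -S /dev/md0
  # mdadm -A --scan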
  

Example 9. Specify the stride size to speed up disk access
  # cat /proc/mdstat       *** chunk 64K / block 4K = stride 16
  Personalities :
md0 : active raid5 sdb7 sdb8(S) sdb6 sdb5
      4016000 blocks level 5, 64k chunk, algorithm 2

# mke2fs -j -E stride=16 -b 4096 /dev/md0
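  The stride is simply the chunk size divided by the filesystem block size (64K / 4K = 16). Newer e2fsprogs also accept the full stripe width, i.e. stride times the number of data disks (16 x 2 = 32 for this 3-disk RAID5); a sketch, option names per mke2fs(8):
  # mke2fs -j -b 4096 -E stride=16,stripe_width=32 /dev/md0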
  

  

---end---