[root@linuxidc.com ~]# mdadm -D /dev/md0      ## display detailed information about our array
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jun 4 10:17:02 2016
     Raid Level : raid5                       ## RAID level
     Array Size : 251527168 (239.88 GiB 257.56 GB)
  Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jun 4 10:27:34 2016
          State : clean                       ## state is normal
 Active Devices : 3                           ## number of active disks
Working Devices : 4                           ## total number of working disks (including the spare)
 Failed Devices : 0                           ## no failed disks
  Spare Devices : 1                           ## number of spare disks

         Layout : left-symmetric
     Chunk Size : 512K

           Name : linuxidc.com:0  (local to host linuxidc.com)
           UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
         Events : 127

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       3       8       64        -      spare         /dev/sde    ## this disk is currently idle, acting as the spare
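Besides mdadm -D, a quicker way to glance at the same state (not captured in the original session) is the kernel's own status file, which lists every md array on one screen and also shows rebuild progress:

[root@linuxidc.com ~]# cat /proc/mdstat      ## compact view of all md arrays, member disks and any recovery in progress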
4. Format the array

[root@linuxidc.com ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
15720448 inodes, 62881792 blocks
3144089 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2210398208
1919 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done     ## formatting succeeded!

5. Mount the device and check that everything works

[root@linuxidc.com ~]# mkdir /md0dir
[root@linuxidc.com ~]# mount /dev/md0 /md0dir/
[root@linuxidc.com ~]# mount
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=100136k,mode=700)
/dev/md0 on /md0dir type ext4 (rw,relatime,seclabel,stripe=256,data=ordered)     ## temporary mount succeeded
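The fstab entry below references the array by its device path /dev/md0. An alternative, not shown in the original article, is to mount by filesystem UUID, which you could first read off with blkid; the UUID printed is of course specific to your own system:

[root@linuxidc.com ~]# blkid /dev/md0        ## prints the ext4 filesystem UUID, which could be used in /etc/fstab instead of /dev/md0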
[root@linuxidc.com ~]# vim /etc/fstab        ## configure the device to be mounted automatically at boot
# /etc/fstab
# Created by anaconda on Wed May 11 18:44:18 2016
#
# Accessible filesystems, by reference, are maintained under "/dev/disk"
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=267aae0a-088b-453f-a470-fec8fcdf772f /         xfs   defaults  0 0
UUID=d8d9403c-8fa1-4679-be9b-8e236d3ae57b /boot     xfs   defaults  0 0
UUID=7f62d6d9-9eda-4871-b2d7-2cbd2bc4cc89 /testdir  xfs   defaults  0 0
UUID=abba10f4-18b3-4bc3-8cca-22ad619fadef swap      swap  defaults  0 0
/dev/md0                                  /md0dir   ext4  defaults  0 0

[root@linuxidc.com ~]# mount -a              ## mount everything listed in fstab that is not yet mounted
[root@linuxidc.com ~]# cd /md0dir/           ## enter the mount point and create a file to verify it works
[root@linuxidc.com md0dir]# ls
lost+found
[root@linuxidc.com md0dir]# touch 1.txt
[root@linuxidc.com md0dir]# ls
1.txt  lost+found
[root@linuxidc.com md0dir]#

6. Now let's simulate a disk failure and see how the RAID reacts

[root@linuxidc.com md0dir]# mdadm /dev/md0 -f /dev/sdd     ## mark /dev/sdd as faulty
mdadm: set /dev/sdd faulty in /dev/md0
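As soon as a member is marked faulty, the spare /dev/sde is pulled in and the rebuild starts. To follow its progress in real time (this command is not part of the original transcript, and the interval is just an example) you could run:

[root@linuxidc.com md0dir]# watch -n 2 cat /proc/mdstat    ## refresh the recovery/rebuild progress every 2 seconds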
[root@linuxidc.com md0dir]# mdadm -D /dev/md0     ## display the RAID information again
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jun 4 10:17:02 2016
     Raid Level : raid5
     Array Size : 251527168 (239.88 GiB 257.56 GB)
  Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jun 4 11:55:39 2016
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1                           ## the counters have changed: one device is now marked as failed
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : linuxidc.com:0  (local to host linuxidc.com)
           UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
         Events : 129

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync        /dev/sdb
       1       8       32        1      active sync        /dev/sdc
       3       8       64        2      spare rebuilding   /dev/sde    ## the spare /dev/sde is now rebuilding the data
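Why can we still read and write files while the array is degraded and rebuilding? RAID-5 keeps one block of XOR parity per stripe, so any single missing block can be recomputed on the fly from the surviving disks. As a rough illustration (my own example, not from the original article), with two data blocks D1, D2 and parity P:

P  = D1 xor D2        ## parity written while the stripe was healthy
D2 = D1 xor P         ## a lost data block is reconstructed from the survivors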
[root@linuxidc.com md0dir]# cd
[root@linuxidc.com ~]# cd /md0dir/
[root@linuxidc.com md0dir]# ls
1.txt  lost+found
[root@linuxidc.com md0dir]# touch 2.txt
[root@linuxidc.com md0dir]# ls
1.txt  2.txt  lost+found                     ## everything still works normally

7. Next, remove the disk we just failed

[root@linuxidc.com md0dir]# mdadm /dev/md0 -r /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0
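For what it's worth, mdadm accepts several device operations in a single invocation, so the fail and remove steps could also have been combined; this is just an alternative form, not what the original session used:

[root@linuxidc.com md0dir]# mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd    ## mark as faulty and hot-remove in one command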
[root@linuxidc.com md0dir]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jun 4 10:17:02 2016
     Raid Level : raid5
     Array Size : 251527168 (239.88 GiB 257.56 GB)
  Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jun 4 12:07:12 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : linuxidc.com:0  (local to host linuxidc.com)
           UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
         Events : 265

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       64        2      active sync   /dev/sde    ## only three disks are left in the RAID array now

8. If another disk fails our data will be damaged, so let's add a disk back as a spare

[root@linuxidc.com md0dir]# mdadm /dev/md0 -a /dev/sdd     ## since I have no more disks available, I re-added the one that was just removed
mdadm: re-added /dev/sdd
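mdadm reports "re-added" here because /dev/sdd still carries this array's metadata from before it was removed; a brand-new disk would simply be reported as added. If you ever recycle a disk that belonged to a different array, its old metadata would normally be wiped first; the device name below is only a placeholder:

[root@linuxidc.com md0dir]# mdadm --zero-superblock /dev/sdX    ## erase a stale md superblock from a recycled disk (placeholder device name)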
[root@linuxidc.com md0dir]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jun 4 10:17:02 2016
     Raid Level : raid5
     Array Size : 251527168 (239.88 GiB 257.56 GB)
  Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jun 4 12:11:54 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : linuxidc.com:0  (local to host linuxidc.com)
           UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
         Events : 266

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       64        2      active sync   /dev/sde
       4       8       48        -      spare         /dev/sdd    ## OK, we have a spare disk again
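One final suggestion, not part of the original article: to make sure the array is assembled under the same name after a reboot, its definition is usually recorded in mdadm's configuration file (the path is /etc/mdadm.conf on CentOS/RHEL and /etc/mdadm/mdadm.conf on Debian/Ubuntu):

[root@linuxidc.com md0dir]# mdadm --detail --scan >> /etc/mdadm.conf    ## append an ARRAY line describing /dev/md0 so it assembles consistently at boot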