Restoring your RAID array

I'm very lucky these days: two weeks ago my laptop hard drive died, and today I lost a drive in my RAID array.

A few months ago I moved my RAID array from SATA to USB, but USB components seem to be of lower quality than SATA ones. From time to time I see "USB reset" errors when reading or writing a lot of data; fortunately they only stall transfers for a few seconds before resuming.

Today my server was fscking the array when a disk failed badly. It looks like the external drive's USB controller crashed: the disk was no longer visible from my server, and I had to restart the external drive manually.

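Before touching the array, it's worth checking that the kernel actually sees the disk again after the restart; something along these lines does the job (the device name is whatever the kernel assigned, /dev/sdd in my case):

$ dmesg | tail
$ grep sdd /proc/partitions
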
Since it's a RAID 5 array, everything was still working fine, in degraded mode:

$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sat Jan 19 17:03:46 2008
     Raid Level : raid5
     Array Size : 1465151808 (1397.28 GiB 1500.32 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jul 22 22:16:04 2009
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 02918692:35547ed8:ccb6a325:e4cda885 (local to host seth)
         Events : 0.3774

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       0        0        1      removed
       2       8       81        2      active sync   /dev/sdf1
       3       8       33        3      active sync   /dev/sdc1

Then I added the missing drive back:

$ sudo mdadm --manage /dev/md0 --add /dev/sdd1
mdadm: re-added /dev/sdd1
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sat Jan 19 17:03:46 2008
     Raid Level : raid5
     Array Size : 1465151808 (1397.28 GiB 1500.32 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jul 22 22:19:23 2009
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 02918692:35547ed8:ccb6a325:e4cda885 (local to host seth)
         Events : 0.3780

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       4       8       49        1      spare rebuilding   /dev/sdd1
       2       8       81        2      active sync   /dev/sdf1
       3       8       33        3      active sync   /dev/sdc1

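There is apparently no write-intent bitmap on this array (none shows up in mdadm --detail), so mdadm has to resync the whole disk even though it was only gone for a few minutes. An internal bitmap can be added once the rebuild is finished, roughly like this (at a small cost in write performance), and then a re-add after a transient failure only resyncs the blocks written while the disk was away:

$ sudo mdadm --grow --bitmap=internal /dev/md0
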
Now I can monitor the slow process of rebuilding the array:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[0] sdc1[3] sdf1[2]
      1465151808 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
      [>....................]  recovery =  2.2% (11184896/488383936) finish=661.5min speed=12019K/sec

unused devices: <none>

At least 11 hours to rebuild! So, for people as stupid as me: don't use USB for a RAID array unless you really have to; it's not very safe, and it's a bit slow.
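
For what it's worth, the rebuild is easy to keep an eye on with watch, and the kernel's resync speed limits can be raised if the rebuild is being throttled in favour of normal I/O. Something like this (the values are per-device KB/s and just an example; with USB the bus itself is probably the real bottleneck):

$ watch -n 60 cat /proc/mdstat
$ cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
$ echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min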

Comments

Add one by sending me an email.

  • From Nats:
    Life is tough.