Moving a RAID array

I blogged about how to create a RAID-5 array with GNU/Linux in a previous entry.

So my home server is running a 4-disk RAID-5 array (about 500 GB each, for a total of 1.5 TB available). I used SATA disks plugged directly into the server. For some silly reason I wanted to move the disks out of the server and connect them through a USB interface (no, not eSATA: USB. I told you it was silly, but it could have been worse, like building a floppy RAID array).

I bought 4 Icy Box enclosures and a USB hub. Shut down the server, moved the disks into the external enclosures, plugged everything in (lots of wires), switched on the server, cried.

Icyboxes

As expected, it didn't work right away. My Debian server dropped into a maintenance shell, complaining that it was unable to check /dev/md0 (the array).

No problem, I tried a simple command:

# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

The array assembled, I exited the shell, Debian finished booting, and everything worked. But, just to be sure, I rebooted. Again, the array was not recognized. After a bit of googling and man-page reading, I tried the same command with one extra option:
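After assembling by hand like this, a few standard commands show what the kernel and mdadm think of the array (the device names here are the ones from my setup):

```shell
# Kernel view of all md arrays: members, RAID level, sync status
cat /proc/mdstat

# Detailed mdadm view: array state, UUID, name and member devices
mdadm --detail /dev/md0

# Inspect one member's superblock directly
# (useful when auto-assembly fails and you wonder why)
mdadm --examine /dev/sdb1
```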

# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --update=homehost

I was not entirely sure what it did at the time, but according to mdadm(8), --update=homehost rewrites the "homehost" recorded in each member's superblock to match the local machine, so the array is no longer treated as foreign. After rebooting, the array was recognized and assembled automatically.
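For the record, the usual Debian way to make an array assemble reliably at every boot (this is the standard mdadm workflow, not something I had to do here) is to record it in mdadm.conf and refresh the initramfs:

```shell
# Append the current array definition to mdadm's config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early at boot
update-initramfs -u
```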

So YES, you can change the controller interface used to connect the disks of a RAID array.

What about speed? Well… high speed is obviously not the point of this experiment 😉 (but at least it sustains > 30 MB/s in continuous reads).
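A quick way to get that kind of figure yourself (a rough sketch; both commands read straight from the array device):

```shell
# Sequential read of 1 GiB from the array, discarded;
# iflag=direct bypasses the page cache so the rate reflects the disks
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct

# hdparm gives a similar quick buffered-read timing
hdparm -t /dev/md0
```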

What about reliability? Well… it's RAID-5, so I may have to rebuild from time to time. So far I only get occasional USB reset events: transfers stall for ≈20 seconds, then resume. Of course, I get more resets during big transfers.
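To spot those resets and check whether a member got kicked out of the array afterwards (a sketch of what I look at):

```shell
# USB device resets show up in the kernel log
dmesg | grep -i 'usb.*reset'

# A healthy 4-disk array shows [4/4] [UUUU] in /proc/mdstat;
# a degraded one shows something like [4/3] [UUU_]
grep -A 2 md0 /proc/mdstat
```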

Comments (add one by sending me an email)

  • From Laurent

    As usual, each time I play with my RAID array, something goes wrong. This time it was a power outage caused by a storm. My server is of course plugged into a UPS, but the RAID hard drives are not. So the array lost all 4 disks at once, then they came back (after the outage ended). mdadm told me that 4 disks were down and 4 "spare" disks were available?!

    Since I didn't have time to dig in, I just rebooted the server (I knew nothing had been written to the disks, so the state should have been fine), and the array came back in a clean state. Easy.
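For reference, when a reboot is not enough after all members drop out at once, the usual recovery is a forced reassembly (a hedged sketch with my device names; --force is only safe when you are confident no writes were in flight):

```shell
# Stop the half-broken array, then force reassembly from its members;
# --force lets mdadm reconcile slightly out-of-sync event counts
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```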