Replacing all drives in a Terastation Live
This manual describes, step by step, the replacement of all drives in a Terastation Live. It may also apply to other Terastations; the only requirements are a RAID system based on mdadm and an XFS file system.
Old drives: 4x Samsung SP2504C 250GB SATA-II (as shipped in the Buffalo Terastation)
New drives: 4x Samsung HD501LJ 500GB SATA-II
We often start with a nearly full array, as shown here by the df command:
root@DATEN:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/md1                481.6M    306.9M    174.7M  64% /
/dev/ram1                15.0M    116.0k     14.9M   1% /mnt/ram
/dev/md0                281.0M     15.0M    265.9M   5% /boot
/dev/md2                695.6G    631.4G     64.2G  91% /mnt/array1
Obviously, we would like to enlarge the array without restoring from a backup, which is very time consuming; often we do not even have an appropriate backup device to store several hundred gigabytes.
- Shut the Terastation down via the web interface, the command line, or the power button
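From a root shell, assuming the usual shutdown command is available on the firmware (otherwise use the web interface or the power button), that would be:

shutdown -h now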
- Start with the 4th drive: remove it and replace it with another disk of the same size or bigger (you can follow the instructions from the Buffalo Terastation manual)
- Start the Terastation and it will show you a failed drive - don't panic :-)
- Now you have to gain root console access via telnet or SSH; do not try to repair the RAID arrays from the web interface
- Take a look at the partition table of a working hard drive like /dev/sda
fdisk -l /dev/sda

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          37      297171   83  Linux
/dev/sda2              38          99      498015   83  Linux
/dev/sda4             100       30401   243400815    5  Extended
/dev/sda5             100         116      136521   82  Linux swap
/dev/sda6             117       30390   243175873+  83  Linux
- Create an identical partition table on the newly inserted drive /dev/sdd with fdisk /dev/sdd - Important - if you insert a bigger drive, extend the extended partition as well as partition 6 to the available maximum
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1          37      297171   83  Linux
/dev/sdd2              38          99      498015   83  Linux
/dev/sdd4             100       60801   487588815    5  Extended
/dev/sdd5             100         116      136521   82  Linux swap
/dev/sdd6             117       60801   487452231   83  Linux
- don't forget to specify the partition type of partition 5 as Linux Swap (82)
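In fdisk the type is changed with the t command; the dialogue looks roughly like this (prompts may differ slightly between fdisk versions):

fdisk /dev/sdd
Command (m for help): t
Partition number (1-6): 5
Hex code (type L to list codes): 82
Command (m for help): w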
- apply mkswap to the fifth partition
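For the drive in this example that is:

mkswap /dev/sdd5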
- now rebuild all three RAID arrays (0-2); the steps are as follows
mdadm -a /dev/md0 /dev/sdd1
mdadm -a /dev/md1 /dev/sdd2
mdadm -a /dev/md2 /dev/sdd6
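Here -a is the short form of --add: each command hot-adds the matching partition of the new drive to the corresponding array, and md starts rebuilding it immediately.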
- now you can check if this was done correctly with
root@daten# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Jun 13 15:47:40 2007
     Raid Level : raid1
     Array Size : 297088 (290.17 MiB 304.22 MB)
    Device Size : 297088 (290.17 MiB 304.22 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Oct 30 07:39:29 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : ec67ebdb:01b029ce:000bb777:9d5fd144
         Events : 0.730

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
- The rebuild of arrays 0 and 1 finishes quickly, but the large array takes a while (the web interface gives you an approximation of the remaining rebuild time; it is usually more than 3-4 hours)
- Important - do not replace a further drive until the rebuild is done.
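You can verify this from the root shell - the rebuild is finished when every array shows [UUUU] and /proc/mdstat no longer contains a resync/recovery progress line:

cat /proc/mdstat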
- repeat all these steps, including the shutdown and restart, for each remaining drive
Growing the array and file system
- After all drives are replaced and the rebuild has finished, you can grow your array using
mdadm --grow /dev/md2 -z max
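Here -z is the short form of --size, so the equivalent long command is:

mdadm --grow /dev/md2 --size=max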
- the md driver automatically starts a resync after growing the array
root@DATEN:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Wed Jun 13 08:29:22 2007
     Raid Level : raid5
     Array Size : 1462356480 (1394.61 GiB 1497.45 GB)
    Device Size : 487452160 (464.87 GiB 499.15 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Oct 31 08:36:13 2007
          State : active, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 50% complete

           UUID : 05c76307:69dff621:a11d513a:e4478523
         Events : 0.1320002

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
       2       8       38        2      active sync   /dev/sdc6
       3       8       54        3      active sync   /dev/sdd6
- this step again takes about 5 hours
- the last step is growing the file system, which runs very quickly; just run the grow command sketched below
- you can do that even before the resync has finished
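Since the requirements call for an XFS file system and the array is mounted at /mnt/array1 (see the df output above), the grow command should be xfs_growfs, which by default grows a mounted XFS file system to the maximum available size:

xfs_growfs /mnt/array1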
- so we end up with an enlarged array without any data loss or backup :-)
root@DATEN:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/md1                481.6M    307.0M    174.6M  64% /
/dev/ram1                15.0M    116.0k     14.9M   1% /mnt/ram
/dev/md0                281.0M     15.0M    265.9M   5% /boot
/dev/md2                  1.4T    631.4G    763.1G  45% /mnt/array1
- you can watch the progress by using the web interface or looking into /proc/mdstat
root@DATEN:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md2 : active raid5 sda6 sdb6 sdd6 sdc6
      1462356480 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [============>........]  resync = 64.8% (315880576/487452160) finish=191.9min speed=14865K/sec
md1 : active raid1 sda2 sdb2 sdd2 sdc2
      497920 blocks [4/4] [UUUU]
md0 : active raid1 sda1 sdb1 sdd1 sdc1
      297088 blocks [4/4] [UUUU]

unused devices: <none>