Replacing all drives in a Terastation Live
This manual describes step by step the replacement of all drives in a Terastation Live. It may also apply to other Terastations. The only requirements are a RAID setup based on mdadm and an XFS file system.
This document not only outlines the steps necessary to swap the drives but also shows how to upgrade your Terastation's storage capacity without the need to back up all of its data. It is, however, important to note that you undertake the following at the risk of losing your data.
Original drives: Buffalo Terastation comprising 4x Samsung SP2504C 250GB SATA-II
New drives: 4x Samsung HD501LJ 500GB SATA-II
- Java Runtime Environment
- You will need console (telnet) access to your Terastation in order to complete this process. If you are not familiar with how to gain this access, please read (at least the first part of) the "Open Stock Firmware" page here on NAS-Central.
Our starting point is a nearly full array, as shown by the 'df -h' command:
root@DATEN:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/md1                481.6M    306.9M    174.7M  64% /
/dev/ram1                15.0M    116.0k     14.9M   1% /mnt/ram
/dev/md0                281.0M     15.0M    265.9M   5% /boot
/dev/md2                695.6G    631.4G     64.2G  91% /mnt/array1
- Shut the Terastation down via the web interface, command line or power button
- Start with the 4th drive: remove it and replace it with another disk of the same size or bigger (you can follow the instructions from the Buffalo Terastation manual)
- Start the Terastation and it will show you a failed drive - don't panic :-)
- Now you have to gain root console access via telnet or ssh. It is important that you do not try to repair the RAID arrays from the web interface!
- Take a look at the partition table of a working hard drive like /dev/sda
fdisk -l /dev/sda

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          37      297171   83  Linux
/dev/sda2              38          99      498015   83  Linux
/dev/sda4             100       30401   243400815    5  Extended
/dev/sda5             100         116      136521   82  Linux swap
/dev/sda6             117       30390   243175873+  83  Linux
- Create an identical partition table on the newly inserted drive (/dev/sdd) using 'fdisk /dev/sdd':
- Important - If you are upgrading to larger drives, you will want to extend the size of the extended partition and partition 6 to the maximum available size (last cylinder number).
- Create /dev/sdd1 & sdd2 as primary partitions number 1 & 2, respectively. Use starting & ending cylinder numbers to match /dev/sda1 & sda2 from your 'fdisk -l /dev/sda' output.
- Create /dev/sdd4 as an extended partition using the remaining cylinders.
- Create /dev/sdd5 & sdd6 as logical partitions, again matching the starting & ending cylinder numbers shown in your 'fdisk -l /dev/sda' output.
- Change partition number 5 to a Linux Swap partition. Press 'l' (lowercase L) within fdisk to verify that Linux Swap partitions are assigned hexadecimal 82. And then press 't' to change partition 5 to the appropriate type/number.
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1          37      297171   83  Linux
/dev/sdd2              38          99      498015   83  Linux
/dev/sdd4             100       60801   487588815    5  Extended
/dev/sdd5             100         116      136521   82  Linux swap
/dev/sdd6             117       60801   487452231   83  Linux
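As a shortcut (not part of the original procedure), the whole partition table of a healthy drive can also be copied in one go with sfdisk. This is a sketch; the device names are examples, and if you are upgrading to larger drives you must still edit the dump to extend partitions 4 and 6 before writing it back:

```shell
# Replicate the partition layout of a healthy drive onto the new drive.
# DEVICE NAMES ARE EXAMPLES -- verify them with 'fdisk -l' first.
SRC=/dev/sda   # a drive that is still part of the array
DST=/dev/sdd   # the freshly inserted drive (its contents will be destroyed)

sfdisk -d "$SRC" > /tmp/table.txt   # dump the layout as editable text
# (edit /tmp/table.txt here if the new drive is larger)
sfdisk "$DST" < /tmp/table.txt      # write the layout to the new drive
```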
- Apply mkswap to the fifth partition
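The mkswap step looks like this (the device name is an example; use the fifth partition of the drive you just replaced):

```shell
# Initialise the swap area on the new drive's fifth partition.
SWAPDEV=/dev/sdd5   # example -- adjust to the drive you replaced
mkswap "$SWAPDEV"
```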
- Now rebuild all three RAID arrays (md0-md2); the steps are as follows:
mdadm -a /dev/md0 /dev/sdd1
mdadm -a /dev/md1 /dev/sdd2
mdadm -a /dev/md2 /dev/sdd6
- Now you can check whether this worked correctly with:
root@daten# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Jun 13 15:47:40 2007
     Raid Level : raid1
     Array Size : 297088 (290.17 MiB 304.22 MB)
    Device Size : 297088 (290.17 MiB 304.22 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Oct 30 07:39:29 2007
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : ec67ebdb:01b029ce:000bb777:9d5fd144
         Events : 0.730

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
- The rebuilding of arrays 0 & 1 runs quickly, but the large array takes a while (the web interface gives you an estimate of the rebuild time; it is usually more than 3 - 4 hours).
- Important - do not replace any other drives until the array rebuilding is done.
- Repeat all these steps, including the shutdown and restart, for each remaining drive.
Growing the array and file system
- After all drives are replaced and the rebuild finished you can grow your storage array using
mdadm --grow /dev/md2 -z max
- The md driver automatically starts a resync after growing the array:
root@DATEN:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Wed Jun 13 08:29:22 2007
     Raid Level : raid5
     Array Size : 1462356480 (1394.61 GiB 1497.45 GB)
    Device Size : 487452160 (464.87 GiB 499.15 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Oct 31 08:36:13 2007
          State : active, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 50% complete

           UUID : 05c76307:69dff621:a11d513a:e4478523
         Events : 0.1320002

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
       2       8       38        2      active sync   /dev/sdc6
       3       8       54        3      active sync   /dev/sdd6
- This step will also take several hours (about 5 hours in this case)
- The last step is growing the file system itself, which runs very quickly; you can do this even before the resync has finished.
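Since the Terastation's data volume uses XFS, the file system is grown online with xfs_growfs. Note that xfs_growfs takes the mount point, not the block device; /mnt/array1 is the mount point shown in the df output above:

```shell
# Grow the mounted XFS file system to fill the enlarged array.
MOUNT=/mnt/array1   # data mount point as shown by 'df -h'
xfs_growfs "$MOUNT"
```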
- So we end up with an enlarged array without any data loss or need to back up the data! :-)
root@DATEN:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/md1                481.6M    307.0M    174.6M  64% /
/dev/ram1                15.0M    116.0k     14.9M   1% /mnt/ram
/dev/md0                281.0M     15.0M    265.9M   5% /boot
/dev/md2                  1.4T    631.4G    763.1G  45% /mnt/array1
- You can watch the progress via the web interface or by looking at /proc/mdstat:
root@DATEN:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md2 : active raid5 sda6 sdb6 sdd6 sdc6
      1462356480 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [============>........]  resync = 64.8% (315880576/487452160) finish=191.9min speed=14865K/sec

md1 : active raid1 sda2 sdb2 sdd2 sdc2
      497920 blocks [4/4] [UUUU]

md0 : active raid1 sda1 sdb1 sdd1 sdc1
      297088 blocks [4/4] [UUUU]

unused devices: <none>