TeraStation Larger Disks and RAID5

ARTICLE UNDER CONSTRUCTION

This article covers using disks larger than 500GB when you need RAID5 support.

If you do not need RAID5 support then look at the article called [TeraStation Larger Disks].

Although the focus in this article is on using disks larger than 500GB, the techniques described here should work with any size disk if you have requirements not met by the Standard Buffalo firmware.

=Background=
The largest disks supplied by Buffalo in PPC based TeraStations are 500GB, giving a maximum of 2TB (4x500GB) on a system. It is now possible to get IDE disks up to 750GB in size, and SATA disks up to 2.0TB in size. Larger disks may become available in the future. Many users would like to be able to use such disks to upgrade their TeraStation's capacity.

This article covers all the PPC based TeraStation models, which come with a Linux 2.4 kernel. This version of Linux has a limitation that a single file system cannot exceed 2TB. As Buffalo have assumed that normal use is a RAID array treating all 4 disks as a single file system, the largest disk supported by the standard Buffalo firmware is 500GB. This article covers techniques for using larger disks (albeit with some restrictions). The models covered are:
 * Original TeraStation
 * TeraStation Home Server
 * TeraStation Pro v1

The following ARM based TeraStation models are not covered. These systems come with a Linux 2.6 kernel, which does not suffer from the 2TB limit on a single file system. Although there is no reason to suspect that the instructions described in this article would not work, they should not be required, as the standard Buffalo firmware should handle larger disks without issues.
 * Terastation Live
 * TeraStation Pro v2

=SATA Disks on IDE Based Systems=
It is possible to use disks larger than 750GB on an IDE based system via a SATA/IDE converter, although you have to make sure you get one small enough to fit into the available space. The ones tested were obtained via eBay, and consisted of a small PCB the same size as the back of a 3.5" drive, with a SATA connector on one side and an IDE socket on the other. For large disks a SATA drive plus a converter is normally cheaper than the equivalent size IDE drive, so this is attractive from a price point of view.



The board plugged into the back of the drive and simply converted the connectors from SATA to IDE; there was just enough clearance inside the case to fit the power and IDE cables.

TIP: The cables are a very tight fit, but you can give yourself a little more space by drilling new holes in the cage that holds the drives about 1/4 inch nearer the front than the existing holes. This can only be done on the top part of the cage, but that is normally enough to hold the drive firmly.

=Approach=

This wiki article discusses an approach that can be used if RAID5 support is critical to you. If RAID5 is not critical, then you might find the approach discussed in the wiki article called TeraStation Larger Disks to be more suitable.

The approach discussed here is based on using telnet enabled firmware and setting up the RAID5 data arrays via manual commands issued during a telnet session.

Advantages

 * You can use drives larger than the 500GB maximum supported by the standard Buffalo firmware.
 * A single drive failure does not lose any data.
 * Recovering the system back to a fully working state after a drive failure is relatively simple.
 * The Buffalo provided firmware continues to be used as the basis of day-to-day operation of the system.
 * The Buffalo browser based GUI can still be used to manage nearly all aspects of the system such as users and groups.
 * The software upgrades available from the itimpi website can still be used with the system.
 * Buffalo firmware upgrades can still be used (although at this late date it is unlikely that they will provide new ones for these models as they have been superseded by the ARM based models).

Disadvantages

 * You have to use a telnet enabled firmware release rather than the standard Buffalo supplied ones.  Many might consider this to be an advantage rather than a disadvantage!
 * Manual steps are required to set up the RAID5 data arrays - the Buffalo Web GUI facilities cannot be used for this purpose.
 * Manual steps are required to recover from a disk failure - you cannot use the Buffalo GUI to achieve this.  However, as these are very similar to the steps required to set the system up in the first place, this will probably not be an issue.

=Telnet Enabled Firmware=
TeraStations internally run the Linux operating system. Buffalo hide this from the average user, providing the system "packaged" with a browser based GUI to control and configure it. Telnet enabled firmware allows users to log in using a telnet client to control and manipulate the system at the Linux command line level.

This allows users to do things like:
 * Configure the system at a more detailed level than allowed for by the Buffalo GUI.
 * Upgrade components of the underlying software to newer versions with additional features and/or bug fixes included.
 * Install new applications to extend the functionality of the Terastation.
 * In the event of any problems being encountered, allow a level of access that gives a better chance of recovering user data without loss.

The changes described in this article require the use of a telnet enabled release. Hopefully the instructions provided are detailed enough that users can carry out the steps without needing much Linux knowledge.

The standard software supplied with Buffalo PPC based TeraStations does not provide for telnet access. Telnet enabled releases of firmware corresponding to virtually all Buffalo firmware releases can be found at itimpi's website. These are identical in functionality to the corresponding Buffalo firmware releases - the modification to add telnet functionality being trivial. This means they will have exactly the same bugs (if any) as are found in the Buffalo releases.

The itimpi firmware releases are the ones that have been used while preparing this article. Firmware from other sources should work fine as long as it is telnet enabled.

To use telnet you need a telnet client. A rudimentary one is included with Windows, which you can invoke by typing a command of the following form into the Windows Run box:
 telnet TeraStation_address
The freeware client PuTTY is recommended as a much better alternative. In addition to the standard telnet protocol, PuTTY also supports the more secure SSH variant (although additional components need installing at the TeraStation end to support this).

TIP: If you already have a telnet enabled version of the firmware installed and you want to continue to use that version then you can run the Firmware updater in Debug mode and elect to not rewrite the linux kernel or boot image in flash, but merely update the hard disk. This is slightly safer as the flash chips have been known to fail.

=TeraStation use of Partitions and RAID arrays=
This section provides some simple background information on the way that the TeraStation partitions the disks and makes use of RAID arrays. Although it is probably not critical that you understand this section, it does help to make sense of the commands that are used later when setting up the partitions and RAID arrays.

Partition layout
The TeraStation models all come with 4 primary partitions set up.

Partition 1 (System Partition)
This partition is used to hold the Linux software that is used when the system is a normal running state.

The default size is around 380MB.

The 4 disks are set up with Partition 1 on each disk as part of a RAID1 array so that the system can still load as long as a single drive is operative.

Partition 2 (Swap Partition)
This is set up on each drive to be used as swap space. There is normally 128MB per drive allocated to this use, giving a total swap space of 512MB.

On some firmware releases, these partitions are not used; instead a 128MB file called .swapfile is created on partition 1 and used as swap space. This is OK for standard use, but it has the downside that there is less space available to install additional software such as the development tools or the OpenTera packages from the [itimpi] web site.

Partition 3 (Data Partition 1)
This is the partition that is normally used to hold the users data.

The 4 drives can be configured by the Buffalo GUI to use this partition as part of a RAID0, RAID1 or RAID5 array. Most users seem to use RAID5 to give a level of redundancy at the cost of only having the capacity of 3 drives available for data storage.

Partition 4 (Spare/Data Partition 2)
This partition is normally present and very small in size (typically around 8MB). However, if you install disks larger than 500GB you will find that the standard Buffalo firmware makes Partition 3 about 585GB and then makes Partition 4 much larger by using it to fill the remainder of the disk.

The normal purpose of Partition 4 is believed to be to allow Buffalo to use different brands of drive which have the same nominal disk capacity but vary slightly when one gets down to the fine detail of the exact number of sectors available.

In this document this partition is used instead as another data array and used to set up additional RAID5 space that can be used for data storage purpose.

Extended Partitions
As mentioned earlier, all TeraStations come as standard with the disk set up to use 4 primary partitions (which is the maximum number allowed by the partition table in the boot sector). It is not known whether the system can operate satisfactorily with Partition 4 set up as an Extended Partition instead of a primary one. Linux should be able to handle this without issues, so it becomes a case of how this would affect the Buffalo specific parts of the firmware.

The reason one might want to do this is to allow more than 2 RAID5 arrays to be created. This would allow all the space on 1.5TB disks to be used, and would also make it worthwhile to use 2TB disks.

There would almost certainly be implications on the Web GUI as it is designed with a maximum of 2 RAID arrays in mind.

Any feedback on this would be welcomed, and ideally should be incorporated into this article.

EDIT: It IS possible to set up partition 4 as an extended one; this is no problem for Linux. The Web GUI is not able to handle more than 2 arrays, but it will still recognize the second array in the extended partition, so feel free to use disks as large as you like, even if they are more than 2TB (if available).

RAID arrays
The TeraStation makes use of the software support for RAID arrays that is built into Linux. The version of linux used on the TeraStation supports the following RAID types:


 * RAID0: (striped disks) distributes data across several disks in a way that gives improved speed. If one disk fails, however, all of the data on the array will be lost, as there is neither parity nor mirroring. In this regard RAID 0 is somewhat of a misnomer, in that RAID 0 is non-redundant. A RAID 0 array requires a minimum of two drives. In the TeraStation this mode is called spanning and is used if you want all 4 drives to appear as a single larger drive with all their space available for use.  Note that if using this mode you are limited to 500GB disks because of the 2TB limit on a single file system inherent in Linux 2.4 kernels.


 * RAID1: mirrors the contents of the disks, making a form of 1:1 ratio realtime backup. The contents of each disk in the array are identical to that of every other disk in the array. A RAID 1 array requires a minimum of two drives. The system will continue to function as long as at least one of the drives in the array is working correctly.  The TeraStation uses this mode for Partition 1 (system partition), which is replicated on all 4 drives and accessed as /dev/md0.


 * RAID5: (striped disks with distributed parity) combines three or more disks in a way that protects data against the loss of any one disk. The storage capacity of the array is a function of the number of drives minus the space needed to store parity. This is the way most TeraStation users tend to have their data partitions configured to run as it gives protection against a single drive failure.
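As a quick check when planning capacity, the usable space of a RAID5 array works out as (number of drives - 1) times the per-drive partition size, since one drive's worth of space goes to the distributed parity. A minimal shell sketch of this arithmetic (the sizes here are illustrative, not tied to any particular model):

```shell
#!/bin/sh
# Usable RAID5 capacity = (drives - 1) * per-drive partition size.
# One drive's worth of space is consumed by the distributed parity.
drives=4
per_drive_gb=500          # size of the data partition on each drive

usable_gb=$(( (drives - 1) * per_drive_gb ))
echo "RAID5 over ${drives}x${per_drive_gb}GB partitions: ${usable_gb}GB usable"
```

For the standard 4x500GB TeraStation layout this gives 1500GB of usable space, matching the "capacity of 3 drives" figure mentioned above.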

The RAID arrays are administered by using the linux mdadm tool.

The Buffalo firmware assumes that up to two RAID arrays can be used and that they are accessed as /dev/md1 and /dev/md2. If using RAID5 or RAID0 it assumes that all 4 drives are used in the array.

=Setting up the RAID5 Arrays=
This section covers how to set up a single RAID5 array if you are using drives larger than 500GB. A single RAID5 array is limited to 2TB usable space. This means that the maximum amount of space that can be used on each drive in the first RAID5 array is a little under 750GB. Larger arrays are not possible due to the 2TB limit on a single file system imposed by the 2.4 kernel. If you are using larger drives you can, however, set up more RAID5 arrays as described later in this article.
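To see where the "little under 750GB" figure comes from: assuming the file-system limit is exactly 2^31 1K blocks (2,147,483,648), and that in a 4-drive RAID5 array three drives' worth of space hold data, each drive can contribute at most a third of the limit. A rough shell calculation (illustrative only; real partitioning also has to respect cylinder boundaries, which is why the figure used later is slightly lower):

```shell
#!/bin/sh
# 2TB file-system limit expressed in 1K blocks: 2 * 1024^3
limit_blocks=2147483648
data_drives=3             # 4-drive RAID5: 3 drives' worth of data

# Maximum 1K blocks the data partition may occupy on each drive
per_drive_blocks=$(( limit_blocks / data_drives ))
echo "Max data partition size: ${per_drive_blocks} 1K blocks"
# Convert to (decimal) GB, dividing first to stay within integer range
echo "That is roughly $(( per_drive_blocks / 1000 * 1024 / 1000000 ))GB per drive"
```

This works out at 715,827,882 blocks, roughly 733GB - just below the capacity of a 750GB drive, and in line with the per-partition limit quoted in step 4 below.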

The steps involved in setting up the first RAID5 array are:

0. You need to use a telnet enabled version of the firmware. When connecting via telnet you need to login with a username that has root privileges. With the itimpi variants of the firmware this will be the user myroot.

1. Start with 4 unpartitioned drives and flash the firmware to get the basic setup. Any RAID5 array setup at this stage will be limited to about 1.6TB in size as this is the maximum that the Buffalo provided firmware knows how to set up. Do not worry as we are going to change this in the subsequent steps described here.

2. View what's mounted:
 root@CONTROLS1:~# df -h
 Filesystem           Size  Used Avail Use% Mounted on
 /dev/shm               15M  120k   14M   1% /mnt/ram
 /dev/ram1              14M  211k   13M   2% /tmp
 /dev/md1              1.4T  5.8M  1.4T   1% /mnt/array1

3. Unmount the mounted RAID array:
 root@CONTROLS1:~# umount /dev/md1

4. Change each disk's (sda, sdb, sdc, sdd) partition table to give partition 3 the space to be used for the first RAID5 array, and partition 4 (and 5, 6 ...) any remaining space. First delete the existing partitions 3 and 4 and then recreate them with their new sizes. The size of partition 3 must not exceed 715,816,238 1K blocks - you may have to experiment a bit to work out what the start and end tracks need to be. Then set the partition types as shown in the example session below. The example below sets partition 3 to be 500GB and partition 4 the remaining space. Note: For even larger discs (1.5TB and up) set up partition 4 as extended and refer to step 4.1. Do not forget to repeat these steps for each of the 4 discs.
 root@CONTROLS1:~# mfdisk -c /dev/sda

 Command (m for help): d
 Partition number (1-4): 3

 Command (m for help): d
 Partition number (1-4): 4

 Command (m for help): n
 Command action
    e   extended
    p   primary partition (1-4)
 p
 Partition number (1-4): 3
 First cylinder (66-121601, default 66):
 Using default value 66
 Last cylinder or +size or +sizeM or +sizeK (66-121601, default 121601): +500000M

 Command (m for help): n
 Command action
    e   extended
    p   primary partition (1-4)
 p
 Partition number (1-4): 4
 First cylinder (63808-121601, default 63808):
 Using default value 63808
 Last cylinder or +size or +sizeM or +sizeK (63808-121601, default 121601):
 Using default value 121601

 Command (m for help): t
 Partition number (1-4): 2
 Hex code (type L to list codes): 82

 Command (m for help): p

    Device Boot    Start       End    Blocks   Id  System
 /dev/sda1            1        48    385528+  83  Linux
 /dev/sda2           49        65    136552+  82  Linux swap
 /dev/sda3           66     63807 512007615   83  Linux
 /dev/sda4        63808    121601 464230305   83  Linux

 Command (m for help): w
 The partition table has been altered!

 Re-read table failed with error 16: Device or resource busy.
 Reboot your system to ensure the partition table is updated.
 Syncing disks.

4.1. Creating more than 4 partitions. Note: This step can be skipped when using only 2 arrays. After creating partition 4 as extended you will not be prompted to enter the wanted size; instead you have to type 'n' again, after which you will be prompted for the size of partition 5, then partition 6 and so on. At this point you will have to install itimpi's updated [busybox] to get an mknod command available. Now type:
 ls -l /dev/sda*
and you will see which device nodes already exist for disc sda. sda6 will probably not exist, so add it:
 mknod -m 755 /dev/sda6 b 3 ??
The 755 is the bitmask for the access rights - just use the same value as the other partitions, and likewise for the 3. The ?? must be replaced by a number one higher than the highest you find for partitions on this disc. Example: if sda1 has the number 21, sda2 has 22 and sda5 has 25, then for sda6 you must enter 26. Now type ls -l /dev/sda* again and you should see the new device. Repeat this for all discs.

5. Reboot:
 root@CONTROLS1:/etc/rc.d/rc3.d# reboot

 Broadcast message from root (pts/0) Mon Aug 10 15:49:05 2009...

 The system is going down for reboot NOW !!

6. Create RAID arrays for 1TB discs. Note: For 1.5TB discs or even larger please skip to step 6.1.
 root@CONTROLS1:~# mdadm --create /dev/md1 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]3
 mdadm: array /dev/md1 started.
 root@CONTROLS1:~# mdadm --create /dev/md2 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]4
 mdadm: array /dev/md2 started.

6.1. Create RAID arrays for 1.5TB discs. Note: This step can be skipped if you are using 1TB discs. Make sure not to use partition 4 as a raid array!
 root@CONTROLS1:~# mdadm --create /dev/md1 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]3
 mdadm: array /dev/md1 started.
 root@CONTROLS1:~# mdadm --create /dev/md2 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]5
 mdadm: array /dev/md2 started.
 root@CONTROLS1:~# mdadm --create /dev/md3 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]6
 mdadm: array /dev/md3 started.

7. Edit fstab to look like below (learning how to use vi is left to you):
 root@CONTROLS1:~# vi /etc/fstab
 # /etc/fstab: static file system information.
 /dev/md0       /               auto    defaults,noatime                0 0
 proc           /proc           proc    defaults                        0 0
 /.swapfile     swap            swap    defaults                        0 0
 /dev/md1       /mnt/array1     xfs     rw,noatime                      0 0
 /dev/md2       /mnt/array2     xfs     rw,noatime                      0 0
For more than 2 arrays don't forget to add the line:
 /dev/md3       /mnt/array3     xfs     rw,noatime                      0 0
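When more than two arrays are in use it is easy to mistype the extra fstab entries, so a small helper can print them ready for pasting into /etc/fstab. This is a hypothetical sketch (the script name and its single array-count parameter are made up here); the md0, proc and swap lines at the top of the file stay as they are:

```shell
#!/bin/sh
# mkfstab.sh - print /etc/fstab entries for N data arrays (md1, md2, ...)
# in the layout used on the TeraStation.
arrays=${1:-3}

i=1
while [ $i -le $arrays ]; do
    printf '/dev/md%d       /mnt/array%d     xfs     rw,noatime                      0 0\n' $i $i
    i=$(( i + 1 ))
done
```

Running `sh mkfstab.sh 3` prints the three /dev/md lines; review them before adding them to the real file.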

8. Edit diskinfo to look like below (again, learning vi is up to you):
 root@CONTROLS1:/# vi /etc/melco/diskinfo
 array1=raid5
 array2=raid5
 disk1=array1
 disk2=array1
 disk3=array1
 disk4=array1
 usb_disk1=
 usb_disk2=
 usb_disk3=
 usb_disk4=
Again, for 3 arrays add the following as the 3rd line:
 array3=raid5

9. Format the raid arrays. Format all arrays you created above, starting from md1:
 root@CONTROLS1:/# mkfs.xfs -f /dev/md1
 mkfs.xfs: warning - cannot get sector size from block device /dev/md1: Invalid argument
 meta-data=/dev/md1               isize=256    agcount=367, agsize=1048576 blks
          =                       sectsz=512
 data     =                       bsize=4096   blocks=384005616, imaxpct=25
          =                       sunit=16     swidth=48 blks, unwritten=1
 naming   =version 2              bsize=4096
 log      =internal log           bsize=4096   blocks=32768, version=1
          =                       sectsz=512   sunit=0 blks
 realtime =none                   extsz=196608 blocks=0, rtextents=0

 root@CONTROLS1:/# mkfs.xfs -f /dev/md2
 mkfs.xfs: warning - cannot get sector size from block device /dev/md2: Invalid argument
 meta-data=/dev/md2               isize=256    agcount=333, agsize=1048576 blks
          =                       sectsz=512
 data     =                       bsize=4096   blocks=348172656, imaxpct=25
          =                       sunit=16     swidth=48 blks, unwritten=1
 naming   =version 2              bsize=4096
 log      =internal log           bsize=4096   blocks=32768, version=1
          =                       sectsz=512   sunit=0 blks
 realtime =none                   extsz=196608 blocks=0, rtextents=0

 root@CONTROLS1:/# mkfs.xfs -f /dev/md3
 .....

10. Mount the raid arrays. Mount all arrays you created above, starting from md1:
 root@CONTROLS1:/# mount /dev/md1
 root@CONTROLS1:/# mount /dev/md2
 root@CONTROLS1:/# mount /dev/md3

11. Create a startup script as follows. Note: For larger discs jump to 11.1.
 root@CONTROLS1:/# vi /etc/init.d/restart_my_array.sh
 #!/bin/sh
 echo "-- rebuild mdadm.conf for md1--"
 echo 'DEVICE /dev/ts_disk?_3' > /etc/mdadm.conf
 mdadm -Eb /dev/ts_disk?_3 >>/etc/mdadm.conf
 echo "-- rebuild mdadm.conf for md2--"
 echo 'DEVICE /dev/ts_disk?_4' >> /etc/mdadm.conf
 mdadm -Eb /dev/ts_disk?_4 >>/etc/mdadm.conf
 mdadm -As --force
 mount /dev/md2

11.1. Startup script for 3 arrays. Note: Can be skipped if you are using only 2 arrays.
 root@CONTROLS1:/# vi /etc/init.d/restart_my_array.sh
 #!/bin/sh
 echo "-- rebuild mdadm.conf for md1--"
 echo 'DEVICE /dev/ts_disk?_3' > /etc/mdadm.conf
 mdadm -Eb /dev/ts_disk?_3 >>/etc/mdadm.conf
 echo "-- rebuild mdadm.conf for md2--"
 echo 'DEVICE /dev/ts_disk?_5' >> /etc/mdadm.conf
 mdadm -Eb /dev/ts_disk?_5 >>/etc/mdadm.conf
 echo "-- rebuild mdadm.conf for md3--"
 echo 'DEVICE /dev/ts_disk?_6' >> /etc/mdadm.conf
 mdadm -Eb /dev/ts_disk?_6 >>/etc/mdadm.conf
 mdadm -As --force
 mount /dev/md2
 mount /dev/md3

12. Make the script executable:
 root@CONTROLS1:/etc/init.d# chmod +x restart_my_array.sh

13. Create a link to the startup script in rc3.d:
 root@CONTROLS1:/# ln -s /etc/init.d/restart_my_array.sh /etc/rc.d/rc3.d/S99z_restart_my_array
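Steps 11 to 13 can be rehearsed safely before touching the real system. The sketch below performs the same create / chmod / symlink sequence against a scratch directory, with $root standing in for / on the TeraStation and a deliberately abbreviated script body:

```shell
#!/bin/sh
# Rehearse the startup-script installation in a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/etc/init.d" "$root/etc/rc.d/rc3.d"

# Step 11: create the startup script (body shortened for the sketch)
cat > "$root/etc/init.d/restart_my_array.sh" << 'EOF'
#!/bin/sh
mdadm -As --force
mount /dev/md2
EOF

# Step 12: make it executable
chmod +x "$root/etc/init.d/restart_my_array.sh"

# Step 13: link it into rc3.d so it runs at boot
ln -s "$root/etc/init.d/restart_my_array.sh" \
      "$root/etc/rc.d/rc3.d/S99z_restart_my_array"

ls -l "$root/etc/rc.d/rc3.d"
rm -r "$root"
```

On the real system the only differences are that the paths start at / and the script body is the full one from step 11 (or 11.1).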

14. Reboot and enjoy. You should have 2 (or more) RAID5 arrays working and should be able to configure them and their shares from the web application.

14.1. The web application will not be able to configure shares for array3 and up. To easily access those arrays, just create a share for array1 (or array2) in the web GUI, then delete the folder that was created using the Linux shell (telnet) and create one in array3. Now make a symlink in array1 (or array2) with the same name as the one you just deleted and let it point to the folder you created in array3. Optional: I have also created a 4th share in array1 and made symlinks within that folder to all 3 arrays, so I only have to mount one share to get access to all arrays.

EDIT: Fixed step 4 to include changing partition 2 to a Linux swap partition. It would have still functioned without being set to swap, but better safe than sorry. The end result of step 4 should look like this:

 Disk /dev/hdg: 255 heads, 63 sectors, 91201 cylinders
 Units = cylinders of 16065 * 512 bytes

    Device Boot    Start       End    Blocks   Id  System
 /dev/hdg1            1        48    385528+  fd  Linux raid autodetect
 /dev/hdg2           49        65    136552+  82  Linux swap
 /dev/hdg3           66     89180 715816237+  fd  Linux raid autodetect
 /dev/hdg4        89181     91201  16233682+  fd  Linux raid autodetect

 * Use the approach described above for changing the partition sizes on each of the four disks.  The example above shows how this works out on a Seagate 750GB drive.
 * Format the new data partitions using a mkfs.xfs -f /dev/hd?3 style command.
 * Repeat the repartitioning and formatting of the new partitions for each of the four disks.
 * Use the Web GUI to create the RAID5 array and any wanted shares.

=Using Symbolic Links to give Single View of Arrays=

If you follow the normal process of setting up each array with its own share, then when working at the client level you will see each array independently. By using Linux symbolic links it is possible to make the contents of one array appear within the other array.

As an example:
 cd /mnt/array1/share
 ln -s /mnt/array2 _array2
This would make the contents of array2 appear under the '_array2' folder within the first array.
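The effect is easy to verify in a scratch directory before doing it on the real shares. A sketch using temporary paths in place of /mnt/array1 and /mnt/array2 (the file name is made up for the demonstration):

```shell
#!/bin/sh
# Simulate the two arrays with temporary directories.
root=$(mktemp -d)
mkdir -p "$root/array1/share" "$root/array2"
echo "hello" > "$root/array2/file.txt"

# The symbolic link makes array2's contents visible inside array1's share.
cd "$root/array1/share"
ln -s "$root/array2" _array2

cat "$root/array1/share/_array2/file.txt"   # prints "hello"
rm -r "$root"
```

Clients browsing the array1 share then see _array2 as an ordinary folder, with array2's contents inside it.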

=Recovering After a Drive Failure=
One of the big advantages of a RAID5 approach is that if a single drive fails, then your data is still intact. This section covers what needs to be done after such a failure to replace the failed drive and get the RAID5 array fully functional with 4 drives.

The standard Buffalo firmware will detect that an array has failed, but it will not be able to recover that array, due to the fact that the RAID arrays are not set up exactly as the Buffalo firmware expects. Instead, manual intervention is required along the same lines as was originally used to create the RAID5 arrays.

In the following commands replace the '?' by a, b, c or d to correspond to drive 1, 2, 3 or 4, depending on which drive you are trying to replace.

 * Insert the replacement disk.  It should be unformatted and have no partitions already allocated.
 * Boot the system and then login via telnet. You will get an indication that a disk has failed - you can ignore it at this time.
 * Re-partition the drive as described earlier.  If you are not sure of the sizes of the partitions then you can use the command
 mfdisk -c /dev/sda
 and use the 'p' command to see the partition details, then the 'q' command to quit. If it is drive 1 you are trying to replace, then use /dev/sdb instead to look at the settings on drive 2.
 * Add partition 1 back into the /dev/md0 array:
 mdadm /dev/md0 -f /dev/sd?1
 mdadm /dev/md0 -r /dev/sd?1
 mdadm /dev/md0 -a /dev/sd?1
 * Format partition 2 for swap purposes:
 mkswap /dev/sd?2
 * Format partitions 3 and up:
 mkfs.xfs -f /dev/sd?3
 mkfs.xfs -f /dev/sd?4
 If partition 4 is an extended partition, format partition 5 and up instead of partition 4.
 * Add partition 3 back into the /dev/md1 array:
 mdadm --manage --add /dev/md1 /dev/sd?3
 * Add partition 4 (or 5 and 6) (if required) back into the /dev/md2 (and /dev/md3) array:
 mdadm --manage --add /dev/md2 /dev/sd?4
 * Check if the System partition (/dev/sd?1) has finished re-building:
 mdadm --detail /dev/md0
 * Reboot the system:
 reboot
 * System should now boot as normal.
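Since the recovery commands differ only in the drive letter, a small helper can print the whole sequence for the drive being replaced, ready to review and paste into the telnet session. This is a hypothetical sketch (the recover_cmds.sh name is made up here, and it assumes the 2-array layout with data on partitions 3 and 4):

```shell
#!/bin/sh
# recover_cmds.sh - print the recovery command sequence for a replaced drive.
# Usage: sh recover_cmds.sh a    (a = drive 1, b = drive 2, ...)
d=${1:-a}

cat << EOF
mdadm /dev/md0 -f /dev/sd${d}1
mdadm /dev/md0 -r /dev/sd${d}1
mdadm /dev/md0 -a /dev/sd${d}1
mkswap /dev/sd${d}2
mkfs.xfs -f /dev/sd${d}3
mkfs.xfs -f /dev/sd${d}4
mdadm --manage --add /dev/md1 /dev/sd${d}3
mdadm --manage --add /dev/md2 /dev/sd${d}4
mdadm --detail /dev/md0
EOF
```

Running `sh recover_cmds.sh c` prints the sequence for drive 3. The script only prints the commands; nothing is executed until you paste them, which keeps a typo in the drive letter from destroying a good drive.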

=Flashing with new Firmware versions=
The tests done show that it should be possible to flash a TeraStation set up as described in this article without any issues. However, caution should be taken: this is a non-standard setup and this cannot be guaranteed to be true in all cases.

If you attempt a flash upgrade on a system that has been set up as described in this article and the firmware updater program gives any warnings about invalid partition structure, or wants to format any of the disks, you should abandon the firmware update, as otherwise you will almost certainly lose data.

=Scripts for Automating Process=
The steps involved are a little error prone, so the following scripts can be used to automate the process. They can also serve as further examples of the steps that are required to get everything working.

After each of the scripts has been created, you need to ensure that they are set to be executable by issuing a command of the form:
 chmod +x scriptname

These scripts are not yet finished and are still under development. In the meantime you should be able to carry out the requisite process using the manual steps described in the earlier sections.

/usr/local/sbin/prepare_disk
This script is used to prepare a disk ready for it to be added to the RAID5 arrays.
 #!/bin/sh
 # This is a custom script for the PPC based Terastations
 # The script partitions a disc into the format used by the TS
 # The partition structure should match the one below
 # which shows the details for a 1TB disc.
 # Partition 1 & 2 are system partitions used by the TS
 # Partitions 3 & 4 share the rest of the drive space by assigning
 # partition 3 500GB and partition 4 the rest.
 # Example partition structure
 #   Device Boot    Start       End    Blocks   Id  System
 #   /dev/sda1             1        48    385528+  83  Linux
 #   /dev/sda2            49        65    136552+  82  Linux swap
 #   /dev/sda3            66     63807 512007615   83  Linux
 #   /dev/sda4         63808    121601 464230305   83  Linux

 # Disk choice menu
 clear
 echo "-"
 echo "--- Terastation Drive Replacement Script ---"
 echo "-"
 echo ""
 echo "This script formats a new 1TB drive to enable its use by the"
 echo "Terastation firmware."
 echo ""
 echo "Please choose the new disk location in NAS:"
 echo "[a] Disk 1"
 echo "[b] Disk 2"
 echo "[c] Disk 3"
 echo "[d] Disk 4"
 echo "[q] Quit"
 echo ""
 echo -n "Disk [a,b,c,d,q]: "
 read disk

 # Check disk choice and set variables
 # if not valid then error & exit
 case $disk in
 "a")  disk_no="Disk 1";;
 "b")  disk_no="Disk 2";;
 "c")  disk_no="Disk 3";;
 "d")  disk_no="Disk 4";;
 "q")  echo "Exiting Script ..."
       exit 1;;
 *)    echo "Invalid input, must be [a,b,c,d,q]"
       echo "Exiting Script ...."
       exit 1;;
 esac

 # Now set the device dependent on the TeraStation type
 case `grep "PRODUCTID" /etc/linkstation_release | cut -d= -f2` in
 "0x00002001")
       echo "[INFO]   Seems to be a TeraStation (original)"
       case $disk in
       "a")  disk_ltr="/dev/hda";;
       "b")  disk_ltr="/dev/hdc";;
       "c")  disk_ltr="/dev/hde";;
       "d")  disk_ltr="/dev/hdg";;
       esac
       ;;
 "0x00002002")
       echo "[INFO]   Seems to be a TeraStation Pro"
       case $disk in
       "a")  disk_ltr="/dev/sda";;
       "b")  disk_ltr="/dev/sdb";;
       "c")  disk_ltr="/dev/sdc";;
       "d")  disk_ltr="/dev/sdd";;
       esac
       ;;
 "0x00002003")
       echo "[INFO]   Seems to be a TeraStation Home Server"
       case $disk in
       "a")  disk_ltr="/dev/hda";;
       "b")  disk_ltr="/dev/hdc";;
       "c")  disk_ltr="/dev/hde";;
       "d")  disk_ltr="/dev/hdg";;
       esac
       ;;
 *)
       echo "[ERROR] TeraStation type not recognized"
       echo "Exiting Script ...."
       exit 1
       ;;
 esac


 part3=$disk_ltr"3"
 part4=$disk_ltr"4"

 # Check disk size and set variables
 # if not valid then error & exit
 case $disk in
 "a")  disk_size="640GB";;
 "b")  disk_size="750GB";;
 "c")  disk_size="1.0TB";;
 "d")  disk_size="1.5TB";;
 "e")  disk_size="2.0TB";;
 "q")  echo "Exiting Script ..."
       exit 1;;
 *)    echo "Invalid input, must be [a,b,c,d,e,q]"
       echo "Exiting Script ...."
       exit 1;;
 esac

 # Warn user and proceed if he accepts
 echo ""
 echo "You chose $disk_no"
 echo "The script will now PARTITION and FORMAT the disk"
 echo "Any existing contents will be destroyed"
 echo -n "Are you sure you chose the right drive and want to continue [y/n]? "
 read user_ok
 echo ""

 case $user_ok in
 "y")  ;;
 "n")  echo "Exiting Script ..."
       exit 1;;
 *)    echo "Invalid input, must be [y/n]"
       echo "Exiting Script ..."
       exit 1;;
 esac

 # Drive partitioning using mfdisk
 clear
 echo "-"
 echo "       The script will now repartition the drive"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""

 mfdisk -c $disk_ltr << EOF
 p
 o
 n
 p
 1
 1
 48
 n
 p
 2
 49
 65
 n
 p
 3
 66
 +500000M
 n
 p
 4


 t
 2
 82
 p
 w
 EOF

 echo ""
 echo "       Recovering System partitions"
 echo ""
 echo "System partition '${disk_ltr}1' as part of /dev/md0"
 mdadm /dev/md0 -f ${disk_ltr}1
 mdadm /dev/md0 -r ${disk_ltr}1
 mdadm /dev/md0 -a ${disk_ltr}1
 echo "Swap partition '${disk_ltr}2'"
 mkswap ${disk_ltr}2

 # Format drive using mkfs.xfs
 clear
 echo "-"
 echo "       The script will now format the drive"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""
 mkfs.xfs -f $part3
 mkfs.xfs -f $part4

/usr/local/sbin/create_arrays
This script is used to create the initial RAID5 arrays once the disks have been partitioned and formatted. It makes use of the prepare_disk script to handle the partitioning and formatting of each disk.
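The create_arrays script itself is not reproduced in this article. Purely as an illustration of the core step such a script performs, the sketch below builds the mdadm commands and only prints them; the array names (/dev/md1, /dev/md2) and SATA device names (as on a TeraStation Pro) are assumptions, not the actual script contents.

```shell
#!/bin/sh
# Illustrative sketch only - NOT the actual create_arrays script.
# Build two 4-disk RAID5 arrays, one from partition 3 and one from
# partition 4 of each drive, so that each file system stays below
# the 2TB limit of the Linux 2.4 kernel. Device names are assumptions.
CREATE_MD1="mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3"
CREATE_MD2="mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4"

# Dry run: print the commands instead of executing them
echo "$CREATE_MD1"
echo "$CREATE_MD2"
# To run for real: eval "$CREATE_MD1" && eval "$CREATE_MD2",
# then format each array with mkfs.xfs as the scripts below do
```

Each array is then formatted and mounted separately, which is why two arrays are used instead of one spanning the whole disks.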

/usr/local/sbin/recover_arrays
This script is used to recover the RAID5 arrays after a single disk has failed. It makes use of the prepare_disk script to handle the partitioning and formatting of the replacement disk.

Since I will not be the end-user of the NAS, I wanted an easier way of modifying the partitions and formatting them in case a drive fails and a new one needs to be configured to match the modifications above.
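As with create_arrays, the recovery step boils down to re-adding the freshly prepared partitions to the degraded arrays. The sketch below only prints the commands; the device names (Disk 1 on a TeraStation Pro) and array names are assumptions for illustration.

```shell
#!/bin/sh
# Illustrative sketch only - NOT the actual recover_arrays script.
# Re-add the data partitions of a replaced Disk 1 to the degraded
# arrays; the md driver then rebuilds them in the background.
ADD_MD1="mdadm --manage --add /dev/md1 /dev/sda3"
ADD_MD2="mdadm --manage --add /dev/md2 /dev/sda4"

# Dry run: print the commands instead of executing them
echo "$ADD_MD1"
echo "$ADD_MD2"
# Rebuild progress can then be watched with: cat /proc/mdstat
```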

I created the following script (add_disk) and placed it in /bin/. When a drive fails, a user can telnet into the NAS and simply run it against the replacement drive. Upon reboot the drive should be seen and rebuilt by the TSP.

I tested it by removing the drive from the array, deleting all partitions, and then running the script.
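For anyone wanting to repeat that test, the failure can be simulated with mdadm before running the script. The sketch below only prints the commands; the device names (Disk 1 on a TeraStation Pro) are assumptions, and it uses the same short options (-f, -r) as the scripts in this article.

```shell
#!/bin/sh
# Illustrative sketch only: mark Disk 1's data partitions as failed
# and remove them from the arrays to simulate a dead drive.
# Device and array names are assumptions.
FAIL_MD1="mdadm /dev/md1 -f /dev/sda3"
DROP_MD1="mdadm /dev/md1 -r /dev/sda3"
FAIL_MD2="mdadm /dev/md2 -f /dev/sda4"
DROP_MD2="mdadm /dev/md2 -r /dev/sda4"

# Dry run: print the commands instead of executing them
echo "$FAIL_MD1"; echo "$DROP_MD1"
echo "$FAIL_MD2"; echo "$DROP_MD2"
```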

As always, it is attached below for anyone who needs it, but please let me know of your personal experience. DISCLAIMER: USE AT YOUR OWN RISK - I'm a newbie who needed this functionality and did my best to implement it.


 #!/bin/sh
 # This is a custom script for the PPC based Terastations
 # The script partitions a disc into the format used by the TS
 # The partition structure should match the one below,
 # which shows the details for a 1TB disc.
 # Partitions 1 & 2 are system partitions used by the TS
 # Partitions 3 & 4 share the rest of the drive space by assigning
 # partition 3 500GB and partition 4 the rest.
 # Example partition structure
 #   Device Boot    Start       End    Blocks   Id  System
 #   /dev/sda1             1        48    385528+  83  Linux
 #   /dev/sda2            49        65    136552+  82  Linux swap
 #   /dev/sda3            66     63807 512007615   83  Linux
 #   /dev/sda4         63808    121601 464230305   83  Linux

 # Disk choice menu
 clear
 echo "-"
 echo " Terastation Drive Replacement Script ---"
 echo "-"
 echo ""
 echo "This script formats a new 1TB drive to enable its use by the"
 echo "Terastation firmware."
 echo ""
 echo "Please choose the new disk location in NAS:"
 echo "[a] Disk 1"
 echo "[b] Disk 2"
 echo "[c] Disk 3"
 echo "[d] Disk 4"
 echo "[q] Quit"
 echo ""
 echo -n "Disk [a,b,c,d,q]: "
 read disk

 # Check disk choice and set variables
 # if not valid then error & exit
 case $disk in
 "a")    disk_no="Disk 1";;
 "b")    disk_no="Disk 2";;
 "c")    disk_no="Disk 3";;
 "d")    disk_no="Disk 4";;
 "q")    echo "Exiting Script ..."
         exit 1;;
 *)      echo "Invalid input, must be [a,b,c,d,q]"
         echo "Exiting Script ...."
         exit 1;;
 esac
 
 # Now set the device dependent on the TeraStation type
 case `grep "PRODUCTID" /etc/linkstation_release | cut -d= -f2` in
 "0x00002001")
         echo "[INFO]   Seems to be a TeraStation (original)"
         case $disk in
         "a")    disk_ltr="/dev/hda";;
         "b")    disk_ltr="/dev/hdc";;
         "c")    disk_ltr="/dev/hde";;
         "d")    disk_ltr="/dev/hdg";;
         esac
         ;;
 "0x00002002")
         echo "[INFO]   Seems to be a TeraStation Pro"
         case $disk in
         "a")    disk_ltr="/dev/sda";;
         "b")    disk_ltr="/dev/sdb";;
         "c")    disk_ltr="/dev/sdc";;
         "d")    disk_ltr="/dev/sdd";;
         esac
         ;;
 "0x00002003")
         echo "[INFO]   Seems to be a TeraStation Home Server"
         case $disk in
         "a")    disk_ltr="/dev/hda";;
         "b")    disk_ltr="/dev/hdc";;
         "c")    disk_ltr="/dev/hde";;
         "d")    disk_ltr="/dev/hdg";;
         esac
         ;;
 *)      echo "[ERROR] TeraStation type not recognized"
         echo "Exiting Script ...."
         exit 1
         ;;
 esac

 part3=$disk_ltr"3"
 part4=$disk_ltr"4"

 # Warn user and proceed if he accepts
 echo ""
 echo "You chose $disk_no"
 echo "The script will now PARTITION and FORMAT the disk"
 echo "Any existing contents will be destroyed"
 echo -n "Are you sure you chose the right drive and want to continue [y/n]? "
 read user_ok
 echo ""

 case $user_ok in
 "y")    ;;
 "n")    echo "Exiting Script ..."
         exit 1;;
 *)      echo "Invalid input, must be [y/n]"
         echo "Exiting Script ..."
         exit 1;;
 esac

 # Drive partitioning using mfdisk
 clear
 echo "-"
 echo "       The script will now repartition the drive"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""

 # mfdisk answers: print the table, create a new empty table, then
 # create primary partitions 1 (cylinders 1-48), 2 (49-65),
 # 3 (from cylinder 66, 500GB) and 4 (the rest, accepting the
 # defaults), set partition 2 to type 82 (swap), print and write
 mfdisk -c $disk_ltr << EOF
 p
 o
 n
 p
 1
 1
 48
 n
 p
 2
 49
 65
 n
 p
 3
 66
 +500000M
 n
 p
 4
 
 
 t
 2
 82
 p
 w
 EOF

 # Re-add partition 1 to the /dev/md0 system array and re-create swap
 echo ""
 echo "       Recovering System partitions"
 echo ""
 echo "System partition '${disk_ltr}1' as part of /dev/md0"
 mdadm /dev/md0 -f ${disk_ltr}1
 mdadm /dev/md0 -r ${disk_ltr}1
 mdadm /dev/md0 -a ${disk_ltr}1
 echo "Swap partition '${disk_ltr}2'"
 mkswap ${disk_ltr}2

 # Format drive using mkfs.xfs
 clear
 echo "-"
 echo "       The script will now format the drive"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""
 mkfs.xfs -f $part3
 mkfs.xfs -f $part4

 # Add partitions back to the RAID arrays
 clear
 echo "-"
 echo "     The script will now add drive back to arrays"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""
 mdadm --manage --add /dev/md1 $part3
 mdadm --manage --add /dev/md2 $part4

 # Reboot the NAS using reboot
 clear
 echo "-"
 echo "       The script will now restart the NAS"
 echo "-"
 echo ""
 echo -n "Press any key to continue ... "
 read wait_ok
 echo ""
 reboot