TeraStation Larger Disks and RAID5
ARTICLE UNDER CONSTRUCTION
This article covers using disks larger than 500GB when you need RAID5 support.
If you do not need RAID5 support then look at the article called [TeraStation Larger Disks]
Although the focus in this article is on using disks larger than 500GB, the techniques described here should work with any size disk if you have requirements not met by the Standard Buffalo firmware.
Contents
- 1 Background
- 2 SATA Disks on IDE Based Systems
- 3 Approach
- 4 Telnet Enabled Firmware
- 5 TeraStation use of Partitions and RAID arrays
- 6 Setting up the RAID5 Arrays
- 7 Using Symbolic Links to give Single View of Arrays
- 8 Recovering After a Drive Failure
- 9 Flashing with new Firmware versions
- 10 Scripts for Automating Process
Background
The largest disks used in PPC based TeraStations as supplied by Buffalo are 500GB, giving a maximum of 2TB (4x500GB) on a system. It is now possible to get IDE disks up to 750GB in size, and SATA disks up to 2.0TB in size. Larger disks may become available in the future. Many users would like to be able to use such disks to upgrade their TeraStation's capacity.
The systems that are currently covered by this article include all the PPC based TeraStation Models:
- Original TeraStation
- TeraStation Home Server
- TeraStation Pro v1
These all come with a Linux 2.4 kernel. This version of Linux has a limitation whereby a single file system cannot exceed 2TB. As Buffalo have assumed that normal use is a RAID array treating all 4 disks as a single file system, the largest disk supported by the standard Buffalo firmware is 500GB. This article covers techniques for using disks larger than this (albeit with some restrictions).
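As a quick check, the kernel version (and hence whether the 2TB limit applies) can be confirmed from a telnet session with `uname -r`. The small helper below is an illustrative sketch, not part of the Buffalo firmware:

```shell
#!/bin/sh
# Report whether a kernel version string is subject to the 2.4-era
# 2TB single-filesystem limit (2.6 and later kernels are not).
has_2tb_limit() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    if [ "$major" -lt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -le 4 ]; }; then
        echo yes
    else
        echo no
    fi
}

has_2tb_limit "2.4.20"        # a PPC TeraStation kernel -> yes
has_2tb_limit "$(uname -r)"   # whatever kernel you are running on
```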
The following ARM based TeraStation models are not covered:
- Terastation Live
- TeraStation Pro v2
These systems come with a Linux 2.6 kernel. This does not suffer from the limit of 2TB in a single file system. Although there is no reason to suspect that the instructions described in this article would not work they should not be required as the standard Buffalo firmware should handle larger disks without issues.
SATA Disks on IDE Based Systems
The original TeraStation and the TeraStation Home Server are IDE based. It is possible to use disks larger than 750GB on an IDE based system via a SATA/IDE converter, although you have to make sure you get one small enough to fit into the available space. The ones tested were obtained via eBay, and consisted of a small PCB that was the same size as the back of a 3.5" drive with a SATA connector on one side and an IDE socket on the other as shown below:
The board plugged into the back of the drive and simply converted the connectors from SATA to IDE and there was just enough clearance inside the case to fit the power and IDE cables. For large disks a SATA drive plus a converter is normally cheaper than the equivalent size IDE drive, so this is attractive from a price point of view.
Approach
This wiki article discusses an approach that can be used if the use of RAID5 arrays is critical to you. If RAID5 is not critical then you might find the approach discussed in the wiki article called TeraStation Larger Disks to be more suitable.
The approach discussed here is based on using telnet enabled firmware and setting up the RAID5 arrays used to store data via manual commands issued during a telnet session.
Advantages
- You can use drives larger than the 500GB maximum supported by the standard Buffalo firmware.
- A single drive failure does not lose any data.
- Recovering the system back to a fully working state after a drive failure is relatively simple.
- The Buffalo provided firmware continues to be used as the basis of day-to-day operation of the system
- The Buffalo browser based GUI can still be used to manage nearly all aspects of the system such as users and groups.
- The software upgrades available from the itimpi website can still be used with the system.
- Buffalo firmware upgrades can still be used (although at this late date it is unlikely that they will provide new ones for these models as they have been superseded by the ARM based models).
Disadvantages
- You have to use a telnet enabled firmware release rather than the standard Buffalo supplied ones. Many might consider this to be an advantage rather than a disadvantage!
- Manual steps are required to set up the RAID5 data arrays - the Buffalo Web GUI facilities cannot be used for this purpose.
- Manual steps are required to recover from a disk failure - you cannot use the Buffalo GUI to achieve this. However, as these are very similar to the steps required to set the system up in the first place, this will probably not be an issue.
Telnet Enabled Firmware
TeraStations internally run the Linux operating system. Buffalo hide this from the average user, providing the system "packaged" with a browser based GUI to control and configure it. Telnet enabled firmware allows users to log in using a telnet client to control and manipulate the system at the Linux command line level.
This allows users to do things like:
- Configure the system at a more detailed level than allowed for by the Buffalo GUI.
- Upgrade components of the underlying software to newer versions with additional features and/or bug fixes included.
- Install new applications to extend the functionality of the Terastation.
- In the event of problems being encountered, allow for a level of access that gives a better chance of recovering user data without loss.
The changes described in this article require the use of a telnet enabled release. Hopefully the instructions provided are detailed enough that users can carry out the steps without needing much Linux knowledge.
The standard software supplied with Buffalo PPC based TeraStations does not provide for telnet access. Telnet enabled releases of firmware corresponding to virtually all Buffalo firmware releases can be found at itimpi's website. These are identical in functionality to the corresponding Buffalo firmware releases - the modification to add telnet functionality being trivial. This means they will have exactly the same bugs (if any) as are found in the Buffalo releases.
The itimpi firmware releases are the ones that have been used while preparing this article. Firmware from other sources should work fine as long as it is telnet enabled.
To use telnet you need a telnet client. A rudimentary one is included with Windows, which you can invoke by typing a command of the following form into the Windows Run box:
telnet TeraStation_address
A freeware one called PuTTY is recommended as a much better alternative. In addition to the standard telnet protocol PuTTY also supports the more secure SSH variant (although additional components need installing at the TeraStation end to support this).
TIP: If you already have a telnet enabled version of the firmware installed and you want to continue to use that version then you can run the firmware updater in Debug mode and elect not to rewrite the Linux kernel or boot image in flash, but merely update the hard disk. This is slightly safer as the flash chips have been known to fail.
TeraStation use of Partitions and RAID arrays
This section provides some simple background information on the way that the TeraStation partitions the disk and the way it makes use of RAID arrays. Although it is probably not critical that you understand this section, it does help to make sense of the commands that are used later when setting up the partitions and RAID arrays.
Partition layout
- Partition 1 (System Partition)
- Partition 2 (Swap Partition)
- Partition 3 (Data Partition 1)
- Partition 4 (Data Partition 2)
Setting up the RAID5 Arrays
This section covers how to set up a single RAID5 array if you are using drives larger than 500GB. A single RAID5 array is limited to 2TB of usable space. This means that the maximum amount of space that can be used on each drive in the first RAID5 array is a little under 750GB. Larger arrays are not possible due to the 2TB limit on a single file system imposed by the 2.4 kernel. If you are using larger drives you can, however, set up a second RAID5 array as described later in this article.
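The figure of "a little under 750GB" follows from simple arithmetic: a RAID5 array of 4 drives stores 3 drives' worth of data, so each drive may contribute at most a third of the 2TB (2147483648 1K-block) filesystem limit. A quick sketch of the calculation:

```shell
#!/bin/sh
# Maximum data space per drive in a 4-disk RAID5 array, bounded by
# the 2.4 kernel's 2TB single-filesystem limit.
limit_kib=2147483648            # 2TB expressed in 1K blocks
data_disks=3                    # RAID5 on 4 drives stores 3 drives of data
per_drive_kib=$(( limit_kib / data_disks ))
echo "$per_drive_kib"           # -> 715827882 1K blocks, i.e. roughly 715GB
```

The 715,816,238-block partition ceiling quoted below sits just under this bound because partitions must end on a cylinder boundary.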
The steps involved in setting up the first RAID5 array are:
- You need to use a telnet enabled version of the firmware.
- Start with 4 unpartitioned drives and flash the firmware to get the basic setup. Any RAID5 array set up at this stage will be limited to about 1.6TB in size, as this is the maximum that the Buffalo provided firmware knows how to set up. Do not worry, as we are going to change this in the subsequent steps described here.
- View what's mounted:
root@CONTROLS1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/shm               15M  120k   14M   1% /mnt/ram
/dev/ram1              14M  211k   13M   2% /tmp
/dev/md1              1.4T  5.8M  1.4T   1% /mnt/array1
- Unmount mounted raid array:
root@CONTROLS1:~# umount /dev/md1
- Change each disk's (sda, sdb, sdc, sdd) partition table to give partition 3 the space to be used for the first RAID5 array, and partition 4 any remaining space. First delete the existing partitions 3 and 4 and then recreate them with their new sizes. The size of partition 3 must not exceed 715,816,238 1K blocks - you may have to experiment a bit to work out what the start and end tracks need to be. Then set the type for partitions 3 and 4 to be 'fd'.
The example below is setting partition 3 to be 500GB & partition 4 the remaining space:
root@CONTROLS1:~# mfdisk -c /dev/sda
Command (m for help): d
Partition number (1-4): 3
Command (m for help): d
Partition number (1-4): 4
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (66-121601, default 66): 66
Last cylinder or +size or +sizeM or +sizeK (66-121601, default 121601): +500000M
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 4
First cylinder (63808-121601, default 63808):
Using default value 63808
Last cylinder or +size or +sizeM or +sizeK (63808-121601, default 121601):
Using default value 121601
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Command (m for help): p

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1        48    385528+  83  Linux
/dev/sda2            49        65    136552+  82  Linux swap
/dev/sda3            66     63807 512007615   83  Linux
/dev/sda4         63808    121601 464230305   83  Linux

Command (m for help): w
The partition table has been altered!
Re-read table failed with error 16: Device or resource busy.
Reboot your system to ensure the partition table is updated.
Syncing disks.
5. Reboot:
root@CONTROLS1:/etc/rc.d/rc3.d# reboot
Broadcast message from root (pts/0) Mon Aug 10 15:49:05 2009...
The system is going down for reboot NOW !!
6. Create Raid arrays:
root@CONTROLS1:~# mdadm --create /dev/md1 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]3
mdadm: array /dev/md1 started.
root@CONTROLS1:~# mdadm --create /dev/md2 --level=5 --raid-devices=4 --force --run /dev/sd[a-d]4
mdadm: array /dev/md2 started.
7. Edit fstab to look like below (learning how to use vi is left to you)
root@CONTROLS1:~# vi /etc/fstab
# /etc/fstab: static file system information.
#
# <file system>  <mount point>  <type>  <options>         <dump>  <pass>
/dev/md0         /              auto    defaults,noatime  0       0
proc             /proc          proc    defaults          0       0
/.swapfile       swap           swap    defaults          0       0
/dev/md1         /mnt/array1    xfs     rw,noatime        0       0
/dev/md2         /mnt/array2    xfs     rw,noatime        0       0
8. Edit diskinfo to look like below (again learning vi is up to you)
root@CONTROLS1:/# vi /etc/melco/diskinfo
array1=raid5
array2=raid5
disk1=array1
disk2=array1
disk3=array1
disk4=array1
usb_disk1=
usb_disk2=
usb_disk3=
usb_disk4=
9. Format raid arrays
root@CONTROLS1:/# mkfs.xfs -f /dev/md1
mkfs.xfs: warning - cannot get sector size from block device /dev/md1: Invalid argument
meta-data=/dev/md1               isize=256    agcount=367, agsize=1048576 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=384005616, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=196608 blocks=0, rtextents=0
root@CONTROLS1:/# mkfs.xfs -f /dev/md2
mkfs.xfs: warning - cannot get sector size from block device /dev/md2: Invalid argument
meta-data=/dev/md2               isize=256    agcount=333, agsize=1048576 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=348172656, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=196608 blocks=0, rtextents=0
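As a sanity check you can confirm that the block counts reported by mkfs.xfs stay under the kernel's limit. This small helper is an illustrative sketch; the block counts fed to it are the bsize=4096 "blocks=" values from the output above:

```shell
#!/bin/sh
# Check that an XFS filesystem (counted in 4K blocks, as reported by
# mkfs.xfs) fits under the 2.4 kernel's 2TB (2147483648 KiB) limit.
under_2tb() {
    kib=$(( $1 * 4 ))           # convert 4K blocks to 1K blocks
    if [ "$kib" -lt 2147483648 ]; then echo OK; else echo "TOO BIG"; fi
}

under_2tb 384005616   # /dev/md1 above -> OK
under_2tb 348172656   # /dev/md2 above -> OK
```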
10. Mount raid arrays
root@CONTROLS1:/# mount /dev/md1
root@CONTROLS1:/# mount /dev/md2
11. Create a startup script as follows
root@CONTROLS1:/# vi /etc/init.d/restart_my_array.sh
#!/bin/sh
echo "-- rebuild mdadm.conf for md1 --"
echo 'DEVICE /dev/ts_disk?_3' > /etc/mdadm.conf
mdadm -Eb /dev/ts_disk?_3 >> /etc/mdadm.conf
echo "-- rebuild mdadm.conf for md2 --"
echo 'DEVICE /dev/ts_disk?_4' >> /etc/mdadm.conf
mdadm -Eb /dev/ts_disk?_4 >> /etc/mdadm.conf
mdadm -As --force
mount /dev/md2
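For reference, the /etc/mdadm.conf that this startup script regenerates on each boot ends up looking roughly like the fragment below (the UUID values shown are placeholders; mdadm -Eb fills in the real identifiers of your arrays):

```
DEVICE /dev/ts_disk?_3
ARRAY /dev/md1 level=raid5 num-devices=4 UUID=<your-md1-uuid>
DEVICE /dev/ts_disk?_4
ARRAY /dev/md2 level=raid5 num-devices=4 UUID=<your-md2-uuid>
```

The mdadm -As --force line then assembles both arrays from this file.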
12. Make script executable
root@CONTROLS1:/etc/init.d# chmod +x restart_my_array.sh
13. Create a link to startup script in rc3.d
root@CONTROLS1:/# ln -s /etc/init.d/restart_my_array.sh /etc/rc.d/rc3.d/S99z_restart_my_array
14. Reboot & enjoy. You should have 2 RAID5 arrays working and should be able to configure them and their shares from the web application.
Note: step 4 includes setting the type of partition 2 to 82 (Linux swap). The system would still have functioned without this being set, but better safe than sorry.
- Use the approach described above for changing the partition sizes on each of the four disks. As an example, on a Seagate 750GB drive this works out as follows:
Disk /dev/hdg: 255 heads, 63 sectors, 91201 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdg1             1        48    385528+  fd  Linux raid autodetect
/dev/hdg2            49        65    136552+  82  Linux swap
/dev/hdg3            66     89180 715816237+  fd  Linux raid autodetect
/dev/hdg4         89181     91201  16233682+  fd  Linux raid autodetect
- Format the new data partitions using the
mkfs.xfs -f /dev/hd?3
style command.
- Repeat the repartitioning and formatting of the new partitions for each of the four disks.
- Use the Web GUI to create the RAID5 array and any wanted shares.
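The "Blocks" figures in the example partition table above can be cross-checked from the cylinder geometry: each cylinder is 16065 * 512 bytes, i.e. 16065/2 1K blocks, with the trailing '+' marking an odd half block. A small sketch of the calculation:

```shell
#!/bin/sh
# Convert a cylinder range into fdisk's 1K 'Blocks' figure, assuming
# the 255 heads x 63 sectors geometry above (16065 sectors/cylinder).
cyl_to_blocks() {
    echo $(( ($2 - $1 + 1) * 16065 / 2 ))
}

cyl_to_blocks 66 89180     # /dev/hdg3 -> 715816237 (shown as 715816237+)
cyl_to_blocks 89181 91201  # /dev/hdg4 -> 16233682 (shown as 16233682+)
```

This is also where the 715,816,238-block ceiling for partition 3 comes from: it is the largest cylinder-aligned size that stays within the 2TB filesystem limit.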
Using Symbolic Links to give Single View of Arrays
If you follow the normal process of setting up each array with its own share then when working at the client level you will see each array independently. If you would prefer a single view, you can instead create a symbolic link from the first array's share to the second array:
cd /mnt/array1/share
ln -s /mnt/array2 _array2
This would make the contents of array2 appear under the '_array2' folder with the first array.
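The effect can be tried out safely on scratch directories first; the temporary paths below are stand-ins for the real /mnt/array1/share and /mnt/array2 mount points:

```shell
#!/bin/sh
# Demonstrate the single-view symlink using temporary directories.
array1_share=$(mktemp -d)        # stand-in for /mnt/array1/share
array2=$(mktemp -d)              # stand-in for /mnt/array2
echo "data on array2" > "$array2/example.txt"

# The same command as above, run against the scratch paths:
ln -s "$array2" "$array1_share/_array2"

# Files on the second array are now visible under the first share:
cat "$array1_share/_array2/example.txt"   # -> data on array2
```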
Recovering After a Drive Failure
One of the big advantages of a RAID5 approach is that if a single drive fails, then your data is still intact. This section covers what needs to be done after such a failure to replace the failed drive and get the RAID5 array fully functional with 4 drives.
The standard Buffalo firmware will detect that an array has failed, but it will not be able to recover that array because the RAID arrays are not set up exactly as the Buffalo firmware expects. Instead, manual intervention is required along the same lines as was originally used to create the RAID5 arrays.
In the following commands replace the '?' by a, b, c or d to correspond to drive 1, 2, 3 or 4 depending on what drive you are trying to replace.
- Re-partition drive as described earlier. If you are not sure of the sizes of the partitions then you can use the command
mfdisk -c /dev/sda
and use the 'p' command to see the partition details, and then use the 'q' command to quit. If it is drive 1 you are trying to replace then use /dev/sdb instead to look at the settings on drive 2.
- Add partition 1 back into the /dev/md0 array
mdadm /dev/md0 -f /dev/sd?1
mdadm /dev/md0 -r /dev/sd?1
mdadm /dev/md0 -a /dev/sd?1
- Format partition 2 for swap purposes
mkswap /dev/sd?2
- Add partition 3 back into the /dev/md1 array
mdadm --manage --add /dev/md1 /dev/sd?3
- Add partition 4 (if required) back into the /dev/md2 array
mdadm --manage --add /dev/md2 /dev/sd?4
- Check if the System partition (/dev/sd?1) has finished re-building
mdadm --detail /dev/md0
- Reboot system
reboot
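While a re-added partition is rebuilding, /proc/mdstat shows a progress line. The helper below is an illustrative sketch for pulling out the percentage; the sample line mimics typical mdstat output rather than being captured from a TeraStation:

```shell
#!/bin/sh
# Extract the rebuild percentage from an mdstat-style progress line.
recovery_pct() {
    sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p'
}

# On the TeraStation you would feed it the real file:
#   recovery_pct < /proc/mdstat
sample='  [=>...................]  recovery =  5.8% (8120320/140945280) finish=42.1min speed=52512K/sec'
echo "$sample" | recovery_pct   # -> 5.8
```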
Flashing with new Firmware versions
The tests done show that it should be possible to flash a TeraStation set up as described in this article without any issues. However caution should be taken, as this is a non-standard setup and this cannot be guaranteed to be true in all cases.
If you attempt a flash upgrade on a system that has been set up as described in this article and the firmware updater program gives any warnings about invalid partition structure, or wants to format any of the disks, you should abandon the firmware update as otherwise you will almost certainly lose data.
Scripts for Automating Process
The steps involved are a little error prone, so the following scripts can be used to automate this process. They can also serve as further examples of the steps that are required to get everything working.
After each of the scripts has been created, then you need to ensure that they are set to be executable by issuing a command of the form:
chmod +x scriptname
These scripts are not yet finished and are still under development. In the meantime you should be able to carry out the requisite process using the manual steps described in the earlier sections.
/usr/sbin/prepare_disk
This script is used to prepare a disk ready for it to be added to the RAID5 arrays.
#!/bin/sh
# This is a custom script for the PPC based TeraStations.
# The script partitions a disc into the format used by the TS.
# The partition structure should match the one below,
# which shows the details for a 1TB disc.
# Partitions 1 & 2 are system partitions used by the TS.
# Partitions 3 & 4 share the rest of the drive space by assigning
# partition 3 500GB and partition 4 the rest.
# ---------------------------------------------------------------
# Example partition structure
# ---------------------------------------------------------------
#    Device Boot    Start       End    Blocks   Id  System
# /dev/sda1             1        48    385528+  83  Linux
# /dev/sda2            49        65    136552+  82  Linux swap
# /dev/sda3            66     63807 512007615   83  Linux
# /dev/sda4         63808    121601 464230305   83  Linux

# Disk choice menu
clear
echo "-----------------------------------------------------------------"
echo "------------ Terastation Drive Replacement Script ---------------"
echo "-----------------------------------------------------------------"
echo ""
echo "This script formats a new 1TB drive to enable its use by the"
echo "Terastation firmware."
echo ""
echo "Please choose the new disk location in NAS:"
echo "[a] Disk 1"
echo "[b] Disk 2"
echo "[c] Disk 3"
echo "[d] Disk 4"
echo "[q] Quit"
echo ""
echo -n "Disk [a,b,c,d,q]: "
read disk

#------------------------------------
# Check disk choice and set variables;
# if not valid then error & exit
#------------------------------------
case $disk in
  "a") disk_no="Disk 1";;
  "b") disk_no="Disk 2";;
  "c") disk_no="Disk 3";;
  "d") disk_no="Disk 4";;
  "q") echo "Exiting Script ..."
       exit 1;;
  *)   echo "Invalid input, must be [a,b,c,d,q]"
       echo "Exiting Script ...."
       exit 1;;
esac

# Now set the device dependent on the TeraStation type
case `grep "PRODUCTID" /etc/linkstation_release | cut -d= -f2` in
  "0x00002001")
    echo "[INFO] Seems to be a TeraStation (original)"
    case $disk in
      "a") disk_ltr="/dev/hda";;
      "b") disk_ltr="/dev/hdc";;
      "c") disk_ltr="/dev/hde";;
      "d") disk_ltr="/dev/hdg";;
    esac
    ;;
  "0x00002002")
    echo "[INFO] Seems to be a TeraStation Pro"
    case $disk in
      "a") disk_ltr="/dev/sda";;
      "b") disk_ltr="/dev/sdb";;
      "c") disk_ltr="/dev/sdc";;
      "d") disk_ltr="/dev/sdd";;
    esac
    ;;
  "0x00002003")
    echo "[INFO] Seems to be a TeraStation Home Server"
    case $disk in
      "a") disk_ltr="/dev/hda";;
      "b") disk_ltr="/dev/hdc";;
      "c") disk_ltr="/dev/hde";;
      "d") disk_ltr="/dev/hdg";;
    esac
    ;;
  *)
    echo "[ERROR] TeraStation type not recognized"
    echo "Exiting Script ...."
    exit 1
    ;;
esac

part3=${disk_ltr}3
part4=${disk_ltr}4

#-------------------------------------
# Warn user and proceed if he accepts
#-------------------------------------
echo ""
echo "You chose $disk_no"
echo "The script will now PARTITION and FORMAT the disk"
echo "Any existing contents will be destroyed"
echo -n "Are you sure you chose the right drive and want to continue [y/n]? "
read user_ok
echo ""
case $user_ok in
  "y") ;;
  "n") echo "Exiting Script ..."
       exit 1;;
  *)   echo "Invalid input, must be [y/n]"
       echo "Exiting Script ..."
       exit 1;;
esac

#---------------------------------
# Drive partitioning using mfdisk
#---------------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now repartition the drive"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
mfdisk -c $disk_ltr << EOF
p
o
n
p
1
1
48
n
p
2
49
65
n
p
3
66
+500000M
n
p
4


t
2
82
p
w
EOF

echo "--------------------------------------------------------"
echo "     Recovering System partitions"
echo "--------------------------------------------------------"
echo "System partition '${disk_ltr}1' as part of /dev/md0"
mdadm /dev/md0 -f ${disk_ltr}1
mdadm /dev/md0 -r ${disk_ltr}1
mdadm /dev/md0 -a ${disk_ltr}1
echo "Swap partition '${disk_ltr}2'"
mkswap ${disk_ltr}2

#-----------------------------
# Format drive using mkfs.xfs
#-----------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now format the drive"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
mkfs.xfs -f $part3
mkfs.xfs -f $part4
/usr/sbin/create_arrays
This script is used to create the initial RAID5 arrays after the disk has been partitioned and formatted. It makes use of the prepare_disk script to handle the partitioning and formatting of each disk.
/usr/sbin/recover_arrays
This script is used to recover the RAID5 arrays after a single disk has failed. It makes use of the prepare_disk script to handle the partitioning and formatting of each disk. Since I will not be the end-user of the NAS, I wanted to create an easier way of modifying the partitions and formatting them in case a drive fails and a new one needs to be configured to match the above modifications.
I created the following script (add_disk) and placed it in /bin/. A user would be able to telnet into the NAS when a drive fails and just run it against the replacement drive. Upon reboot the drive should be seen and rebuilt by the TeraStation.
I tested it by removing the drive from the array, deleting all partitions, and then running the script.
As always it is attached below for anyone who needs it, but please let me know of your personal experience. DISCLAIMER: USE AT YOUR OWN RISK - I'm a newbie that needed this functionality and I did my best to implement it.
#!/bin/sh
# This is a custom script for the PPC based TeraStations.
# The script partitions a disc into the format used by the TS.
# The partition structure should match the one below,
# which shows the details for a 1TB disc.
# Partitions 1 & 2 are system partitions used by the TS.
# Partitions 3 & 4 share the rest of the drive space by assigning
# partition 3 500GB and partition 4 the rest.
# ---------------------------------------------------------------
# Example partition structure
# ---------------------------------------------------------------
#    Device Boot    Start       End    Blocks   Id  System
# /dev/sda1             1        48    385528+  83  Linux
# /dev/sda2            49        65    136552+  82  Linux swap
# /dev/sda3            66     63807 512007615   83  Linux
# /dev/sda4         63808    121601 464230305   83  Linux

# Disk choice menu
clear
echo "-----------------------------------------------------------------"
echo "------------ Terastation Drive Replacement Script ---------------"
echo "-----------------------------------------------------------------"
echo ""
echo "This script formats a new 1TB drive to enable its use by the"
echo "Terastation firmware."
echo ""
echo "Please choose the new disk location in NAS:"
echo "[a] Disk 1"
echo "[b] Disk 2"
echo "[c] Disk 3"
echo "[d] Disk 4"
echo "[q] Quit"
echo ""
echo -n "Disk [a,b,c,d,q]: "
read disk

#------------------------------------
# Check disk choice and set variables;
# if not valid then error & exit
#------------------------------------
case $disk in
  "a") disk_no="Disk 1";;
  "b") disk_no="Disk 2";;
  "c") disk_no="Disk 3";;
  "d") disk_no="Disk 4";;
  "q") echo "Exiting Script ..."
       exit 1;;
  *)   echo "Invalid input, must be [a,b,c,d,q]"
       echo "Exiting Script ...."
       exit 1;;
esac

# Now set the device dependent on the TeraStation type
case `grep "PRODUCTID" /etc/linkstation_release | cut -d= -f2` in
  "0x00002001")
    echo "[INFO] Seems to be a TeraStation (original)"
    case $disk in
      "a") disk_ltr="/dev/hda";;
      "b") disk_ltr="/dev/hdc";;
      "c") disk_ltr="/dev/hde";;
      "d") disk_ltr="/dev/hdg";;
    esac
    ;;
  "0x00002002")
    echo "[INFO] Seems to be a TeraStation Pro"
    case $disk in
      "a") disk_ltr="/dev/sda";;
      "b") disk_ltr="/dev/sdb";;
      "c") disk_ltr="/dev/sdc";;
      "d") disk_ltr="/dev/sdd";;
    esac
    ;;
  "0x00002003")
    echo "[INFO] Seems to be a TeraStation Home Server"
    case $disk in
      "a") disk_ltr="/dev/hda";;
      "b") disk_ltr="/dev/hdc";;
      "c") disk_ltr="/dev/hde";;
      "d") disk_ltr="/dev/hdg";;
    esac
    ;;
  *)
    echo "[ERROR] TeraStation type not recognized"
    echo "Exiting Script ...."
    exit 1
    ;;
esac

part3=${disk_ltr}3
part4=${disk_ltr}4

#-------------------------------------
# Warn user and proceed if he accepts
#-------------------------------------
echo ""
echo "You chose $disk_no"
echo "The script will now PARTITION and FORMAT the disk"
echo "Any existing contents will be destroyed"
echo -n "Are you sure you chose the right drive and want to continue [y/n]? "
read user_ok
echo ""
case $user_ok in
  "y") ;;
  "n") echo "Exiting Script ..."
       exit 1;;
  *)   echo "Invalid input, must be [y/n]"
       echo "Exiting Script ..."
       exit 1;;
esac

#---------------------------------
# Drive partitioning using mfdisk
#---------------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now repartition the drive"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
mfdisk -c $disk_ltr << EOF
p
o
n
p
1
1
48
n
p
2
49
65
n
p
3
66
+500000M
n
p
4


t
2
82
p
w
EOF

echo "--------------------------------------------------------"
echo "     Recovering System partitions"
echo "--------------------------------------------------------"
echo "System partition '${disk_ltr}1' as part of /dev/md0"
mdadm /dev/md0 -f ${disk_ltr}1
mdadm /dev/md0 -r ${disk_ltr}1
mdadm /dev/md0 -a ${disk_ltr}1
echo "Swap partition '${disk_ltr}2'"
mkswap ${disk_ltr}2

#-----------------------------
# Format drive using mkfs.xfs
#-----------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now format the drive"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
mkfs.xfs -f $part3
mkfs.xfs -f $part4

#------------------------------------
# Add Partitions back to raid arrays
#------------------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now add drive back to arrays"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
mdadm --manage --add /dev/md1 $part3
mdadm --manage --add /dev/md2 $part4

#------------------------------
# Reboot the NAS using reboot
#------------------------------
clear
echo "---------------------------------------------------------"
echo "     The script will now restart the NAS"
echo "---------------------------------------------------------"
echo ""
echo -n "Press any key to continue ... "
read wait_ok
echo ""
reboot