RAID your LSpro (v.1) mnt partition functioning like a Duo

From NAS-Central Buffalo - The Linkstation Wiki
Revision as of 21:39, 24 December 2010


Preface & disclaimer

After reading Kuroguy's Hardware Hacks for the LS Pro article on Adding an eSATA port, I decided to take it further: I have successfully added a second external SATA drive to my LSPro and RAID-protected the data drive (/mnt/disk1). I would like to share my experience here, in the hope that it will be useful to others. This hack or modification does require some hardware and software skill. It is relatively safe, but there is still a risk of bricking your LSPro.

Kurobrick.png
WARNING!

There is a possibility that you could brick your NAS with these instructions. Please make sure that you read the entire page carefully. Proceed at your own risk.



Prerequisite

LSPro running FreeLink with kernel 2.6.16 (the one I am using) or above, with mdadm support.
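A quick way to confirm the prerequisite is met, before opening the case, is to check for mdadm and kernel md support on the LS itself (a convenience sketch, not from the original article):

```shell
# Check that the mdadm tool is installed on the LSPro.
command -v mdadm >/dev/null && echo "mdadm found" || echo "mdadm not installed"

# Check that the running kernel has md (software RAID) support;
# /proc/mdstat only exists when the md driver is available.
[ -r /proc/mdstat ] && echo "kernel md support present" || echo "no md support in this kernel"
```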

Hardware Modification


Second SATA.JPG


  • 2. Purchase an external eSATA case to house the second drive. You can go cheap (as I did) by using an external USB-to-PATA case with the USB-to-ATA PCB removed, and connecting the SATA cable directly to the drive through the case cut-out or hole. I managed to pick up a cheap external FireWire case with a built-in power supply from eBay.


External Case.JPG


  • 3. Move the internal working bootable hard disk into the external case. When two hard drives are installed, the external hard disk becomes the boot drive, /dev/sda.
  • 4. Install the new hard drive (of the same size or larger) into the LSPro, replacing the original drive.


LSPRO RAID.JPG

Configuration and Migration

  • 1. Power up the external drive and the LSPro, make sure it boots up normally, and verify that everything functions as before and the NAS is available on the network. The dmesg log should indicate that 2 drives are installed, similar to the following screen, which shows 2 SAMSUNG 750GB drives detected.


Dmesg.JPG

  • 2. If you are completely satisfied that the LS is functioning as before, partition your new internal drive (/dev/sdb1-5) identically to your original drive (/dev/sda1-5). If your new disk is larger than the original, you should size sdb4 and sdb6 to use the maximum remaining capacity after partitioning sdb1 and sdb2. If you are not familiar with partitioning, please refer to the wiki page Custom Partitions on the LS Pro.
  • 3. Create the RAID-1 mount drive with only one disk (/dev/sdb6) as an interim step (the original mnt disk will become the second member after the data has been migrated to the RAID drive) by using the following command: mdadm -C /dev/md0 -l 1 -n 2 missing /dev/sdb6. Then run the mkfs of your choice, e.g. mkfs.jfs /dev/md0 for JFS.
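For step 2, if the new disk is exactly the same size as the original, the whole partition table can be cloned in one go with sfdisk instead of repartitioning by hand (a sketch, not from the original article; double-check the device names first, since this overwrites the partition table on /dev/sdb):

```shell
# Dump the partition table of the original (external, /dev/sda) drive
# to a file so it can be reviewed before anything is written.
sfdisk -d /dev/sda > sda-partitions.txt

# After reviewing the dump, write the same layout to the new internal
# drive. WARNING: this destroys any existing partitions on /dev/sdb.
sfdisk /dev/sdb < sda-partitions.txt
```

If the new disk is larger, you will still need to enlarge sdb4 and sdb6 by hand afterwards, as described in step 2.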

mdadm should start building the RAID disk /dev/md0; the progress can be monitored with the cat /proc/mdstat command.

Or check the status of the RAID disk with the following command: mdadm --detail /dev/md0

  • 4. On successful creation of /dev/md0, temporarily mount the RAID drive as /mnt/disk2. You may need to create the mount point /mnt/disk2 if it does not exist.
  • 5. Copy the data from /mnt/disk1 (original NAS data) to /mnt/disk2 (the RAID-1 drive, with only one disk for the moment) by using the following command (please check the syntax, as I am not 100% sure): cp -p -R /mnt/disk1/* /mnt/disk2/
  • 6. When the copy has completed, modify your /etc/fstab entry from /dev/sda6 /mnt/disk1 to /dev/md0 /mnt/disk1
  • 7. Reboot your LSPro and verify that the RAID disk /dev/md0 is now mounted as the NAS device (/mnt/disk1).
  • 8. If you are absolutely sure that everything is working, all files are available on the network as before, and all the data has been copied to the RAID drive, then you can configure your old disk /dev/sda6 as a RAID member and resync the data with the following command: mdadm /dev/md0 -a /dev/sda6
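The fstab change in step 6 amounts to editing a single line. A before/after sketch of the entry (the jfs type and the options shown are assumptions; keep whatever filesystem type and options your existing entry already uses):

```
# /etc/fstab — before (assumed example entry)
/dev/sda6  /mnt/disk1  jfs  defaults  0  2

# /etc/fstab — after: mount the RAID array instead
/dev/md0   /mnt/disk1  jfs  defaults  0  2
```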

The resync process can be monitored by using the following command: mdadm --detail /dev/md0. On successful completion of the data resynchronization, you should see a similar display (I was using /dev/md1 instead of /dev/md0).
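To keep an eye on the resync without reading the full mdadm output each time, a small helper can filter the progress line out of /proc/mdstat (a convenience sketch, not part of the original article; the function name is mine):

```shell
# md_progress: print the resync/recovery progress line(s) from an
# mdstat-format stream on stdin, or a note if no rebuild is running.
md_progress() {
    grep -E 'resync|recovery' || echo "no resync/recovery in progress"
}

# Typical use on the LS Pro (assumes /proc/mdstat exists):
#   md_progress < /proc/mdstat
```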


Mdadm sync.JPG

  • 9. The RAID conversion should be complete now. The next step is to use /dev/sdb1 and /dev/sdb2 to back up the boot and system partitions, so that if the external disk (/dev/sda) fails, you can move the internal disk (/dev/sdb) into the external case and the LS should boot up (in theory, as I have not tried it myself); you can then install a replacement drive in the internal bay. If the internal disk fails, it is a simple matter of replacement and resync using the mdadm command.

Be aware that your external case's power is not controlled by the LS. You need to be careful to power both boxes up or down at the same time. For power-up, power on the external case first, then the LS. For power-down, do the opposite: turn off the LS first, then the external case.

Backing up /dev/sda1&2 to /dev/sdb1&2

  • 1. If you are completely happy with the RAID conversion and everything is stable, you may consider backing up the boot and system partitions, /dev/sda1 and /dev/sda2.
  • 2. You need to reboot the LS into EM mode by using the following commands:
  cd /boot
  mv rootfs_ok rootfs_booting
  echo '****' > rootfs_booting
  reboot
  • 3. Log in to the LS using Telnet and copy the boot and system partitions from sda to sdb using the following commands.
  Optionally, you may want to do step 4 here first (minus the reboot command); then your backup will be ready to
  come up in normal mode, otherwise it will inherit EM mode after the dd copy.
  dd if=/dev/sda1 of=/dev/sdb1
  dd if=/dev/sda2 of=/dev/sdb2
  • 4. Reboot back to normal using the following commands
  cd /boot
  mv rootfs_booting rootfs_ok
  echo `date` > rootfs_ok
  reboot
  • 5. The LS should come up as normal, with the data disk RAID-protected and the boot and system partitions completely backed up.

Failure Simulation

I tried to simulate a /dev/sda failure by removing the SATA cable of the external drive after shutdown, then powering up again (booting from the internal drive). It came up OK, but complained that one member of the RAID group had been removed, with only /dev/sda6 present. (Remember, with the external drive removed, the internal drive becomes /dev/sda.)


Sda failure.jpg


I then shut down and rebooted with the SATA cable reconnected, added /dev/sda6 back to the RAID group with the mdadm --add /dev/md0 /dev/sda6 command, and the rebuild started again. The screen below shows the sync in progress, 8% completed.


Sda rebuilt.JPG



Dommer 12:06, 1 August 2008 (BST)