WARNING: Please note that this article was published a long time ago. The information contained might be outdated.
WARNING: Please note that this article describes my experience and the results you might get reproducing what I did will surely be different. Please understand that I am not responsible for the results you might get reproducing what I did. If you choose to follow my notes and anything bad happens, don't blame me, you are responsible for what you do. In doubt, my advice is to go and read some news on https://news.google.com/.
This is a step-by-step guide on how I got back access to the data stored in a RAID 1 volume created on a Seagate Blackarmor NAS.
The content of this tutorial is based on other posts:
- http://www.linux-sxs.org/storage/fedora2ubuntu.html
- https://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/
- http://serverfault.com/questions/676638/mdadm-drive-replacement-shows-up-as-spare-and-refuses-to-sync
- http://superuser.com/questions/429776/simple-mdadm-raid-1-not-activating-spare
- http://unix.stackexchange.com/questions/72279/how-do-i-recover-files-from-a-single-degraded-mdadm-raid1-drive-not-enough-to
The process was done using a GNU/Linux computer (a Raspberry Pi in my case) and an external USB-powered adapter to connect the disks to the Raspberry Pi.
I had four disks inside the NAS, organized in two RAID 1 volumes: disks one and two for the first RAID 1 volume, disks three and four for the second RAID 1 volume.
As a first step I prepared the Raspberry Pi, making sure it had `mdadm` and `lvm2` installed, the `dm-mod` kernel module loaded, and the directories for the mount points:

```
sudo apt-get update
sudo apt-get install mdadm
sudo apt-get install lvm2
sudo modprobe dm-mod
sudo mkdir -p /mnt/seagate/raid/v1/
sudo mkdir -p /mnt/seagate/raid/v2/
sudo mkdir -p /mnt/seagate/raid/v3/
sudo mkdir -p /mnt/seagate/raid/v4/
```
`mdadm` is a RAID management tool, `lvm` is a logical volume management tool, and `dm-mod` is the device mapper kernel module.
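Before plugging the disk in, a quick sanity check that the tools and the module are actually in place can save some head-scratching. This is just a sketch; the exact output will differ from system to system:

```
# the userland tools should answer with their version strings
mdadm --version
sudo lvm version

# the device mapper module should be listed
# (it may be absent from the list if it is built directly into the kernel)
lsmod | grep dm_mod
```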
As soon as I took the disks out of the NAS I applied a label to each of them, to be able to identify the position they had in the NAS.
After attaching one of the disks to the computer, the `dmesg` command gave me the following:
```
[ 2228.677193] usb 1-1.3: new high-speed USB device number 4 using dwc_otg
[ 2228.778427] usb 1-1.3: New USB device found, idVendor=152d, idProduct=2338
[ 2228.778448] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[ 2228.778462] usb 1-1.3: Product: USB to ATA/ATAPI bridge
[ 2228.778474] usb 1-1.3: Manufacturer: JMicron
[ 2228.778486] usb 1-1.3: SerialNumber: 000000000000
[ 2228.779883] usb-storage 1-1.3:1.0: USB Mass Storage device detected
[ 2228.782849] scsi host0: usb-storage 1-1.3:1.0
[ 2229.778109] scsi 0:0:0:0: Direct-Access ST3000DM 001-1CH166 PQ: 0 ANSI: 5
[ 2229.779566] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
[ 2229.779851] sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[ 2229.780726] sd 0:0:0:0: [sda] Write Protect is off
[ 2229.780750] sd 0:0:0:0: [sda] Mode Sense: 28 00 00 00
[ 2229.781221] sd 0:0:0:0: [sda] No Caching mode page found
[ 2229.781240] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ 2229.782469] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
[ 2229.790022] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 2229.859446] sda: sda1 sda2 sda3 sda4
[ 2229.861563] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
[ 2229.862595] sd 0:0:0:0: [sda] Attached SCSI disk
```
As you can see from the "[ 2229.859446] sda: sda1 sda2 sda3 sda4" line, the device `/dev/sda` has 4 partitions. I used `fdisk` to get more info out of the partition table:
```
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 16B85A47-8C17-4052-A96B-2741CCB346FB

Device       Start        End    Sectors    Size Type
/dev/sda1   195312    2283203    2087892 1019.5M Linux RAID
/dev/sda2  2283204    4373046    2089843 1020.4M Linux RAID
/dev/sda3  4373047    5416015    1042969  509.3M Linux RAID
/dev/sda4  5416016 5860517599 5855101584    2.7T Linux RAID
```
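An alternative way to confirm that the partitions are RAID members, without reading the whole `fdisk` listing, is to ask `lsblk` and `blkid` directly. This is optional and the output will of course differ on other disks:

```
# list the partitions of the disk with their sizes
lsblk /dev/sda

# RAID members are reported with TYPE="linux_raid_member"
sudo blkid /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
```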
This is what I got trying to mount the biggest partition, `/dev/sda4`:

```
$ sudo mount /dev/sda4 /mnt/seagate/raid/v3/
mount: unknown filesystem type 'linux_raid_member'
```
To get more info about the RAID level I used `mdadm` to examine the `/dev/sda4` partition, running `sudo mdadm --examine /dev/sda4`:
```
$ sudo mdadm --examine /dev/sda4
/dev/sda4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1d121553:6712f248:d49a9aaa:31df2e71
           Name : 4
  Creation Time : Mon Sep 23 01:53:40 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5855101312 (2791.93 GiB 2997.81 GB)
     Array Size : 2927550656 (2791.93 GiB 2997.81 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
   Unused Space : before=192 sectors, after=0 sectors
          State : clean
    Device UUID : 2d7348a4:bfd0d282:c25de601:e94cca43

    Update Time : Sun May 15 11:06:53 2016
       Checksum : 307a2445 - correct
         Events : 991

   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
```
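When several disks are attached at the same time, `mdadm` can also scan everything at once and print one line per array it finds, which helps matching partitions to arrays. Just an optional shortcut:

```
# print an ARRAY line (with UUID and name) for every array found on the attached devices
sudo mdadm --examine --scan
```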
Then I used `mdadm` again to create the virtual `md` device, which I then used to access the logical volume:
```
$ sudo mdadm -A -R /dev/md9 /dev/sda4
mdadm: /dev/md9 has been started with 1 drive (out of 2).
```
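Before going on, the state of the array can be checked; `md9` should show up as an active RAID 1 running on a single member. Just an optional check:

```
# the array should appear as active raid1 with one of its two members present
cat /proc/mdstat

# more detail: degraded state, events count, the member device
sudo mdadm --detail /dev/md9
```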
Running `fdisk /dev/md9` I could clearly see that the device contains a logical volume:
```
Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

/dev/md9: device contains a valid 'LVM2_member' signature, it's strongly recommended to wipe the device by command wipefs(8) if this setup is unexpected to avoid possible collisions.

Device does not contain a recognized partition table.
The size of this disk is 2.7 TiB (2997811871744 bytes). DOS partition table format can not be used on drives for volumes larger than 4294966784 bytes for 512-byte sectors. Use GUID partition table format (GPT).

Created a new DOS disklabel with disk identifier 0x2af87d64.

Command (m for help): q
```
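Opening `fdisk` just to read the signature is a bit heavy-handed; the same information can be obtained read-only with `blkid` or with the LVM tools themselves. An equivalent, optional check:

```
# the md device should report TYPE="LVM2_member"
sudo blkid /dev/md9

# /dev/md9 should be listed as an LVM physical volume
sudo pvscan
```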
As a next step I used `lvm` to activate the logical volume. First I used the `vgscan` tool to scan for available volume groups:
```
$ sudo vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg1" using metadata type lvm2
```
In my case the application showed me the existence of one volume group called `vg1`. The name of the group is needed by the `vgchange` tool to activate the volume:
```
$ sudo vgchange -ay vg1
  1 logical volume(s) in volume group "vg1" now active
```
Clearly, `vg1` is now active. The `lvs` command gives more information:
```
$ sudo lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1  -wi-a----- 2.73t
```
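For a more verbose report on the same logical volume (UUID, size, current state), `lvdisplay` works too; purely optional:

```
# detailed attributes of the lv1 logical volume inside vg1
sudo lvdisplay vg1/lv1
```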
At this point I had a new device, `/dev/vg1/lv1`, which I used to mount the partition:
```
sudo mount /dev/vg1/lv1 /mnt/seagate/raid/v3/ -o ro,user
```
The `mount` command gave me positive feedback on the mounted drive:
```
/dev/mapper/vg1-lv1 on /mnt/seagate/raid/v3 type ext3 (ro,nosuid,nodev,noexec,relatime,data=ordered,user)
```
To unmount the device I used the following commands:
```
$ sudo umount /mnt/seagate/raid/v3
$ sudo mdadm -S /dev/md9
```
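If `mdadm -S` complains that the device is still in use, the volume group is most likely still active; deactivating it between the two commands should solve it (a sketch, assuming the same `vg1` name):

```
# release the logical volume so /dev/md9 is no longer busy
sudo vgchange -an vg1
```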
TL;DR
Using mdadm
```
sudo apt-get install mdadm
sudo apt-get install lvm2
sudo modprobe dm-mod
sudo mdadm -A -R /dev/md9 /dev/sda4
sudo vgchange -ay vg1
sudo mount /dev/vg1/lv1 /mnt/seagate/raid/v3/ -o ro,user
```
Using losetup
```
sudo mdadm --examine /dev/sda4
```
From the command above, read the "Data Offset" value (in my case it was 272 sectors) and use it in the `losetup` command:
```
sudo apt-get install lvm2
sudo losetup --find --show --read-only --offset $((272*512)) /dev/sda4
sudo vgscan
sudo vgchange -ay vg1
sudo mount /dev/vg1/lv1 /mnt/seagate/raid/v3/ -o ro,user
```
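To undo this second variant the loop device has to be detached as well. A minimal sketch, assuming `losetup --find --show` printed `/dev/loop0` (use whatever device it printed on your system):

```
# unmount the filesystem and deactivate the volume group
sudo umount /mnt/seagate/raid/v3
sudo vgchange -an vg1

# detach the loop device created by losetup (replace /dev/loop0 if needed)
sudo losetup -d /dev/loop0
```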