
QNAP RAID5 Inactive

Guest

Hi,

 

My girlfriend brought her work's QNAP Turbo NAS to me to try to retrieve data off it. It is set up as RAID 5 and drive 2 has died. I thought that with RAID 5 a single disk can fail, and that replacing the faulty drive should rebuild the RAID. However, the RAID will not rebuild, even when I log in via the web interface and select recover. When I select recover it states there are not enough drives for the RAID, even though I can see all four drives listed. The only other thing I can see is that drive 1 is set as a global spare.

 

Can I get some help with this please?


I presume you have replaced the failed disk since the rebuild is failing?

I'm not familiar with the Cisco NAS units - do they use mdadm or the same logic? There's plenty of material on using mdadm directly to diagnose a failed RAID on a QNAP when it's not rebuilding.


Hi @Jarsky,

 

Sorry, I have re-worded my question. My issue was that I had built the RAID but didn't format it :/

However, some of your questions can still be answered for the QNAP NAS:

 

I presume you have replaced the failed disk since the rebuild is failing?

Yes, it still fails. I get a message along the lines of "Not enough drives available".
 

I'm not familiar with the Cisco NAS units - do they use mdadm or the same logic? There's plenty of material on using mdadm directly to diagnose a failed RAID on a QNAP

I am not too sure what mine uses - is there a way to check? Also, could you please point me in the right direction for the mdadm material?

Also, the QNAP NAS has a global spare set up, but the RAID states that it uses all 4 drives. What would the outcome be of removing the global spare drive?


OK, so reading the re-wording: a RAID5 will continue to provide data availability if 1 drive fails, however it will be in a degraded state, i.e. no protection against another drive failure. You cannot rebuild RAID5 parity until the failed disk has been replaced.

 

If the new drive has been formatted previously, then you need to re-zero that drive for it to be picked up as available for the RAID to use. 

You can do this through the QNAP shell with the below command, where sdX is the drive to be zeroed:

dd if=/dev/zero of=/dev/sdX bs=512 count=1
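
As a quick sanity check afterwards (assuming fdisk is available in the QNAP shell, which it appears to be given the output later in this thread), you can confirm the drive no longer shows a partition table:

# Should report that /dev/sdX doesn't contain a valid partition table after zeroing the first sector
fdisk -l /dev/sdX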

If it is a new drive that hasn't been formatted before, then something else is causing the issue.

 

Was the RAID showing in a degraded state before you pulled the failed disk and replaced it with the new one?

You may need to try pulling the new disk, and see if the RAID goes to a degraded state, then try inserting it and see if the rebuild starts. 

 

If not, are you able to post the QNAP logs? 

 

My main concern with the Cisco NAS is that I'm not sure what effect that has had on the array, and it complicates fixing the issue as it could have messed with the superblocks (which is how the underlying Linux mdadm logic on the QNAP knows the position of each disk).

 

Edit: The QNAP logs would be good. Going back to my initial statement in this post, the data should still be available, so it sounds like something happened to another disk when the RAID failed. This is all on the assumption that we're only dealing with a single disk failure, not a second disk falling out of the array or the QNAP already being in a degraded state prior to the disk failure that dropped it.


@Jarsky,

 

Was the RAID showing in a degraded state before you pulled the failed disk and replaced it with the new one?

When I got the NAS and logged in, it only stated "not active". It didn't say "degraded".

If the new drive has been formatted previously, then you need to re-zero that drive for it to be picked up as available for the RAID to use.

The new drive has been formatted multiple times in the past, as it's a spare backup drive that I have used. I also used Linux to format it to ext4 before putting it into the NAS.

 

The QNAP logs would be good

Unfortunately, I have made a mistake with the logs. When it notified me of the error I selected clear, thinking it was going to clear the notifications, but it cleared the log *noob mistake*. I have been recording everything I have done in a text document as well, but this will not help with what the logs stated prior to the mix-up.
 

NAS Repair Log.txt

system-log.csv

RAID-ScreenCap.PNG


Can you log in to the QNAP CLI or SSH into it, run these commands one line at a time, and post the output of each:

 

echo "Model: $(getsysinfo model)"
df -h
cat /proc/mdstat
cat /etc/mdadm.conf
mount
mdadm --query /dev/mdX   (where mdX is what 'mount' shows - probably /dev/md0)
mdadm -D --scan

 


Here is the info requested, hope it's correct. Not sure why it's saying RAID 1 :/

 

 

[~] # echo "Model: $(getsysinfo model)"                                                                  
Model: TS-419P+
[~] # df -h                                                                                              
Filesystem                Size      Used Available Use% Mounted on
/dev/ramdisk             32.9M     16.8M     16.1M  51% /
tmpfs                    64.0M    208.0k     63.8M   0% /tmp
/dev/sda4               371.0M    344.5M     26.5M  93% /mnt/ext
/dev/md9                509.5M    132.2M    377.2M  26% /mnt/HDA_ROOT
/dev/ram2                32.9M    419.0k     32.5M   1% /mnt/update
tmpfs                    64.0M      2.3M     61.7M   4% /samba
tmpfs                    16.0M     32.0k     16.0M   0% /samba/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock
tmpfs                     1.0M         0      1.0M   0% /mnt/rf/nd
[~] # cat /proc/mdstat                                                                                   
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4] 
md1 : inactive sdc3[2](S) sdd3[3](S) sdb3[1](S)
                 5855836800 blocks
                  
md13 : active raid1 sdd4[0] sdc4[2] sda4[1]
                 458880 blocks [4/3] [UUU_]
                 bitmap: 45/57 pages [180KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[3] sdc1[2]
                 530048 blocks [4/3] [U_UU]
                 bitmap: 35/65 pages [140KB], 4KB chunk

unused devices: <none>
[~] # cat /etc/mdadm.conf                                                                                
[~] # mount                                                                                              
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/ram2 on /mnt/update type ext2 (rw)
tmpfs on /samba type tmpfs (rw,size=64M)
tmpfs on /samba/.samba/lock/msg.lock type tmpfs (rw,size=16M)
tmpfs on /mnt/ext/opt/samba/private/msg.sock type tmpfs (rw,size=16M)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)                                  
[~] # mdadm --query /dev/md9                                                                             
/dev/md9: 517.63MiB raid1 4 devices, 0 spares. Use mdadm --detail for more detail.
[~] # mdadm --query /dev/md0                                                                             
/dev/md0: is an md device which is not active
[~] # mdadm -D --scan                                                                                    
ARRAY /dev/md9 level=raid1 num-devices=4 UUID=653fbf93:43895257:5e2ae3a1:5444933b
ARRAY /dev/sda4 level=raid1 num-devices=4 UUID=4ba6a4e9:9711623a:24c33b81:55ece08b
mdadm: md device /dev/md1 does not appear to be active.
[~] #                                                                                                    

 

 


28 minutes ago, Fallen Soul said:

Here is the info requested, hope it's correct. Not sure why it's saying RAID 1 :/

 

 

Don't worry about the RAID1 partitions; we're interested in the large inactive one, which in this case is md1.

 

[~] # cat /proc/mdstat                                                                                   
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4] 
md1 : inactive sdc3[2](S) sdd3[3](S) sdb3[1](S)
                 5855836800 blocks

 

So looking at this info, it looks like it should be about a 6TB volume?

We can see there are currently 3 disks assigned to the RAID.

 

Can you also run these 2 commands:

mdadm --query /dev/md1
fdisk -l

Edit: can you also run:

 

cat /etc/mdadm.conf

 


 

Quote

So looking at this info, it looks like it should be about a 6TB volume?

I am not 100% sure as I have never seen the unit when it worked. I will ask the missus when I see her if she can remember the overall size.

 

There are 4x 3TB HDDs in the unit, with one set as a global spare.

 

Quote

Can you also run these 2 commands:


mdadm --query /dev/md1
fdisk -l

 

[~] # mdadm --query /dev/md1                                                                             
/dev/md1: is an md device which is not active
[~] # fdisk -l                                                                                           

Disk /dev/mtdblock0: 0 MB, 524288 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock0 doesn't contain a valid partition table

Disk /dev/mtdblock1: 2 MB, 2097152 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock1 doesn't contain a valid partition table

Disk /dev/mtdblock2: 9 MB, 9437184 bytes
255 heads, 63 sectors/track, 1 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock2 doesn't contain a valid partition table

Disk /dev/mtdblock3: 3 MB, 3145728 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock3 doesn't contain a valid partition table

Disk /dev/mtdblock4: 0 MB, 262144 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock4 doesn't contain a valid partition table

Disk /dev/mtdblock5: 1 MB, 1310720 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock5 doesn't contain a valid partition table

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530125   83  Linux
/dev/sda2              67         132      530142   83  Linux
/dev/sda3             133      243138  1951945693   83  Linux
/dev/sda4          243139      243200      498012   83  Linux

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn't contain a valid partition table

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1          66      530125   83  Linux
/dev/sdc2              67         132      530142   83  Linux
/dev/sdc3             133      243138  1951945693   83  Linux
/dev/sdc4          243139      243200      498012   83  Linux

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1          66      530125   83  Linux
/dev/sdd2              67         132      530142   83  Linux
/dev/sdd3             133      243138  1951945693   83  Linux
/dev/sdd4          243139      243200      498012   83  Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn't contain a valid partition table

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530125   83  Linux
/dev/sdb2              67         132      530142   83  Linux
/dev/sdb3             133      243138  1951945693   83  Linux
/dev/sdb4          243139      243200      498012   83  Linux
[~] # 


and just also this one

 

cat /etc/mdadm.conf

or it might be 

cat /etc/mdadm/mdadm.conf

 


11 minutes ago, Jarsky said:

and just also this one

 


cat /etc/mdadm.conf

or it might be 


cat /etc/mdadm/mdadm.conf

 

Neither of them turned up a config file:

 

[~] # cat /etc/mdadm.conf                                                                                
[~] # cat /etc/mdadm/mdadm.conf                                                                          
cat: /etc/mdadm/mdadm.conf: No such file or directory
[~] # 

 

I will be away from my local network for a few hours. I will try any more suggestions when I get back.

 

Thanks for the help so far.

 


OK. So essentially what looks to have happened is that, for whatever reason, MD (Linux RAID) appears to have lost its metadata, so it is unsure how to arrange the disks or where the superblocks sit. If it comes down to it, you may need to recreate the RAID.

 

 

From the information you've gathered, the main things we're looking at are:

 

This tells us partitions sdc3, sdd3 and sdb3 are all part of md1, which is inactive:

[~] # cat /proc/mdstat                                                                                   
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4] 
md1 : inactive sdc3[2](S) sdd3[3](S) sdb3[1](S)
                 5855836800 blocks

We can see from fdisk that these are the main partitions we're looking at for the RAID:

Device Boot      Start         End      Blocks   Id  System
/dev/sda3             133      243138  1951945693   83  Linux
/dev/sdc3             133      243138  1951945693   83  Linux
/dev/sdd3             133      243138  1951945693   83  Linux
/dev/sdb3             133      243138  1951945693   83  Linux

 

As we can see, sda3 is missing from md1. However, we know that if this is a RAID5 it should still be able to run with 3 out of 4 disks.

 

 

At this point, I would strongly recommend talking to QNAP support - as they can often assist with RAID failures like this. 

If you do want to attempt recovery though, before taking any steps you will want to clone the disks (using something like dd). 
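
Since you may not have spare drives of the same size, a rough sketch of cloning each member disk to an image file instead - the output path here is only an example and needs to be somewhere with enough free space (an external drive or another machine); if the NAS's own dd doesn't accept these options, the same command run from a Linux PC with the drive attached will:

# Clone one member disk to an image file; repeat for each disk, changing sdX and the filename.
# conv=noerror,sync carries on past read errors and pads unreadable sectors with zeroes.
dd if=/dev/sdX of=/share/external/sdX.img bs=1M conv=noerror,sync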

 

 

Following the RAID recovery steps, the first thing you will want to do is see how much of the RAID superblock information we can preserve by running:

mdadm --examine /dev/sd[abcd]3 >> raid.status

 

The guide tells us this:

Quote

If the event count closely matches but not exactly, use "mdadm --assemble --force /dev/mdX <list of devices>" to force mdadm to assemble the array anyway using the devices with the closest possible event count. If the event count of a drive is way off, this probably means that drive has been out of the array for a long time and shouldn't be included in the assembly. Re-add it after the assembly so it's sync:ed up using information from the drives with closest event counts.

 

So first, we will need to check the event counts on the drives:

mdadm --examine /dev/sd[a-d] | egrep 'Event|/dev/sd'

If the counts are close, then we can attempt to force a reassembly, such as:

mdadm --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

 

Now assuming that sda3 is the new one and the count is completely different, then it may look like this

 

mdadm --assemble --force /dev/md1 /dev/sdb3 /dev/sdc3 /dev/sdd3

 

So assuming this all fails, then the last step is to try recreating the array; the catch is that we need to get these disks back in the correct order.

With 4 disks, that means there are 24 possible orderings to try to restore the array, so it may be trial and error.

 

What you want to keep in mind is that you do not want to write to the disks while trying to fix the config, under any circumstances.

Recreating the RAID rewrites the superblocks, which is why the parameters and disk order have to match the original array exactly.

 

To recover, you'll need to use mdadm, and you'll want to mount in read-only mode while trying the various configurations until you get an active (but degraded) RAID.

 

You would need to run something like this (the chunk size, parity layout and disk order all need to match the original array):

mdadm --create /dev/md1 --assume-clean --chunk=4 --level=5 --parity=left-asymmetric --raid-devices=4 /dev/sdb3 /dev/sdc3 /dev/sdd3 missing

Create a test mount point, e.g.:

mkdir /mnt/recovery

Then try and mount md1 as read-only:

mount -o ro /dev/md1 /mnt/recovery

If that fails and you can't read the mount, e.g.

 

ls -l /mnt/recovery

Then you will need to stop it again:

mdadm --manage /dev/md1 --stop

Then try changing the order of the 4 raid devices, and repeat for the 24 possible combinations. 
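
If you want to avoid doing all of that by hand, a loop along these lines can step through the orderings for you. This is only a rough sketch written against the device names above (sdb3/sdc3/sdd3 plus a "missing" slot), and the chunk size and layout in it are assumptions that must be checked against the mdadm --examine output first. It only ever mounts read-only, but --create still rewrites the superblocks, so only do this on clones.

#!/bin/sh
# Rough sketch: try one disk ordering, mount read-only to inspect it, then stop the array again.
# --chunk=64 and --layout=left-symmetric are assumptions - verify with 'mdadm --examine' first.
try_order() {
    mdadm --create /dev/md1 --assume-clean --run --chunk=64 --level=5 \
          --layout=left-symmetric --raid-devices=4 "$@" || return 1
    mkdir -p /mnt/recovery
    if mount -o ro /dev/md1 /mnt/recovery; then
        echo "Mounted with order: $*"
        ls -l /mnt/recovery        # eyeball the contents - does it look like the real data?
        umount /mnt/recovery
    fi
    mdadm --manage /dev/md1 --stop
}

# A few of the 24 possible orderings; add the rest following the same pattern.
try_order missing /dev/sdb3 /dev/sdc3 /dev/sdd3
try_order /dev/sdb3 missing /dev/sdc3 /dev/sdd3
try_order /dev/sdb3 /dev/sdc3 missing /dev/sdd3
try_order /dev/sdb3 /dev/sdc3 /dev/sdd3 missing

Once an ordering mounts and the directory listing looks right, re-create the array with that exact order and leave it mounted read-only for copying the data off.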

 

 

---------------------------------------------------------------------------------------------------------------------------------------

 

If you manage to get a working mount, then your best option would be to copy all of the data off the working array, which is now mounted in read-only mode.
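
For the copy itself, something simple like this works (the destination is just an example path - anywhere with enough space that is not on the array itself):

mkdir -p /share/external/rescued
cp -a /mnt/recovery/. /share/external/rescued/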

Once all data is copied off, destroy the entire RAID and rebuild it again.

 

If that's not an option, then you can re-add the missing disk by using:

mdadm --manage /dev/md1 --add /dev/sda3

Then running this command, you should see a rebuild in progress:

mdadm --detail /dev/md1
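
You can also keep an eye on the rebuild progress the same way as earlier in the thread:

# Shows resync/rebuild percentage while the array recovers
cat /proc/mdstat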

 


P.S. I have not worked with mdadm for about 6 years, so feel free to scrutinise my commands, and double-check the device names before just copy-pasting them. And keep in mind, as I stated above, you *should* have a clone of all disks before running any of the recovery steps yourself.


Thanks for all the help and information so far. 

I will look over all this in the morning when I am back in front of my computer. I have also put a support ticket in with QNAP to see what they have to say. No clue when they will get back to me.

In regards to making clones of all the hard drives, that's not an option as I do not have enough spare drives of the same capacity or more. Is there a way to create an image or ghost file of the whole drive?


So I have gone through the first two commands in the recovery process, as they just read information and I would imagine they don't affect anything.

 

This is the outcome so far, which, by the looks of it and if I understand it correctly, isn't looking too good.

 

mdadm: No md superblock detected on /dev/sda3.
mdadm: No md superblock detected on /dev/sdb3.
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 13dba0f6:0fd68170:cad74292:62e7ba42
  Creation Time : Mon Jun 27 23:03:24 2011
     Raid Level : raid5
  Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
     Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Sun Apr  2 07:15:00 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 15fc2c5e - correct
         Events : 0.977084

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       35        2      active sync   /dev/sdc3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       0        0        1      faulty removed
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 13dba0f6:0fd68170:cad74292:62e7ba42
  Creation Time : Mon Jun 27 23:03:24 2011
     Raid Level : raid5
  Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
     Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Sun Apr  2 07:15:00 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 15fc2c70 - correct
         Events : 0.977084

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       51        3      active sync   /dev/sdd3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       0        0        1      faulty removed
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
--------------------------------------------------------------------------------------------------------------

[~] #
[~] # mdadm --examine /dev/sd[a-d] | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sda.
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
[~] # mdadm --examine /dev/sd[a-d]3 | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sda3.
mdadm: No md superblock detected on /dev/sdb3.
/dev/sdc3:
         Events : 0.977084
this     2       8       35        2      active sync   /dev/sdc3
   0     0       8        3        0      active sync   /dev/sda3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
/dev/sdd3:
         Events : 0.977084
this     3       8       51        3      active sync   /dev/sdd3
   0     0       8        3        0      active sync   /dev/sda3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
[~] #

I can see sdc3 and sdd3 have matching event counts, whereas sda3 and sdb3 have no superblock information recorded at all, which I assume at this point means there is little to no chance of data recovery.

 

17 hours ago, Jarsky said:

Now assuming that sda3 is the new one and the count is completely different, then it may look like this

If the lettering goes by drive number 1-4 (a-d), then the replacement drive is sdb, not sda. When I first logged into the NAS it showed sda as a global spare with 0 data on the drive. If I understand this correctly, then this does not look very promising.


I would say that's why it's staying inactive. I don't know what the chances of recovery are in this state, but just as you said, it appears the RAID data on 2 of the drives is intact while the other 2 are not (1 being the replacement drive). You may want to wait until QNAP support can have a look before proceeding past those 2 steps.

 

It's at a point where it goes beyond my level of support.


Thank you for all the help you have supplied. I will do that, and if QNAP can't help I am just going to tell my partner's work that there is very little to no chance of recovery.
