How to access your data if your Synology has broken but the disks are still working

A few weeks ago, one of the disks in my Synology showed S.M.A.R.T. errors. The error message told me that the disk was about to fail. After I replaced it, I asked myself: what would happen if the Synology itself malfunctioned? Besides the backups I make, would I have a chance to access my data as long as the disks are OK?
So I took a SATA-to-USB adapter, plugged in the not-yet-broken disk, and did some research to find a way to extract the data from it.
Since the operating system of the Synology is Linux-based, I did the following tests from a Linux machine.
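
If you are not sure which device name the disk got after plugging it in, “lsblk” or the kernel log will show you. (On my machine it showed up as “/dev/sdb”; yours may differ.)

lsblk -o NAME,SIZE,MODEL
dmesg | tail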

Step 1: Linux RAID

After connecting the disk to the VM, I first checked the partition table.

root@mint:~# fdisk -l /dev/sdb
Disk /dev/sdb: 2,75 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: ASM1153E        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 530DE175-A149-4F4C-9E70-24822622CD4B

Device       Start        End    Sectors  Size Type
/dev/sdb1      256    4980735    4980480  2,4G Linux RAID
/dev/sdb2  4980736    9175039    4194304    2G Linux RAID
/dev/sdb5  9453280 5860519007 5851065728  2,7T Linux RAID

I could see three partitions, each with a software RAID on it.

To access these software RAID partitions, I used the “mdadm” tool to automatically scan and assemble them.
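
If “mdadm” is not installed yet, it is available in the standard repositories. Assuming a Debian-based distribution like the Mint machine used here, this also pulls in the LVM tools needed later:

apt install mdadm lvm2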

mdadm --assemble --scan

After that, I checked what had been found.

root@mint:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sdb5[1]
      2925531648 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

I could see that there was now a device called “/dev/md2” with the content of the software RAID from partition “/dev/sdb5“. Sadly, the first two RAID partitions (“/dev/sdb1” and “/dev/sdb2“) could not be detected automatically. The reason is that they use version 0.90 of the Linux RAID metadata format, which stores the superblock in the byte order of the host CPU and therefore cannot simply be moved between big-endian and little-endian systems. (My Synology has an ARM CPU, my laptop an x86_64 CPU.) Hence the superblock appears in reverse byte order from my laptop’s point of view. You can examine this problem with the following command.

root@mint:~# mdadm --examine /dev/sdb1 -e 0
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got fc4e2ba9)

As you can see, the superblock magic is in the wrong byte order: “fc4e2ba9” instead of “a92b4efc“.
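
Just to illustrate: reversing the bytes of the expected magic “a92b4efc” yields exactly the value mdadm reported. A small shell demonstration, not part of the recovery itself:

printf a92b4efc | fold -w2 | tac | tr -d '\n'; echo    # prints fc4e2ba9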

If you want to convert the superblock to the native byte order, which changes it permanently on disk, you can do so while assembling the RAID with the following “mdadm” command.

root@mint:~# mdadm --assemble --run /dev/md0 --update=byteorder /dev/sdb1
mdadm: /dev/md0 has been started with 1 drive (out of 2).

After that, another RAID shows up.

root@mint:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdb1[0]
      2490176 blocks [2/1] [U_]
      
md2 : active (auto-read-only) raid1 sdb5[0]
      2925531648 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

Looking at the three partitions, only the last one is worth examining: the first holds the Synology operating system and the second is the swap partition. Since I am after my data, I focus on the last one. It uses version 1.2 of the Linux RAID metadata format, which is endian-independent, so it could be read without any conversion. Let’s check the details of this RAID. A more detailed view of the device can be shown with “mdadm -D“.

root@mint:~# mdadm -D /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Wed Dec 10 19:00:01 2014
        Raid Level : raid1
        Array Size : 2925531648 (2790.00 GiB 2995.74 GB)
     Used Dev Size : 2925531648 (2790.00 GiB 2995.74 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Mon Dec 13 21:42:25 2021
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : DiskStation:2
              UUID : 7a666d15:b462337d:4157d452:6bdebe0c
            Events : 116

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5

I can see that this is a software RAID 1 in a degraded state because one disk is missing. That is of course expected: I had two disks in my Synology and only one of them is connected to my machine. But nevertheless the RAID is functional. Because it is RAID. 😉

Step 2: LVM

The next step is to check what is on this new device “/dev/md2“.

root@mint:~# lsblk -f /dev/md2 
NAME        FSTYPE      LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINT
md2         LVM2_member             oZYriu-pycS-JrPa-whcV-kVPt-Isio-WUeSpM                
└─vg1000-lv ext4        1.42.6-5004 73b3fc38-dcec-4ef8-814a-f088c05226fb

Luckily, I could already see the next two steps here. First, the device is an LVM physical volume. Second, I can already see the name of the logical volume and the filesystem on it. But to walk through it step by step, I first want to see some information about the physical volume of this LVM setup.

root@mint:~# pvdisplay /dev/md2
  WARNING: PV /dev/md2 in VG vg1000 is using an old PV header, modify the VG to update.
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               2,72 TiB / not usable 4,50 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              714240
  Free PE               0
  Allocated PE          714240
  PV UUID               oZYriu-pycS-JrPa-whcV-kVPt-Isio-WUeSpM
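
By the way, the volume group that this physical volume belongs to can also be inspected as a whole with “vgdisplay”:

vgdisplay vg1000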

I see that there is a volume group called “vg1000“. Next, I want to see information about the logical volumes in this volume group.

root@mint:~# lvdisplay vg1000
  WARNING: PV /dev/md2 in VG vg1000 is using an old PV header, modify the VG to update.
  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                2b3SnQ-mF0C-GTpM-hzLa-WMll-P7vc-FHTUZ3
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                2,72 TiB
  Current LE             714240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

So I have a logical volume device called “/dev/vg1000/lv“. Let’s check the filesystem on it.

root@mint:~# lsblk -f /dev/vg1000/lv
NAME      FSTYPE LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINT
vg1000-lv ext4   1.42.6-5004 73b3fc38-dcec-4ef8-814a-f088c05226fb   
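
On some systems, the volume group is not activated automatically after assembling the RAID. If “/dev/vg1000/lv” is missing for you at this point, the standard LVM commands can bring it online:

vgscan
vgchange -ay vg1000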

Step 3: Mount the volume

Now I have a device name and know the filesystem on it. The next step is to mount it.

mount /dev/vg1000/lv /mnt
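
If you only want to rescue files, it is safer to mount the filesystem read-only so nothing on the disk gets changed:

mount -o ro /dev/vg1000/lv /mnt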

Step 4: Decryption

With “ls /mnt” I can now see the shared folders I configured on my Synology. At this point I could already access my data, if I hadn’t encrypted my shared folders. With encryption enabled, I only see scrambled file and directory names, and of course the data in the files is encrypted as well. Synology uses eCryptfs for encryption. I found a very good write-up about it and how to manually decrypt your files here ->
https://blog.elcomsoft.com/2019/11/synology-nas-encryption-forensic-analysis-of-synology-nas-devices/

In short:

First, I created another mount point.

mkdir /mnt2

After that, I used eCryptfs to access my data.

The following command asks for the encryption password and mounts the decrypted view of the folder to “/mnt2“.

mount -t ecryptfs -o ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=no,ecryptfs_enable_filename_crypto=yes /mnt/@<MY-SHARED-FOLDER>@/ /mnt2

Now I could finally access my data.
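
When you are done, it is cleanest to unmount everything and deactivate the volumes and arrays again (the names are the ones from my session above):

umount /mnt2
umount /mnt
vgchange -an vg1000
mdadm --stop /dev/md0 /dev/md2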


Edit 1/9/2022:

As an addition to the above steps, I discovered something you might need to know. In my case I did not only replace the broken disk; I took the chance to replace both disks with bigger ones. So I first replaced the broken one, let the Synology synchronize the contents of the disks, and then replaced the second (still good) one.

After everything was synchronized again, I could grow the storage pool in the Synology to use the additional space on the new disks.

When I SSH into the Synology and look at the partition table, “/proc/mdstat”, and the lvdisplay output, I can see that the Synology OS did not grow the existing partitions. It created new partitions, created a new RAID on them (this time with metadata version 1.2), and then extended the logical volume onto this new md device.

So, if you went the same way as me and replaced your disks with bigger ones in the past, it gets more complicated. You will not only have to do the steps described above; you also have to assemble partitions with different RAID metadata versions into multiple md devices and then bring online the logical volume that spans these devices, as sketched below.
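
In that scenario the commands look roughly like this. The partition and md device names below are made up for illustration and will differ on your disks; check “fdisk -l” and “mdadm --examine” first:

mdadm --assemble --run /dev/md2 /dev/sdb5
mdadm --assemble --run /dev/md3 /dev/sdb6
pvscan
vgchange -ay vg1000
mount /dev/vg1000/lv /mnt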

Best of luck!
