
[SOLVED] ZFS unmountable although healthy

Hello,
I have a bit of a problem with ZFS.
I created a zpool on LUKS-encrypted drives and now I can't get the ZFS filesystem mounted after a reboot.
When importing the zpool, explicitly mounting it, or changing certain properties, the command just reports:

Quote

filesystem 'Data' can not be mounted due to error 1
cannot mount 'Data': Invalid argument

I had created a smaller test zpool beforehand and verified that my setup works without problems (storing data, exporting, importing, re-mounting the encrypted drives after reboots). I then copied all of my data from my mass storage to the new pool and sorted most of it. I still have most of the original data on the source drive, but I spent half a day sorting, deleting and moving data, and it would be a pain to do all of that again, including creating the zpool, getting the UUIDs, writing mount/unmount scripts and re-downloading lost media files.
What I did, in the following order, before the problem occurred:

  1. Creating LUKS-formatted drives
  2. Creating a mirrored zpool on both drives (automatically mounted at /Data)
  3. copying data from various sources to it
  4. sorting some data with a file manager
  5. unmounting the zpool
  6. exporting the zpool
  7. removing the LUKS encryption mappings (luksClose)
  8. reboot
  9. opening the encryption mappings (luksOpen)
  10. importing the zpool
  11. noticing the problem with mounting
  12. zpool scrub
  13. checking zpool status and properties (e.g. the canmount property)
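For reference, the steps above boil down to roughly the following (a sketch, not my exact commands — the device paths /dev/sdb and /dev/sdc are placeholders for the real drives):

```shell
# 1-2. Create the LUKS containers and open them as /dev/mapper/Data1 and Data2
cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup luksOpen /dev/sdb Data1
cryptsetup luksOpen /dev/sdc Data2

# 2. Create the mirrored pool on the mappings (mounts automatically at /Data)
zpool create Data mirror /dev/mapper/Data1 /dev/mapper/Data2

# 5-7. Tear down before reboot
zfs unmount Data
zpool export Data
cryptsetup luksClose Data1
cryptsetup luksClose Data2

# 9-10. After reboot: open the mappings again and import the pool
cryptsetup luksOpen /dev/sdb Data1
cryptsetup luksOpen /dev/sdc Data2
zpool import -d /dev/mapper Data   # this is where the mount error appears
```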

Other troubleshooting steps I took:

  - booting a live Linux ISO, but failing to load the ZFS modules after installing them
  - reducing fstab to just mounting root and /boot
  - upgrading the zpool (already up-to-date)
  - checking dmesg (no relevant info found)
  - stracing the mount command (problem not found)
Current zpool setup/status:

# sudo zpool list -v
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Data      3.62T  2.39T  1.24T        -    40%    65%  1.00x  ONLINE  -
  mirror  3.62T  2.39T  1.24T        -    40%    65%
    Data1    -      -      -         -      -      -
    Data2    -      -      -         -      -      -
# zpool status -v
  pool: Data
 state: ONLINE
  scan: scrub repaired 0 in 5h43m with 0 errors on Mon May 23 05:04:47 2016
config:

    NAME        STATE     READ WRITE CKSUM
    Data        ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        Data1   ONLINE       0     0     0
        Data2   ONLINE       0     0     0

As you can see, there are no signs of corruption, errors or any other reported problems — just the inability to mount the filesystem.

As both "zpool scrub" and "zdb Data" complete successfully (and the latter outputs file information), I think there is nothing wrong with the zpool and the data is readable. I'm probably just doing something wrong, but I have no idea what, as I could mount multiple test zpools with the same name and the same mount/unmount scripts without issues.
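Roughly, these are the checks that convinced me the data itself is intact (the last line is an assumption about what the strace showed, based on the "Invalid argument" message):

```shell
# Scrub completes with no errors
zpool scrub Data
zpool status -v Data

# zdb can walk the pool, so the on-disk data is readable
zdb Data

# The dataset properties look sane: canmount is on and a mountpoint is set
zfs get canmount,mountpoint,mounted Data

# Mounting explicitly still fails with the same error
zfs mount Data
dmesg | tail                           # nothing relevant logged
strace -f zfs mount Data 2>&1 | tail   # the mount syscall fails with EINVAL
```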

Maybe one of you can find the problem; here is some more information:
Setup: Linux 4.5.4-1-ARCH, system specs


Complete log of what I've done, plus the output of various status commands, zpool history, strace output and package versions:
Pastebin

Guide I followed (mostly): ZFS on Linux with LUKS encrypted disks | make then make install

Any help is appreciated!

 

Edit:

Got the zpool working and mountable under Arch without errors now, with downgraded ZFS packages and a downgraded Linux kernel (LTS versions).
I'm going to file a bug report about this on the ZFS GitHub page, though.
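For anyone hitting the same thing, the downgrade looked roughly like this (a sketch only — the exact package names and versions depend on your repo/cache, and the ZFS packages here are assumed to come from the archzfs repository):

```shell
# Switch to the LTS kernel
pacman -S linux-lts linux-lts-headers

# Install the matching ZFS packages built against the LTS kernel
pacman -S zfs-linux-lts spl-linux-lts

# Rebuild the initramfs and reboot into the LTS kernel
mkinitcpio -p linux-lts
reboot
```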

[more info](https://www.reddit.com/r/zfs/comments/4kzksr/zpool_although_healthy_not_mountable/)

Edited by tobben
Solved