namAehT's Achievements

  1. I just keep adding storage and have become a frequent user of /r/DataHoarder. My recently expanded 38TB pool is sitting at the 50% mark and I'm already planning to buy more 8TB Easystores or MyBooks when they go on sale. It's become an addiction... I didn't change the default blocksize since there really isn't a point to it. ZFS automagically decides the best block size for a given file as you're writing it, so a multi-gig file gets written in 1MB blocks while a bunch of 1k text files use small blocks. At least that's my understanding of it. Since (nearly) everything on this dataset is over 1MB, I'm assuming all the blocks on it are 1MB for any approximations.
  2. The biggest problem for me is that all my data (Movies & TV) is already compressed, so I'd only get a compression ratio of about 1.02 if I enabled it.
  3. That's the same conclusion I reached. I'll probably use dedupe when I get around to building an offsite backup server, just to cut down on costs and increase capacity.
  4. 48GB, but dedupe was (in theory) using something like 2GB (duplicate entries * size in core + unique entries * size in core), so RAM size wasn't the bottleneck. My guess is that it takes time to hash each block, look through the dedupe table for a matching entry and, if it finds a match, verify the data really is a duplicate rather than a hash collision. Moving a 50GB file means hashing and checking ~50,000 blocks against the dedupe table (rough numbers in the first sketch after this list); even if the whole table is in RAM it can only process so fast. EDIT: Post to back this up.
  5. Rather than just enabling it on my dataset, I created a test one and copied everything over to it. It was all fine until about 14TB (10TB unique) had been written, then my write performance dropped to ~20MB/s. I deleted the test dataset and everything went back to normal. I didn't want to go the hard link route because my library is constantly changing (Radarr & Sonarr), and when I eventually stop seeding I don't want my files to vanish. I've bought refurb drives from GoHardDrive before and have had no problems.
  6. I had only ever heard the "1GB per 1TB" advice, never knew that the ZFS devs think that's bullshit. The FreeNAS Guide doesn't even recommend it as strongly anymore. BTW, as per this line in the ZFS on Linux GitHub, the ARC metadata limit (which is where the dedupe table has to fit) is actually allowed to use 75% of the ARC limit (i.e. of all memory), rather than the 12.5% the ZFS dev suggested (don't know when that changed). On FreeBSD (or at least FreeNAS) it is 25% of the ARC limit. So if you want the max RAM used by the dedupe table, it's simply [Max Pool Size] / [Block Size] * 320 bytes (the second sketch after this list runs the numbers). Gonna attempt to dedupe my Media/Seeding dataset.
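
A rough back-of-the-envelope sketch (Python) of the math behind #4: how many blocks a 50GB move has to hash and look up, and the in-core dedupe-table estimate from that formula. The 1MB block size and 320 bytes per entry are assumptions carried over from #1 and #6, the entry counts in the demo are made up, and the function names are purely illustrative; `zpool status -D` is what actually reports the per-entry "size in core", which can come out smaller than 320 bytes.

# Sketch of #4: one hash + dedupe-table (DDT) lookup per block when a file
# is written, and the DDT's RAM footprint from the formula in that post.
# Assumes ~1MB blocks and a flat 320 bytes per entry (rule of thumb, not
# the measured in-core size).

GiB = 1024 ** 3
MiB = 1024 ** 2

def blocks_in_file(file_bytes, block_bytes=1 * MiB):
    """Blocks that must be hashed and looked up in the DDT for one file
    (ceiling division, assuming full equal-sized blocks)."""
    return -(-file_bytes // block_bytes)

def ddt_ram(unique_entries, duplicate_entries, bytes_per_entry=320):
    """In-core DDT estimate: (duplicate entries + unique entries) * per-entry size."""
    return (unique_entries + duplicate_entries) * bytes_per_entry

# ~51,200 lookups just to move a 50GB file onto the deduped dataset:
print(f"{blocks_in_file(50 * GiB):,} blocks to hash and look up")

# Made-up entry counts, just to show the formula in use:
print(f"{ddt_ram(10_000_000, 4_000_000) / GiB:.1f} GiB of DDT for 14M entries")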
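
And a worked version of the estimate in #6, again in Python: worst-case dedupe-table size for a pool versus the ARC metadata limit. The pool and RAM sizes are just examples taken from the other posts (the 38TB pool in #1, the 48GB of RAM in #4), the 75%/25% fractions are the ones quoted above, and the helper names are illustrative, not actual ZFS tunables.

# Sketch of #6: max DDT RAM = [pool size] / [block size] * 320 bytes,
# which has to fit inside the ARC metadata limit (~75% of ARC on ZFS on
# Linux, ~25% on FreeBSD/FreeNAS, per the post).

TiB = 1024 ** 4
GiB = 1024 ** 3
MiB = 1024 ** 2

def max_ddt_bytes(pool_bytes, block_bytes=1 * MiB, bytes_per_entry=320):
    """Worst case: every block in the pool gets its own DDT entry."""
    return pool_bytes // block_bytes * bytes_per_entry

def arc_meta_limit(ram_bytes, fraction=0.75):
    """Portion of the ARC allowed to hold metadata (0.75 on Linux,
    0.25 on FreeBSD/FreeNAS, per the post)."""
    return ram_bytes * fraction

pool = 38 * TiB   # example: the 38TB pool
ram = 48 * GiB    # example: 48GB of RAM

# ~11.9 GiB worst-case DDT vs a ~36 GiB metadata limit, so it fits:
print(f"max DDT: {max_ddt_bytes(pool) / GiB:.1f} GiB, "
      f"metadata limit: {arc_meta_limit(ram) / GiB:.0f} GiB")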