
Setting up TrueNAS Core

Anthony01

Hi,

I recently asked for help on raid setups, thanks again for all the useful responses!

 

I have decided to go with a NAS running TrueNAS Core and just wanted to confirm my understanding of some of the setup.

I'm running RAIDZ2 via a Fujitsu D2607-A21 8i controller flashed to IT mode (passthrough HBA).

I have set up alert settings to email me critical/warning/info and alert messages. I assume these work as expected and any drive issues will be emailed to me? Will I also get scrub/S.M.A.R.T. results via email?

If my TrueNAS boot SSD fails, I have a backup of my config, so I just replace the SSD, reinstall TrueNAS, import the config, and I'm back up and running with my pools intact?

If my HBA fails, can I replace it with any other passthrough HBA, or does TrueNAS tie the HDD locations to that specific HBA?

I've set up S.M.A.R.T. and scrub checks; is there anything else I should set up to ensure the data is safe?

If I want to add more drives to the pool, am I right in understanding that I can mix NAS and standard SATA drives? Do the disk speeds have to be the same, or can they differ?

What UPS unit do you recommend, and how do I link it to TrueNAS so it can safely stop a resilver in the event of power loss?

The only issue I'm having is transfer speeds. Again, these forums helped previously: when I was running RAID through the controller I was getting slow speeds, but that turned out to be the cache on the card not being enabled.

I'm currently transferring approximately 8 TB over a 10 Gb connection. I started off at around 170 MB/s (I don't have any cache SSDs, just HDDs) and after about 10 minutes this dropped to around 25-35 MB/s. What could be causing this? I'm not sure whether cache is enabled on the HBA, or whether you can even use the card's cache now that it's in IT mode; I can't seem to access any firmware on it during boot. If I pause the transfer and start it again a few minutes later, I get good transfer speeds until it drops off again shortly after.

 

I have just run an iperf test and am getting 2.31 Gbit/s (roughly 290 MB/s), using two Emulex OneConnect OCe 1102-N-X cards. I'm not sure why the link isn't faster, but that is still more than enough for a decent transfer speed.

I have tried copying the files locally on the Windows PC that is transferring to the NAS and get a consistent 100 MB/s.

Pool is set to lz4 compression, RAIDZ2, currently with 4x 8 TB drives (more to be added)
checksum on
ACL mode passthrough
sync standard
atime off

Once again, I appreciate your time reading this and any responses.


47 minutes ago, Anthony01 said:

I have set up alert settings to email me critical/warning/info and alert messages. I assume these work as expected and any drive issues will be emailed to me? Will I also get scrub/S.M.A.R.T. results via email?

Just pull a drive and see if you get an email. 😉

48 minutes ago, Anthony01 said:

If my TrueNAS boot SSD fails, I have a backup of my config, so I just replace the SSD, reinstall TrueNAS, import the config, and I'm back up and running with my pools intact?

I don't think so; if possible, you should add a second SSD to your boot pool and mirror them.

49 minutes ago, Anthony01 said:

If my HBA fails, can I replace it with any other passthrough HBA, or does TrueNAS tie the HDD locations to that specific HBA?

You should be able to just replace it, but I recommend replacing it with at least the same type.

50 minutes ago, Anthony01 said:

I've set up S.M.A.R.T. and scrub checks; is there anything else I should set up to ensure the data is safe?

Not really, you should be OK with the RAIDZ2.

50 minutes ago, Anthony01 said:

If I want to add more drives to the pool, am I right in understanding that I can mix NAS and standard SATA drives? Do the disk speeds have to be the same, or can they differ?

I highly recommend using the same drives, but you don't have to. As long as the drive capacities match you should be fine.

51 minutes ago, Anthony01 said:

What UPS unit do you recommend, and how do I link it to TrueNAS so it can safely stop a resilver in the event of power loss?

You cannot do this; when the UPS loses power, the NAS won't notice until the UPS battery runs out. Maybe if you get an APC with a network interface you can script something, but I doubt anything like this exists out of the box.
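If you did want to script it, a rough sketch of the idea using the NUT tools (the UPS name, grace period, and shutdown command are placeholders/assumptions, not a tested setup):

import subprocess, time

UPS = "ups@localhost"      # placeholder NUT UPS name; list real ones with `upsc -l`
GRACE_SECONDS = 120        # how long to tolerate running on battery before shutting down

def on_battery() -> bool:
    # `upsc <ups> ups.status` prints e.g. "OL" (online) or "OB DISCHRG" (on battery)
    out = subprocess.run(["upsc", UPS, "ups.status"], capture_output=True, text=True).stdout
    return "OB" in out

battery_since = None
while True:
    if on_battery():
        battery_since = battery_since or time.time()
        if time.time() - battery_since > GRACE_SECONDS:
            # A clean shutdown is enough; ZFS resumes an interrupted resilver on the next boot.
            subprocess.run(["shutdown", "-p", "now"])
            break
    else:
        battery_since = None
    time.sleep(10)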

54 minutes ago, Anthony01 said:

The only issue I'm having is transfer speeds

Are you sure the target drive(s) you're transferring to aren't just getting saturated?
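One way to check is to time the writes yourself. A rough sketch (the share path is a placeholder) that writes incompressible data in chunks and prints a running MB/s, so you can see exactly when the rate falls off:

import os, time

# Placeholder path: run this from the Windows client against the SMB share,
# or on the NAS itself against /mnt/<pool>/... to take the network out of the picture.
TARGET = r"\\truenas\share\speedtest.bin"
CHUNK = 16 * 1024 * 1024                    # 16 MiB per write
TOTAL = 20 * 1024 * 1024 * 1024             # stop after 20 GiB

data = os.urandom(CHUNK)                    # incompressible, so lz4 can't flatter the numbers
written = 0
start = last_report = time.time()

with open(TARGET, "wb") as f:
    while written < TOTAL:
        f.write(data)
        written += CHUNK
        now = time.time()
        if now - last_report >= 5:          # report roughly every 5 seconds
            rate = written / (now - start) / 1e6
            print(f"{written / 1e9:5.1f} GB written, {rate:6.1f} MB/s average")
            last_report = now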


Thanks for the reply! 

 

Is it too late to mirror the boot drive now? I could do a one-off copy of it, since the boot drive won't change now, or does it hold parity info/setup on the boot drive?

 

If I can't set up a UPS, what happens if power is lost while the array is rebuilding: corruption, or does it just start over?

 

Sorry, what do you mean by saturated? It's just my one PC transferring to it, so with RAIDZ2 I'd have expected at least 120 MB/s copy speed, if not more?


I have done some more testing.

I plugged an SSD into the second onboard SATA slot of the Dell T320 and set up a pool the same way I did the first pool.

From a separate Windows machine with a 1 gigabit connection to the onboard LAN of the Dell (confirmed via iperf), I transferred 20 GB to the SSD pool and got the same transfer speeds of around 30 MB/s.

This rules out the HBA, the network card, and the spinning disks.


On 10/2/2022 at 1:09 AM, Anthony01 said:

RAIDZ2, currently with 4x 8 TB drives (more to be added)

Not sure why you're having write speed issues; it may be worth asking on the TrueNAS forums.

 

But, just to make sure, you know you can't add drives to the Z2 vdev later, correct? You will need to build a new vdev with its own redundancy. This is why ZFS is not great for incrementally adding more space: each vdev requires its own redundancy disks, which makes adding more space later much more expensive.



3 hours ago, LIGISTX said:

Not sure why you're having write speed issues; it may be worth asking on the TrueNAS forums.

 

But, just to make sure, you know you can't add drives to the Z2 vdev later, correct? You will need to build a new vdev with its own redundancy. This is why ZFS is not great for incrementally adding more space: each vdev requires its own redundancy disks, which makes adding more space later much more expensive.

Yes thanks. 

 

I meant adding another vdev to the pool. However, when a rebuild is needed I was planning to replace the SMR drive with a CMR drive. How bad an idea is it to run SMR and CMR in the same vdev? Or is SMR really that bad for an 8-bay vdev where, once the data has been written, it's likely not going to be looked at again until it's deleted?


5 hours ago, Anthony01 said:

Yes thanks. 

 

I meant adding another vdev to the pool. However, when a rebuild is needed I was planning to replace the SMR drive with a CMR drive. How bad an idea is it to run SMR and CMR in the same vdev? Or is SMR really that bad for an 8-bay vdev where, once the data has been written, it's likely not going to be looked at again until it's deleted?

Writing to SMR drives in a RAID array is painfully slow. And if you ever had to do a resilver, it can take an order of magnitude longer than it should… I'm talking multiple weeks. Basically, just don't run SMR drives in a NAS.

 

Also, I really recommend not adding more vdevs later. Save up now, buy a few more drives, and have a 6-drive pool with Z2 instead of 4 drives. You will double your usable space while only spending 50% more. Or better yet, get all 8 drives now; then you get 6 drives' worth of space and only lose 2 to redundancy.



8 hours ago, LIGISTX said:

Writing to SMR drives in a RAID array is painfully slow. And if you ever had to do a resilver, it can take an order of magnitude longer than it should… I'm talking multiple weeks. Basically, just don't run SMR drives in a NAS.

 

Also, I really recommend not adding more vdevs later. Save up now, buy a few more drives, and have a 6-drive pool with Z2 instead of 4 drives. You will double your usable space while only spending 50% more. Or better yet, get all 8 drives now; then you get 6 drives' worth of space and only lose 2 to redundancy.

What's wrong with multiple vdevs? Isn't that how large servers with 50+ drives do it? Or do you just mean the loss of usable space is wasteful, since it's basically mirroring?


2 hours ago, Anthony01 said:

What's wrong with multiple vdevs?

Nothing, as long as you keep in mind your pool is only as resilient as its weakest vdev. (Remember, a pool is basically a bunch of vdevs in RAID 0. If you lose one vdev, you lose the whole pool.) If you want to expand in the future, you'll want to build another z2 vdev at a minimum.

 

If you add another 4 drive z2 vdev, you'll have eight drives but only four drives' worth of usable capacity. (And those two new parity drives will only protect the new vdev, not the original vdev.)

 

If you start off with a 6-drive z2 pool, you'll have 4 drives' worth of usable space and only lose two to parity.
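The arithmetic is simple enough to sketch out (assuming plain RAIDZ2 vdevs and 8 TB drives, and ignoring metadata/padding overhead):

def raidz2_usable_tb(drives_per_vdev: int, vdevs: int, drive_tb: float = 8.0) -> float:
    # Each RAIDZ2 vdev gives (n - 2) drives of usable space; the pool stripes across vdevs.
    return (drives_per_vdev - 2) * vdevs * drive_tb

print(raidz2_usable_tb(4, 1))  # 16.0 TB - the current 4-drive Z2 vdev
print(raidz2_usable_tb(4, 2))  # 32.0 TB - add a second 4-drive Z2 vdev (8 drives total)
print(raidz2_usable_tb(6, 1))  # 32.0 TB - the same usable space from a single 6-drive Z2 vdev
print(raidz2_usable_tb(8, 1))  # 48.0 TB - one 8-drive Z2 vdev, only 2 drives lost to parity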



30 minutes ago, Needfuldoer said:

Nothing, as long as you keep in mind your pool is only as resilient as its weakest vdev. (Remember, a pool is basically a bunch of vdevs in RAID 0. If you lose one vdev, you lose the whole pool.) If you want to expand in the future, you'll want to build another z2 vdev at a minimum.

 

If you add another 4 drive z2 vdev, you'll have eight drives but only four drives' worth of usable capacity. (And those two new parity drives will only protect the new vdev, not the original vdev.)

 

If you start off with a 6-drive z2 pool, you'll have 4 drives' worth of usable space and only lose two to parity.

Thank you for clarifying.

 

Yes, so with two RAIDZ2 vdevs, each one can lose two drives before there's an issue. There's no way of getting to six drives usable without dropping to RAIDZ1 on each vdev?

 

If I had two RAIDZ1 vdevs with 4 drives each, so 3 usable on each, could I then upgrade them to RAIDZ2 and expand, without wiping the vdevs?

 

 


1 hour ago, Anthony01 said:

Yes, so with two RAIDZ2 vdevs, each one can lose two drives before there's an issue. There's no way of getting to six drives usable without dropping to RAIDZ1 on each vdev?

 

If I had two RAIDZ1 vdevs with 4 drives each, so 3 usable on each, could I then upgrade them to RAIDZ2 and expand, without wiping the vdevs?

vdevs can't be reconfigured like that. You would have to lifeboat all your data off the pool, destroy the pool, and set it back up from scratch.
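If it came to that, the "lifeboat" step usually looks something like a recursive snapshot sent into a second pool, then sent back once the new pool exists. A rough sketch ("tank" and "lifeboat" are placeholder pool names, not a tested script):

import subprocess

# Placeholder names: "tank" is the existing pool, "lifeboat" is a holding pool
# (or a dataset on another box) big enough to take everything while "tank" is rebuilt.
snap = "tank@migrate"
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

# Recursive send of the whole pool piped into a receive on the holding pool.
send = subprocess.Popen(["zfs", "send", "-R", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", "lifeboat/tank"], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()

# After verifying the copy: destroy "tank", recreate it with the layout you actually
# want, then send everything back the same way in the other direction.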



Out of interest, I just offlined one drive, wiped it, and onlined it again. The pool was about 60% full of data, but it resilvered in under 5 minutes. Not sure what to make of it being that quick?


55 minutes ago, Anthony01 said:

Out of interest, I just offlined one drive, wiped it, and onlined it again. The pool was about 60% full of data, but it resilvered in under 5 minutes. Not sure what to make of it being that quick?

I assume you didn't actually wipe it, you just did a quick format. I am a bit surprised ZFS was able to recognize the data was still on the disk, but technically it was still there, so I guess ZFS has some smarts built in for that.
 

If you actually fully wiped a drive by overwriting it with random data, the resilver would take as long as your drive takes to fill 60% of its capacity, which will vary a lot depending on drive speed, pool utilization, and size. My 4 TB drives usually take 5-10 hours to resilver overnight, when I'm not using the pool and only light activity is hitting it.
 

To make my point clear… I am just saying that if you know you will want more space, do that from the start. If you know you will want, say, 40 TB total, buy 7x 8 TB drives now in a Z2 instead of buying 4 now (which gets you 16 TB usable) and another 4 later (another 16 TB), because then you have 8 drives and only 32 TB usable. It costs a lot more to do it over time than to invest in the drives up front. That is all I'm saying.
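Put as numbers (assuming 8 TB drives throughout; the point is usable space per drive bought):

# Usable space per drive purchased, 8 TB drives assumed.
one_7_wide_z2 = (7 - 2) * 8          # 40 TB usable from 7 drives bought up front
two_4_wide_z2 = 2 * (4 - 2) * 8      # 32 TB usable from 8 drives bought over time

print(one_7_wide_z2 / 7)   # ~5.7 TB usable per drive
print(two_4_wide_z2 / 8)   # 4.0 TB usable per drive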



Agreed, thank you for the input. I have just done a deal on 5 CMR drives, so the pool will have 5 CMR and 3 SMR. Not quite perfect, but I can slowly replace the 3 SMR drives the same way I would if one failed, I guess.

 

There is a thread going on the TrueNAS forums now about the speed issues. I'll likely make a temporary pool with the 5 CMR drives and see if that fixes the speed issues, as adding the three SMR drives might impact the speed again?


27 minutes ago, Anthony01 said:

Agreed, thank you for the input. I have just done a deal on 5 CMR drives, so the pool will have 5 CMR and 3 SMR. Not quite perfect, but I can slowly replace the 3 SMR drives the same way I would if one failed, I guess.

 

There is a thread going on the TrueNAS forums now about the speed issues. I'll likely make a temporary pool with the 5 CMR drives and see if that fixes the speed issues, as adding the three SMR drives might impact the speed again?

Yes, writing to SMR is much slower, and the slowest drive in the pool will drag the entire pool down to roughly the speed it would have if all the drives were that slow (more or less, anyway). But like you said, do some tests and see how it goes.

