ZFS Best Practices

Solved by leadeater

Hello everyone,

I'm currently in the process of planning a new NAS build and I have a question regarding the optimal approach for setting up ZFS. The configuration I'm aiming for involves 15 drives, each with a capacity of 20TB. I'm seeking advice on the most effective way to arrange these drives in terms of VDEVs and similar considerations.

 

One option I'm considering is to create a 5-Wide Z2 setup, where each VDEV consists of 5 drives in a Z2 RAID configuration. This arrangement would result in approximately 161TB of usable space. Alternatively, I'm also contemplating a straightforward Z2 setup utilizing all 15 drives, which would provide 260TB of usable space.
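As a rough sanity check on those usable-space figures, here's a quick sketch (ignoring ZFS metadata and slop-space overhead, and treating TB as raw decimal terabytes, so the numbers land a bit above real-world usable space):

```python
def raidz_usable_tb(total_disks, vdev_width, parity, disk_tb=20):
    """Rough usable capacity: data disks per vdev * vdev count * disk size.
    Ignores ZFS metadata, padding, and slop-space overhead."""
    vdevs = total_disks // vdev_width
    data_disks_per_vdev = vdev_width - parity
    return vdevs * data_disks_per_vdev * disk_tb

print(raidz_usable_tb(15, 5, 2))   # 3x 5-wide Z2  -> 180 (raw TB; ~161 after overhead/TiB)
print(raidz_usable_tb(15, 15, 2))  # 1x 15-wide Z2 -> 260
```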

 

While I'm inclined towards maximizing usable space, I'm also interested in adhering to best practices for VDEV implementation. If there's an approach that aligns better with these best practices, I'm open to considering it. Any insights or recommendations you can provide would be greatly appreciated.

 

edit: for clarity


5 hours ago, Electronics Wizardy said:

I'd do one big RAIDZ2 here unless you need more speed

This, or potentially 2x 8-wide Z2 (which is 16 disks..... not 15).

 

7 hours ago, ignorantForager said:

Any insights or recommendations you can provide would be greatly appreciated.

It really, really depends on what this will be used for, so lacking the use case we can't give you much in the way of recommendations.

 

What will this be used for, what speed is the networking, etc. 



Posted by leadeater:

17 hours ago, ignorantForager said:

 

One option I'm considering is to create a 5-Wide Z2 setup, where each VDEV consists of 5 drives in a Z2 RAID configuration.

5 disks in dual parity is hugely inefficient in usable capacity and performance, definitely would not do that.

 

Quote

No other operation can take place on that vdev until all the disks have finished reading from or writing to those sectors. Thus, IOPS on a RAIDZ vdev will be that of a single disk. While the number of IOPS is limited, the streaming speeds (both read and write) will scale with the number of data disks. Each disk needs to be synchronized in its operations, but each disk is still reading/writing unique data and will thus add to the streaming speeds, minus the parity level as reading/writing this data doesn’t add anything new to the data stream.

https://www.ixsystems.com/wp-content/uploads/2022/02/ZFS_Storage_Pool_Layout_White_Paper_February_2022.pdf

 

Using only 5 disks in a RAIDZ2 limits sequential throughput to that of 3 data disks per vdev, so with three such vdevs the best-case theoretical sequential throughput is that of 9 disks, meaning you are wasting 6 disks' worth of capacity and performance to gain only 3x the IOPS of a single vdev. If you want IOPS then mirror vdevs are the way to go; you'd get 7x or 8x (with 16 disks) the IOPS that way.
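The scaling rule in that quote (IOPS per vdev, streaming per data disk) can be sketched as a toy model. The per-disk figures are assumptions chosen to match the white paper's example numbers, not measurements:

```python
DISK_IOPS = 250   # assumed per-disk IOPS (matches the white paper's examples)
DISK_MBPS = 100   # assumed per-disk streaming speed

def raidz_perf(vdevs, width, parity):
    """Toy RAIDZ model: IOPS scale with vdev count,
    streaming throughput scales with total data disks."""
    iops = vdevs * DISK_IOPS
    stream_mbps = vdevs * (width - parity) * DISK_MBPS
    return iops, stream_mbps

print(raidz_perf(3, 5, 2))   # 3x 5-wide Z2  -> (750, 900)
print(raidz_perf(2, 7, 2))   # 2x 7-wide Z2  -> (500, 1000)
print(raidz_perf(1, 15, 2))  # 1x 15-wide Z2 -> (250, 1300)
```

These reproduce the figures quoted below from the iXsystems paper, which is why 5-wide Z2 looks so poor: each extra vdev buys 250 IOPS but only 300 MB/s of streaming.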

 

Quote

1x 12-wide Z3:
• Read IOPS: 250
• Write IOPS: 250
• Streaming read speed: 900 MB/s
• Streaming write speed: 900 MB/s
• Storage space efficiency: 75% (54 TB)

• Fault tolerance: 3

 

Quote

2x 6-wide Z2:
• Read IOPS: 500
• Write IOPS: 500
• Streaming read speed: 800 MB/s
• Streaming write speed: 800 MB/s
• Storage space efficiency: 66.7% (48 TB)
• Fault tolerance: 2 per vdev, 4 total

https://www.ixsystems.com/wp-content/uploads/2022/02/ZFS_Storage_Pool_Layout_White_Paper_February_2022.pdf

 

If you can't easily use 16 disks total, then go for 14 disks in 2x 7-wide RAIDZ2. You'll get a better ratio of performance to capacity, with a small reduction in IOPS that likely won't be noticed due to caching.

 

Below is what you are proposing:

3x 5-wide Z2:
• Read IOPS: 750
• Write IOPS: 750
• Streaming read speed: 900 MB/s
• Streaming write speed: 900 MB/s
• Storage space efficiency: 60%
• Fault tolerance: 2 per vdev, 6 total

 

Using mirror vdevs instead:

7x 2-wide mirror or 8x 2-wide mirror:
• Read IOPS: 1750 / 2000
• Write IOPS: 1750 / 2000
• Streaming read speed: 700 / 800 MB/s
• Streaming write speed: 700 / 800 MB/s
• Storage space efficiency: 50%
• Fault tolerance: 7 / 8

 

And my suggestion

2x 7-wide Z2 or 2x 8-wide Z2:
• Read IOPS: 500
• Write IOPS: 500
• Streaming read speed: 1000 / 1200 MB/s
• Streaming write speed: 1000 / 1200 MB/s
• Storage space efficiency: 71.4% / 75%
• Fault tolerance: 2 per vdev, 4 total

 

And @Electronics Wizardy's suggestion:

1x 15-wide Z2:
• Read IOPS: 250
• Write IOPS: 250
• Streaming read speed: 1300 MB/s
• Streaming write speed: 1300 MB/s
• Storage space efficiency: 86.67%
• Fault tolerance: 2 total

 

If you are going to throw away that much capacity to parity (as in the 3x 5-wide Z2), you may as well just go mirror vdevs: a minor reduction in throughput for a huge gain in IOPS with similar usable capacity.
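Pulling the layout options above into one table (same toy per-disk assumptions as the white paper's examples: 250 IOPS, 100 MB/s, and 20 TB drives; `layout_stats` is a hypothetical helper, not a ZFS tool):

```python
DISK_IOPS, DISK_MBPS, DISK_TB = 250, 100, 20  # assumed per-disk figures

def layout_stats(vdevs, width, parity):
    """Return (IOPS, streaming MB/s, usable TB, space efficiency)
    under the simple model: IOPS per vdev, streaming per data disk."""
    data = vdevs * (width - parity)      # data disks across the pool
    return (vdevs * DISK_IOPS,
            data * DISK_MBPS,
            data * DISK_TB,
            data / (vdevs * width))

layouts = {
    "3x 5-wide Z2":     (3, 5, 2),
    "7x 2-wide mirror": (7, 2, 1),       # mirror = 1 data disk per vdev
    "2x 7-wide Z2":     (2, 7, 2),
    "2x 8-wide Z2":     (2, 8, 2),
    "1x 15-wide Z2":    (1, 15, 2),
}
for name, shape in layouts.items():
    iops, mbps, tb, eff = layout_stats(*shape)
    print(f"{name:17} {iops:5} IOPS  {mbps:5} MB/s  {tb:4} TB  {eff:.1%}")
```

The usable-TB column is raw decimal terabytes before ZFS metadata and slop-space overhead, so real usable space will come in a bit lower.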


5 hours ago, leadeater said:

5 disks in dual parity is hugely inefficient in usable capacity and performance, definitely would not do that.

 


Wow. This was exactly the type of info I was looking for. Thank you! Looks like if I can fit a 16th drive in there, I'll probably try and go that way. And if not, then I'll just go with one big 15-wide Z2.

