
My server consists of a Ryzen 5 3600, an Asus B550 Prime motherboard, 64 GB of RAM, two 2 TB M.2 SSDs on the motherboard, and 12 HDDs in total: 4 running off the onboard SATA and 8 off an LSI SAS2008.  I just finished installing the two new 2 TB M.2 cache drives and configuring them as a ZFS mirror cache pool.  I have an array of 8 drives with one parity drive.  Now that my new cache pool is up and running, I'm planning to swap out my two oldest 2 TB array disks, install two 12 TB disks in their place, and reduce the number of disks in the array.  After that, I'm planning to create another raidz1 pool from four 2 TB SATA SSDs, for a total of 8 HDDs, 4 SSDs, and 2 M.2 drives.
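(Unraid normally creates pools through its web UI, but for anyone curious, the equivalent ZFS commands for a setup like this would look roughly as follows. This is a sketch only; the pool names and device paths are hypothetical, not the poster's actual configuration.)

```shell
# Hypothetical device names -- substitute your own
# (find stable names with: ls /dev/disk/by-id/)

# Mirrored pool from the two 2 TB NVMe drives:
zpool create cache_pool mirror /dev/nvme0n1 /dev/nvme1n1

# Planned raidz1 pool from four 2 TB SATA SSDs
# (single-parity: survives the loss of any one drive):
zpool create ssd_pool raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```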

 

Years ago, when I built the system, I wasn't paying much attention to where and how I was hooking things up.  Currently the parity drive and 2 other array disks run off the motherboard controller, and the rest run off the LSI controller.  Am I leaving performance on the table by having these drives split across two controllers?

 

Another question I've always had concerns operations like moving files from one array disk to another, or parity checks.  Does the LSI controller move the data directly between the drives, or does it have to pass up and then back down the PCIe lanes?



As far as I'm aware, Unraid's array is implemented entirely in software, so unless the LSI controller is doing driver magic I'm not aware of, I don't think it matters which drive is on which controller.


What performance are you seeing? 

 

Generally, Unraid isn't known for the best performance on its parity array due to its design, but mixing drive controllers won't matter. It will typically fill gigabit, but don't expect much more performance-wise.


On 2/8/2025 at 11:37 AM, impalaguy1230 said:

zfs mirror

 

On 2/8/2025 at 11:37 AM, impalaguy1230 said:

raidz1


so you are using ZFS? If so, there is no such thing as “a parity drive”, and there also isn’t really a “cache”. ZFS doesn’t work this way. Parity is distributed across all drives simultaneously, and the only “cache” for ZFS would be L2ARC or SLOG (which I pretty much guarantee you don’t need either). 
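(For illustration, if an L2ARC or SLOG really were wanted, they're attached to an existing pool rather than built as a separate "cache pool". A sketch, with a hypothetical pool name `tank` and hypothetical devices:)

```shell
# L2ARC (read cache) -- losing this device is harmless,
# so a single unmirrored drive is fine:
zpool add tank cache /dev/nvme0n1

# SLOG (sync-write log) -- often mirrored, because losing it
# during a crash can lose in-flight synchronous writes:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```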
 

In normal (non-ZFS) Unraid, you can use SSDs as a write cache in front of the hard drives, but that isn't a thing in ZFS. So I'm somewhat confused about what your setup actually is, and more importantly, what your goal is.
 

Something to remember… A single hard drive can saturate gigabit networking.
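(The arithmetic behind that claim is straightforward; HDD throughput below is an assumed ballpark figure, not a measurement:)

```shell
# Gigabit Ethernet is 1000 megabits per second; divide by 8 bits per byte:
echo "$((1000 / 8)) MB/s"   # line rate, before protocol overhead

# A modern 3.5" HDD sustains very roughly 150-280 MB/s sequential
# (assumed ballpark), so even a single drive can fill a gigabit link.
```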


