Looking for input on a 40Gb NAS build.

I'm thinking of building a RAID 6 NAS around Mellanox 40Gb QSFP NICs and wanted to know if my configuration is possible, and to learn about what I might not be considering. I'll only be using it for file transfer, storage, and backup, and would like to use CrashPlan for extra security. No virtual machines, media streaming, or video editing; just backup and file transfer for my business, which is 1 user.

 

This is the hardware I'm considering:

Motherboard - Gigabyte C246 WU4

CPU - Intel Xeon E-2124

PSU - Corsair HX750

Networking - Mellanox ConnectX-3 40Gb Dual Port, connected to two PCs

HDD - IronWolf NAS Pro 10TB or Exos (up to 16)

Case - Fractal Design Define 7 XL

 

It's important to maximize read/write speeds, but I'm also trying to do it on a budget, which is flexible. Can anyone recommend an OS I should be using? Will HDDs in RAID 6 actually make use of the 40Gb speed when transferring 500GB folders? I'll add RAM based on the OS requirements.

6 minutes ago, 1045666 said:

Will HDDs in RAID 6 actually make use of the 40Gb speed when transferring 500GB folders? I'll add RAM based on the OS requirements.

I have 10Gb to my NAS, and I seldom hit 40% of that (to a single RAIDZ2 (aka RAID6) 4-disk array, and then only when I pull multiple large files simultaneously). Maybe if you striped two or three five-disk arrays together you might hit 10Gb. Honestly, 40Gb is overkill unless you have an array of NVMe drives on both ends.

Main System (Byarlant): Ryzen 5 1600X | Asus B350-F Strix | Corsair H80i V2 | 16GB G.Skill DDR4 3200MHz CAS-14 | XFX RX 5600 XT THICC II | Samsung 960 PRO 512GB / Samsung 970 EVO 500GB / Seagate 7200RPM 3TB | Corsair CX650M | Mellanox ConnectX-2 10G NIC | Anidees AI-07BW Case | Dell U3415W Monitor | Microsoft Modern Keyboard

 

FreeNAS Server (Veda): Core i3-4170 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 4x 10TB WD Whites / 1x Samsung PM961 128GB SSD / 1x Kingston 16GB SSD | Corsair CX430M | Mellanox ConnectX-2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives)

 

Media Center/Video Capture (Jesta): Core i7-2600 | Asus H77M-PRO | Stock Cooler | 8GB No-name DDR3 | EVGA GTX750Ti SC | Sandisk UltraII SSD 64GB / Seagate 1.5TB HDD | Corsair CX450M | Hauppauge ImpactVCB-PCIe | Syba USB3.1 Gen 2 Card | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | UMIS 256GB NVMe SSD | (all-around awesome machine)

Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Kingston 120GB SSD | Blu-ray Drive | (lived a good life, retired with honor)

 

Tablet (---): Samsung Galaxy Tab A 8" (crosses fingers)
Tablet (ReGZ): Asus T102HA (BIOS clock doesn't tick, loses time when sleep/off) (I kill tablets with disturbing regularity)

Tablet (Unicorn): Surface Pro 2 (battery will reset total capacity to current charge, leading Windows to think it's always 100% charged until it dies)

Tablet (Loto): Dell Venue 8 Pro (screen discoloration issues, wouldn't update to Windows 10)

Tablet: iPad 2 16GB (WiFi died, basically useless after that)

 

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M (?) | HooToo USB 3.0 PCIe Card | NZXT H230 Case

 

Camera: Sony ɑ7II (w/ Meike Grip) | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Performance SDXC cards


If you use HDDs, it won't saturate 40Gb. If my math is right, you'll get 14x the read speed (14 x 200MB/s), which is only 2,800MB/s, less than what a single Samsung 970 NVMe drive can do. If you use a SATA SSD array, you'll get 500MB/s x 14 (7GB/s). The only way to saturate it is NVMe (3.5GB/s each).
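To put numbers on that, here's the same back-of-envelope math as a quick Python sketch. The per-drive speeds are the rough figures assumed above, not measurements, and it assumes perfect scaling across drives:

```python
# Back-of-envelope: can an array of N drives saturate a given link?
# Per-drive sequential read speeds in MB/s are ballpark assumptions.
DRIVE_SPEEDS_MBPS = {"hdd": 200, "sata_ssd": 500, "nvme": 3500}

def array_read_speed(drive_type: str, n_drives: int) -> float:
    """Idealized aggregate read speed (MB/s), assuming perfect scaling."""
    return DRIVE_SPEEDS_MBPS[drive_type] * n_drives

def saturates(link_gbps: float, speed_mbps: float) -> bool:
    """A link carries roughly link_gbps * 125 MB/s of payload."""
    return speed_mbps >= link_gbps * 125

print(array_read_speed("hdd", 14))                  # 2800 MB/s
print(saturates(40, array_read_speed("hdd", 14)))   # False: 40Gb is ~5000 MB/s
print(saturates(40, array_read_speed("nvme", 2)))   # True
```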

Ryzen 2600 @ 4GHz | Radeon RX580 | 32GB HyperX 3200MHz | 500GB Samsung PM981a | 5TB HDD | Corsair CX450


Thanks for the feedback. That makes sense. Since the most I'll get from HDDs is less than 5GB/s, I'll use 5 or 10Gb NICs to prioritize cost and capacity for now.

 

What operating system would work best for an HDD array? I'm new to server things and have read that FreeNAS has stricter hardware requirements, but unRAID has some performance bottlenecks. Or should it be a RAID card?

2 hours ago, 1045666 said:

Thanks for the feedback. That makes sense. Since the most I'll get from HDDs is less than 5GB/s, I'll use 5 or 10Gb NICs to prioritize cost and capacity for now.

 

What operating system would work best for an HDD array? I'm new to server things and have read that FreeNAS has stricter hardware requirements, but unRAID has some performance bottlenecks. Or should it be a RAID card?

Any OS should work here, so use what you like. I'd probably just use a Linux distro like Ubuntu since that's what I like, but TrueNAS gives you a nice web interface for it. TrueNAS doesn't really need any faster hardware than other OSes do. I'd use software RAID here.

10 hours ago, Electronics Wizardy said:

Any OS should work here, so use what you like. I'd probably just use a Linux distro like Ubuntu since that's what I like, but TrueNAS gives you a nice web interface for it. TrueNAS doesn't really need any faster hardware than other OSes do. I'd use software RAID here.

If I wanted to use TrueNAS, would it require ZFS to get RAID 6 going? If it uses 1GB of memory per TB of storage, I'll have to plan for the motherboard supporting a RAM capacity above 128GB.


My advice is to get a 1TB NVMe drive for the OS and cache, then add HDDs as your storage requirements grow.

 

Use any plain Linux distro, but if your business depends on it, consider a commercially supported version. Install a web-based configuration and admin tool like Webmin; it's especially handy for building a RAID. ZFS is by no means necessary: JFS, XFS, and Btrfs will do just fine. The cache takes over some of the role behind the "1GB per TB of storage" mantra, as it's very fast; even a PCIe 3.0 drive (at 3.5GB/s) suffices for those 14 SATA drives mentioned earlier. If unsure, increase the NVMe drive to 2TB and use 1.5TB as cache, some 128GB for the OS, and the remainder as wear-levelling spare capacity.
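To make that split concrete, here's a tiny sketch of the suggested partitioning (the sizes are just the suggestion above, not hard requirements):

```python
# Partition plan for a 2TB (here: 2000GB) NVMe drive, per the suggestion above.
# Sizes are illustrative; tune cache and OS to taste.
def partition_plan(drive_gb: int = 2000, cache_gb: int = 1536, os_gb: int = 128) -> dict:
    """Split the drive into cache, OS, and wear-levelling spare capacity."""
    spare_gb = drive_gb - cache_gb - os_gb
    if spare_gb < 0:
        raise ValueError("cache + OS exceed drive capacity")
    return {"cache_gb": cache_gb, "os_gb": os_gb, "spare_gb": spare_gb}

print(partition_plan())  # {'cache_gb': 1536, 'os_gb': 128, 'spare_gb': 336}
```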

"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)

1 hour ago, Dutch_Master said:

My advice is to get a 1TB NVMe drive for the OS and cache, then add HDDs as your storage requirements grow.

Don't bother with L2ARC (ZFS cache) unless you're running production servers with significant repeated calls to the same data (e.g. a database). It's a waste.

 

I'm running FreeNAS off a 16GB SATA M.2 drive, and that is more than enough.

 

4 hours ago, 1045666 said:

If I wanted to use TrueNAS, would it require ZFS to get RAID 6 going?

I don't think you have the option to not use ZFS in TrueNAS/FreeNAS. RAID6 = RAIDZ2 in ZFS.

 

4 hours ago, 1045666 said:

If it uses 1GB of memory per TB of storage, I'll have to plan for the motherboard supporting a RAM capacity above 128GB.

Again, that rule should be relaxed when just using it as a personal NAS. I have 32GB of RAM in my server, but it was fine with 16GB (or even the 2GB I had in the prototype server). If you're using it for storage and not some mission-critical server, the cache really doesn't do much beyond the file you're currently accessing.

7 hours ago, 1045666 said:

If I wanted to use TrueNAS, would it require ZFS to get RAID 6 going? If it uses 1GB of memory per TB of storage, I'll have to plan for the motherboard supporting a RAM capacity above 128GB.

The whole 1GB/TB thing is overblown. The RAM is just being used as a cache, like with any other filesystem, so more is better, but you don't need much at all. You can get away with much less; 8GB will be plenty for it to function, and more is nice for better performance.

 

 

One other issue with 40GbE is actually getting those speeds. You're probably going to want to make sure something like SMB3 multichannel is running, or use RoCE NICs, so you're not as CPU-limited here.

On 1/23/2021 at 8:03 AM, Dutch_Master said:

consider a commercially supported version

Is FreeNAS/TrueNAS considered commercially supported? It is semi-mission-critical data, as it'll be the only redundancy for my business. I've been operating for the last 5 years without any automated backup or redundancy and have experienced one drive failure. Luckily the projects on that drive were completed and not active. So this is supposed to be the solution that finally secures that. A 2TB NVMe to split cache/OS sounds good.

 

On 1/23/2021 at 12:24 PM, Electronics Wizardy said:

One other issue with 40GbE is actually getting those speeds. You're probably going to want to make sure something like SMB3 multichannel is running, or use RoCE NICs, so you're not as CPU-limited here.

I thought the 40Gb speeds wouldn't be saturated given the limitations of HDDs.

 

After I get the system running, I'll buy the HDDs: 9x Exos 16TB. Five for 80TB of storage, two for parity, and two kept aside as cold spares, to skip the step of ordering a drive when one fails so recovery can start almost immediately.

 

I have a few more questions if it's alright:

  • Would it be better to use 10 8TB drives instead of 5 16TB drives for speed? I see the Exos speed is 261MB/s. Is the math here logical?
    • 10 8TB drives: 261MB/s x 10 = 2,610MB/s
    • 5 16TB drives: 261MB/s x 5 = 1,305MB/s
  • To get the full 2.6GB/s, assuming the above is rational, will a 10Gb connection be enough or would it need to be 20Gb? Something about that confuses me a bit, because my 1Gb connection does around 113MB/s across two computers, which another forum said is expected. One computer has 3 NVMe drives in RAID 0, and the other has a single NVMe drive. I just tested the transfer speed between these two and got 113MB/s on a gigabit connection. Unless I haven't set it up right? So would I actually need a 20Gb connection to get the 2.6GB/s from 10 HDDs?
  • Since this is my only redundancy, if I wanted to double up on the hard drives to build a RAID 1 copy of the RAID 6 array, is that possible in TrueNAS?
  • Is it also possible to use the server to back up my computer, where files deleted from my computer are moved to a 'recovery' folder on the server in real time, so I don't truly delete things by accident?

Thanks for confirming the GB-per-TB concern with RAM. I'll just grab 16 or 32GB to be safe.

1 hour ago, 1045666 said:

Something about that confuses me a bit, because my 1Gb connection does around 113MB/s across two computers, which another forum said is expected. One computer has 3 NVMe drives in RAID 0, and the other has a single NVMe drive. I just tested the transfer speed between these two and got 113MB/s on a gigabit connection.

Gigabit vs. gigabyte. HDD speeds are measured in bytes; network speeds are in bits. 1Gbps is approx. 125MBps, so 1GBps requires 10Gbps networking.
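In code form (the 0.9 efficiency factor is a rough allowance for protocol overhead, not an exact figure):

```python
# Links are quoted in bits/s; drives and file copies in bytes/s.
def gbps_to_mbps_payload(gbps: float, efficiency: float = 1.0) -> float:
    """MB/s of payload a link can carry; efficiency < 1 models overhead."""
    return gbps * 1000 / 8 * efficiency

print(gbps_to_mbps_payload(1))        # 125.0 -> the gigabit ceiling
print(gbps_to_mbps_payload(1, 0.9))   # 112.5 -> about the observed 113 MB/s
print(gbps_to_mbps_payload(10))       # 1250.0 -> still short of 2.6 GB/s
```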

 

1 hour ago, 1045666 said:

Would it be better to use 10 8TB drives instead of 5 16TB drives for speed? I see the Exos speed is 261MB/s. Is the math here logical?

  • 10 8TB drives: 261MB/s x 10 = 2,610MB/s
  • 5 16TB drives: 261MB/s x 5 = 1,305MB/s

That 261MBps is very optimistic. HDDs get slower the fuller they are (assuming they fill from the beginning). The outside of the disk is faster than the inside, and drives fill platters from outside to inside. By the time the disk is full, you'll probably be getting half of that.

 

Also, it doesn't scale perfectly with more drives (because the CPU in the NAS still has to reorder the data to be sent, as well as check hashes). IMO, 10G networking will be plenty unless you have several heavy users.
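Roughly, both effects together look like this (the 0.5 fill derate and 0.8 scaling factor are illustrative guesses, not measured values):

```python
# Derate the idealized drive math for disk fill and imperfect RAID scaling.
def realistic_array_mbps(per_drive_mbps: float, n_drives: int,
                         fill_derate: float = 0.5, scaling: float = 0.8) -> float:
    """Aggregate read speed (MB/s) after the two derating factors."""
    return per_drive_mbps * n_drives * fill_derate * scaling

print(realistic_array_mbps(261, 10))  # 1044.0 -> 10G territory, not 2610
print(realistic_array_mbps(261, 5))   # 522.0
```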

 

1 hour ago, 1045666 said:

Is it also possible to use the server to back up my computer, where files deleted from my computer are moved to a 'recovery' folder on the server in real time, so I don't truly delete things by accident?

Not exactly like that, but you can keep snapshots in ZFS that retain files for a set period of time before they're truly removed. For local disks, you'd have to back the disk up to the NAS periodically (or else tell Windows to do snapshots, which I think you can do).

 

1 hour ago, 1045666 said:

Since this is my only redundancy, if I wanted to double up on the hard drives to build a RAID 1 copy of the RAID 6 array, is that possible in TrueNAS?

RAID is not a backup. That said, you can have two RAIDZ2 arrays and periodically copy changes from one to the other.
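For reference, that periodic copy is typically done with an incremental ZFS send/receive. Here's a sketch that only builds the command strings (the pool, dataset, and snapshot names are placeholders, not anything from this thread):

```python
# Build the shell commands an incremental pool-to-pool sync might run.
# Names like "tank/projects" and "backup/projects" are hypothetical.
def zfs_sync_commands(src: str, dst: str, prev_snap: str, new_snap: str) -> list:
    """Snapshot the source, then send the delta since prev_snap to dst."""
    return [
        f"zfs snapshot {src}@{new_snap}",
        f"zfs send -i {src}@{prev_snap} {src}@{new_snap} | zfs recv -F {dst}",
    ]

for cmd in zfs_sync_commands("tank/projects", "backup/projects", "daily-01", "daily-02"):
    print(cmd)
```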

 

Look into figuring out an offsite backup if you're concerned about data redundancy (if your house burns down, it's all gone no matter how many copies you keep there).

 

1 hour ago, 1045666 said:

Is FreeNas/TrueNas considered commercially supported?

It's semi-commercial. iX Systems provides consultation and sells some hardware, but the base OS (FreeNAS/TrueNAS) is free to use.

 

1 hour ago, 1045666 said:

A 2TB NVMe to split cache/OS sounds good.

Don't. Cache isn't useful for your workload, and FreeNAS doesn't take more than 16GB.

1 hour ago, 1045666 said:

Is FreeNAS/TrueNAS considered commercially supported? It is semi-mission-critical data, as it'll be the only redundancy for my business. I've been operating for the last 5 years without any automated backup or redundancy and have experienced one drive failure. Luckily the projects on that drive were completed and not active. So this is supposed to be the solution that finally secures that. A 2TB NVMe to split cache/OS sounds good.

Not on custom hardware. They have a supported version, but you're probably looking at spending much more. It really comes down to cost vs. support, and whether you want to be the one on the line if something goes wrong. For a backup, a DIY solution is probably fine though.

 

1 hour ago, 1045666 said:

I thought the 40Gb speeds wouldn't be saturated given the limitations of HDDs.

 

I'm going to guess you won't get much over 500-1,000MB/s, due to other limits kicking in before the HDDs become the bottleneck. What protocol are you using?

 

1 hour ago, 1045666 said:

I have a few more questions if it's alright:

  • Would it be better to use 10 8TB drives instead of 5 16TB drives for speed? I see the Exos speed is 261MB/s. Is the math here logical?

I'd use fewer drives, as I think you'll hit other limits, like CPU limits, before drive I/O here.

 

1 hour ago, 1045666 said:

Would it be better to use 10 8TB drives instead of 5 16TB drives for speed? I see the Exos speed is 261MB/s. Is the math here logical?

  • 10 8TB drives: 261MB/s x 10 = 2,610MB/s
  • 5 16TB drives: 261MB/s x 5 = 1,305MB/s

 

There are RAID calculators that do this better, and it depends a lot on workload, but I'd probably halve those numbers for more realistic speeds.

 

Also, that 261MB/s is peak; expect more like 100-150MB/s real-world once the array starts to fill and you have fragmentation and background I/O.

 

1 hour ago, 1045666 said:

To get the full 2.6GB/s, assuming the above is rational, will a 10Gb connection be enough or would it need to be 20Gb? Something about that confuses me a bit, because my 1Gb connection does around 113MB/s across two computers, which another forum said is expected. One computer has 3 NVMe drives in RAID 0, and the other has a single NVMe drive. I just tested the transfer speed between these two and got 113MB/s on a gigabit connection. Unless I haven't set it up right? So would I actually need a 20Gb connection to get the 2.6GB/s from 10 HDDs?

I'd stick with 10GbE, as you probably won't fill it up anyway, and anything more than 10GbE is normally much more expensive.

 

1 hour ago, 1045666 said:

Since this is my only redundancy, if I wanted to double up on the hard drives to build a RAID 1 copy of the RAID 6 array, is that possible in TrueNAS?

Not really, but if you want more backups I'd back up to the cloud, or tape, or similar as another copy.

 

1 hour ago, 1045666 said:

Is it also possible to use the server to back up my computer, where files deleted from my computer are moved to a 'recovery' folder on the server in real time, so I don't truly delete things by accident?

Kinda. I'd just use something like Veeam to manage backups and run a backup every hour or so, so you can view all the previous versions.

2 hours ago, AbydosOne said:

Cache isn't useful for your workload

I respectfully disagree here. The OP has a 500GB dataset, and you'd want to get that out of volatile memory (RAM) ASAP. An HDD may offer 120MB/s, but only as a single drive. Remember, these drives are in a RAID/ZFS pool, so the OS has to compute parity for every bit transmitted, on the fly. That takes time: the infamous RAID6/RAIDZ performance penalty. So by storing the entire dataset in a (fast) cache, the data itself is no longer at risk (while on the NVMe drive), and the OS can transfer the data more efficiently to the array, as it no longer needs to do multiple things with the same data at the same time, which adds its own risks (i.e. data corruption; low risk, but still a risk).

 

2 hours ago, AbydosOne said:

FreeNAS doesn't take more than 16GB

I couldn't find that limit, but perhaps I looked in the wrong place. Pointer pls?

56 minutes ago, Dutch_Master said:

I respectfully disagree here. The OP has a 500GB dataset, and you'd want to get that out of volatile memory (RAM) ASAP. An HDD may offer 120MB/s, but only as a single drive.

With FreeNAS/ZFS, the L2ARC gets built as data is accessed, and it doesn't persist through reboots (which is something I don't think everyone realizes; it's essentially an extension of the RAM cache), so the cache will almost never be filled. It's only truly useful if you're constantly accessing the same data, and that data is larger than can reasonably be stored in RAM.

 

By the time you have ~2TB of L2ARC data, why not just build an NVMe array so you don't have to deal with the uncertainty of the cache's MRU/MFU algorithms?

 

56 minutes ago, Dutch_Master said:

Remember, these drives are in a RAID/ZFS pool, so the OS has to compute parity for every bit transmitted, on the fly.

I can't say for certain, but I'll bet dollars to donuts that ZFS still checks block integrity even when pulling data from L2ARC devices (seeing as integrity is ZFS's thing), so it's still computing checksums (not parity) on the fly. If parity were that much of a bottleneck, the i3 in my server would be on its knees any time I did a file transfer (SMB uses substantially more CPU than ZFS does). Parity is an extremely simple calculation, limited on reasonable CPUs by parity disk speed.
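To show just how simple: single-parity RAID is a byte-wise XOR across the data drives. A toy sketch (this illustrates the idea only; it's not how ZFS actually lays out RAIDZ stripes):

```python
# Toy RAID parity: XOR the data blocks; XOR with survivors to rebuild.
def parity(blocks: list) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving: list, parity_block: bytes) -> bytes:
    """Recover a lost block by XORing the survivors with the parity."""
    return parity(surviving + [parity_block])

d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"   # three "drives"
p = parity([d0, d1, d2])
print(rebuild([d0, d2], p) == d1)   # True: the "failed" drive is recovered
```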

 

56 minutes ago, Dutch_Master said:

So by storing the entire dataset in a (fast) cache, the data itself is no longer at risk (while on the NVMe drive), and the OS can transfer the data more efficiently to the array, as it no longer needs to do multiple things with the same data at the same time, which adds its own risks (i.e. data corruption; low risk, but still a risk).

At risk of what?

 

L2ARC is not a redundancy level. It's not going to keep your data any safer. If anything, it's a single point of (potential) corruption, without parity, so if ZFS finds a checksum error, it will dump the cache data and go back to the disk.

 

56 minutes ago, Dutch_Master said:

I couldn't find that limit, but perhaps I looked in the wrong place. Pointer pls?

The installed size of FreeNAS is less than 16GB (source: using a 16GB SSD for my boot drive).

 

Now that I think about it, I'm actually not even certain FreeNAS will let you partition the boot drive to have cache on the same device (I think it requires a whole bare device, not a partition).

On 1/22/2021 at 2:34 PM, 1045666 said:

and would like to use CrashPlan for extra security. No virtual machines, media streaming, or video

Have you ever used CrashPlan before? Outside of an enterprise setting, I will say it took FOREVER to back up even a single TB of data. I have 940Mb up and down with my ISP, and it was uploading at maybe 5 to 10Mbps even when configured properly to use much, much more; that's as high as I ever saw it go. I switched to Backblaze a year or so ago and backed up 6TB of data in the same amount of time CrashPlan took to back up less than 1TB. I always found the UI clunky and unintuitive, and there were (not sure if there still are) something like 4 different pro versions as well.

Last I checked, CrashPlan still offers unlimited storage per device compared to Backblaze, but the backup speeds and the sluggish controls were just not worth it. I pay roughly $8-10/month to Backblaze for the business stuff, and right now I think I've got 4-6TB of data on there after cleaning up a couple of months ago.


 

On 1/25/2021 at 11:33 AM, Electronics Wizardy said:

Id use fewer drives, as I think you will hit other limits before drive io here. Like cpu limits.

Do you mean CPU limits in general, or because the one in my parts list is very basic? Should I look into a dual-CPU motherboard to get more I/O from the drive speeds? In that case, I learned that CPUs need to be 'matched'. Is that as simple as buying 2 of the same CPU model, or is there a deeper matching process required to get them operating more similarly?

 

On 1/25/2021 at 2:42 PM, AbydosOne said:

the i3 in my server

I was thinking of switching out the Xeon in my list for an i3 to save $100, because from what I currently understand, I should be able to get away with a bare-minimum CPU since I'm not doing any heavy computing, just transferring files and running some RAID. Do you mind sharing your server specs/OS so I could get some ideas on how to modify my parts list or set mine up? Have you ever felt a need to change CPUs? Sounds like no.

 

As for L2ARC, the way I'll be using these files/storage is: when I'm done with a client's project, I'll dump it to the server. If I ever need to reference some settings from a previous project, I'll copy it back to my computer, open it locally, and then delete it from my computer when I have what I need. I don't imagine that happening super frequently, and once it's copied to my computer, it'll be running locally. So I think the cache can be minimal, versus a scenario where I'd be running files directly from the server, as with multiple video editors sharing a source server. Could I just not have a cache drive at all?

On 1/25/2021 at 5:30 PM, Lurick said:

Have you ever used CrashPlan before?

I had the trial, and after 3 days of waiting for one HDD to upload, I gave up on it because it was taking too long, and then cancelled the subscription. I figured I'd try again and just be more patient once this was up and running, so I appreciate you mentioning Backblaze. I'll look into that for the offsite backup.

 

I've updated my specs plan:

CPU - Xeon E-2124G or i3-9100

Motherboard - Asus WS C246M

RAM - 1x 8GB Crucial ECC 2666 (maybe 2x 4GB if the CPU runs better with dual-channel memory?)

OS SSD - 970 Evo Plus 250 GB

Network cards - Intel X540-T2

Case: Fractal Design Define 7 XL

PSU: Corsair HX750

Fans: Noctua NF-A14 Industrial 3000 PWM 140mm

Cooler: Noctua NH-D15 Chromax Black

OS: TrueNAS, 1 array of RAID 6

 

Are there any resources I can look at regarding what hardware will work with TrueNAS, or is the above list good to go? Maybe OpenMediaVault, which is apparently more flexible with components? I'm pretty sure I'm ready to start sourcing hardware, unless it's beneficial to read/write speeds to look into dual-CPU builds, or even an AMD version for more cores per dollar.

22 minutes ago, 1045666 said:

Do you mean CPU limits in general, or because the one in my parts list is very basic? Should I look into a dual-CPU motherboard to get more I/O from the drive speeds? In that case, I learned that CPUs need to be 'matched'. Is that as simple as buying 2 of the same CPU model, or is there a deeper matching process required to get them operating more similarly?

 

The issue is that many protocols like SMB are pretty single-threaded, so you're limited by that. That's why you want things like SMB multichannel and RoCE. Dual Xeons won't normally help you in terms of performance, but they're nice for extra RAM and registered RAM support, plus much more I/O.

 

25 minutes ago, 1045666 said:

As for L2ARC, the way I'll be using these files/storage is: when I'm done with a client's project, I'll dump it to the server. If I ever need to reference some settings from a previous project, I'll copy it back to my computer, open it locally, and then delete it from my computer when I have what I need. I don't imagine that happening super frequently, and once it's copied to my computer, it'll be running locally. So I think the cache can be minimal, versus a scenario where I'd be running files directly from the server, as with multiple video editors sharing a source server. Could I just not have a cache drive at all?

Yeah, I wouldn't use an L2ARC here; caching doesn't seem to help for your workload.

 

25 minutes ago, 1045666 said:

I had the trial, and after 3 days of waiting for one HDD to upload, I gave up on it because it was taking too long and cancelled the subscription. I figured I'd try again and just be more patient once this was up and running, so I appreciate you mentioning Backblaze. I'll look into that for the offsite backup.

I'd probably use something like Veeam to manage the backups; then you can use whatever S3-compatible storage you want. Don't use the Backblaze desktop backup client here.

 

 

26 minutes ago, 1045666 said:

I've updated my specs plan:

CPU - Xeon E-2124G or i3-9100

Motherboard - Asus WS C246M

RAM - 1 x 8GB Crucial DDR4 ECC 2666 (maybe 2 x 4GB if the CPU runs better with dual-channel memory?)

OS SSD - 970 Evo Plus 250 GB

Network cards - Intel X540-T2

Case: Fractal Define XL 7

PSU: Corsair HX750

Fans: Noctua NF-A14 Industrial 3000 PWM 140mm

Cooler: Noctua NH-D15 chromax.black

OS: TrueNAS, 1 array of RAID 6

 

Are there any resources I can look at regarding what hardware will work with TrueNAS, or is the above list good to go? Or maybe OpenMediaVault, which is apparently more flexible with components? I'm pretty sure I'm ready to start sourcing hardware, unless it's beneficial to read/write speeds to look into dual-CPU builds, or even an AMD version for more cores per dollar.

I'd get more RAM; it's nice as a cache, but not strictly needed.

 

The NH-D15 is way overkill; the stock cooler is fine.

 

I'd probably go Ryzen plus an ASRock Rack X470 board, or another server board. IPMI is nice, and those server boards are normally just better for servers, with things like better onboard NICs and better slot layouts.

 

28 minutes ago, 1045666 said:

Network cards - Intel X540-T2

I'd go X550 or the Aquantia, depending on whether you need more features. Personally I'd go fibre with SFP+ NICs, though.

 

29 minutes ago, 1045666 said:

 

Are there any resources I can look at regarding what hardware will work with TrueNAS, or is the above list good to go? Or maybe OpenMediaVault, which is apparently more flexible with components? I'm pretty sure I'm ready to start sourcing hardware, unless it's beneficial to read/write speeds to look into dual-CPU builds, or even an AMD version for more cores per dollar.

Performance should be fine; I'd expect about 500 MB/s sequential, maybe a bit more before the disks start to fill up.
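To put that 500 MB/s figure in context, here's a back-of-envelope sketch of what it means for the 500GB-folder transfers in the original question (all figures are idealized sequential rates; real transfers with small files and SMB overhead will be slower):

```python
# Idealized transfer times for a 500 GB folder at various sustained rates.

def transfer_time_seconds(size_gb: float, throughput_mb_s: float) -> float:
    """Seconds to move size_gb gigabytes at throughput_mb_s megabytes/second."""
    return size_gb * 1000 / throughput_mb_s

hdd_array = transfer_time_seconds(500, 500)   # ~500 MB/s RAID 6 HDD array -> ~17 min
gigabit   = transfer_time_seconds(500, 117)   # 1GbE line rate (~117 MB/s) -> ~71 min
ten_gbe   = transfer_time_seconds(500, 1170)  # 10GbE line rate (~1170 MB/s) -> ~7 min

print(f"array: {hdd_array/60:.0f} min, 1GbE: {gigabit/60:.0f} min, 10GbE: {ten_gbe/60:.0f} min")
```

In other words, at ~500 MB/s the spinning disks, not a 10Gb (let alone 40Gb) link, set the pace.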

Link to post
Share on other sites
21 hours ago, 1045666 said:

Do you mind sharing your server specs?

They're below the fold in my signature for future reference, but: Core i3-4170 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 4x 10TB WD Whites / 1x Samsung PM961 128GB SSD / 1x Kingston 16GB SSD | Corsair CX430M | Mellanox ConnectX-2 10G NIC | LSI 9207-8i LBA | Fractal Design Node 804 Case (side panels swapped to show off drives)

 

21 hours ago, 1045666 said:

Could I just not have a cache drive at all

Yep, ZFS/FreeNAS uses all your extra RAM for the primary cache.

 

21 hours ago, 1045666 said:

PSU: Corsair HX750

You could probably get by with a lower-wattage model (my server runs on a 430W unit).

 

21 hours ago, 1045666 said:

Cooler: Noctua NH-D15 chromax.black

Just use the Intel stock cooler, unless you're trying to show it off and/or need that much less noise.


Link to post
Share on other sites

On the Backblaze thing: I believe they support seeding the initial dataset via a disk in the mail. The latency might suck, but nothing beats the bandwidth of a crate of disks in the back of a van/car.

Link to post
Share on other sites

I think I'm going to swap the HDDs for either 20x 1TB SSDs or 10x 2TB SSDs (plus 2 for parity with either option). The Supermicro X11SPi-TF has 10 onboard SATA ports, so I'm looking into LSI cards for expansion, particularly the 9211-8i or -16i.

 

I saw from some other threads that splitting a RAID across onboard SATA ports and PCIe HBAs will work fine with TrueNAS. It looks like the cards require flashing to 'IT' mode to get them working, and it looks like I can just add a PCIe slot fan to deal with the heat the LSI card(s) give off. Something I'm curious about: in Linus' video '24 SSD RAID - Over 20TB of SSD Storage', he says getting 5+ GB/s combined speeds can cause stability issues, so he opted for a RAID on each of the cards first, then striped the cards together to get RAID 50.
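For reference, the IT-mode crossflash on a 9211-8i is usually done with LSI's sas2flash utility. A rough outline (the exact firmware file, 2118it.bin here, and whether you need the erase step vary by card, so check this against whichever flashing guide you follow):

```shell
# List controllers and note the SAS address printed for the card
sas2flash -listall

# Wipe the existing (IR/RAID) firmware -- record the SAS address first!
sas2flash -o -e 6

# Flash the IT-mode firmware and boot ROM, then restore the SAS address
# (500605bXXXXXXXXX is a placeholder for your card's address)
sas2flash -o -f 2118it.bin -b mptsas2.rom
sas2flash -o -sasadd 500605bXXXXXXXXX
```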

 

I was wondering what you guys suggest for stability with high quantities of drives, or if it's fine to just cram them all into a single RAID array. Anyone have experience with LSI cards?

 

Additionally, I'm back to looking into the Mellanox NICs for this so I can get the full SSD RAID array benefits, so my shopping list now looks like:

 

Motherboard: Supermicro X11SPi-TF

CPU: Intel Xeon Bronze 3106

RAM: 2 x 32GB Crucial 2133 RDIMM

Networking: Mellanox MCX456A-ECAT 100GbE, to remove NIC bottlenecks when expanding later. I'll probably drop to 50Gb once I start spending money.

SATA expansion: LSI 9211-8i

OS SSD: 970 Evo Plus 250 GB

Case: Fractal Meshify 2 XL

Cooler: Noctua NH-U12S DX-3647

Fans: Noctua NF-A14 Industrial 3000 PWM 140mm

PSU: Corsair SF450

OS: TrueNAS, 1 array of RAID 6

Storage SSD: Samsung 870 Evo

Link to post
Share on other sites
6 hours ago, 1045666 said:

I was wondering what you guys suggest for stability with high quantities of drives, or if it's fine to just cram them all into a single RAID array. Anyone have experience with LSI cards?

 

I'd use relatively small sets of drives, like 2x 10-drive arrays.
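In ZFS terms that layout is just two RAIDZ2 vdevs in one pool - ZFS stripes writes across vdevs automatically, which is effectively the RAID 60 arrangement from Linus' video. A sketch of the pool creation (the pool name and device names are placeholders):

```shell
# Two 10-disk RAIDZ2 vdevs in one pool; ZFS stripes across them,
# so each vdev tolerates 2 failed disks and the pool gets the
# combined throughput of both.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
```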

 

6 hours ago, 1045666 said:

Additionally, I'm back to looking into the Mellanox NICs for this so I can get the full SSD RAID array benefits, so my shopping list now looks like:

 

What does your network look like? What protocols? You normally need a good amount of tuning to get this to work, and most programs can't take advantage of this speed. For your listed uses, HDDs will be just fine.

6 hours ago, 1045666 said:

I think I'm going to swap the HDDs for either 20x 1TB SSDs or 10x 2TB SSDs (plus 2 for parity with either option). The Supermicro X11SPi-TF has 10 onboard SATA ports, so I'm looking into LSI cards for expansion, particularly the 9211-8i or -16i.

 

I'd go NVMe here; it doesn't cost much more, and you get much higher IOPS and lower latency. It's often cheaper if you buy used drives on eBay.

 

6 hours ago, 1045666 said:

Case: Fractal Meshify 2 XL

Can you use a rackmount case? I'd just get one with 24 2.5in hot-swap NVMe bays.

 

6 hours ago, 1045666 said:

Motherboard: Supermicro X11SPi-TF

CPU: Intel Xeon Bronze 3106

You're probably gonna want a faster CPU if you want to fill a 100GbE link.

Link to post
Share on other sites
10 hours ago, Electronics Wizardy said:

What does your network look like? What protocols? You normally need a good amount of tuning to get this to work, and most programs can't take advantage of this speed. For your listed uses, HDDs will be just fine.

I'm not sure what the network/protocol details are. I was thinking I could just install faster dual-port NICs in my two current PCs, connect those to each other for direct file transfers between them, then connect the open ports to the NAS so each PC has access - basically one connection away from needing a switch. I'm not sure what further settings or tweaking I'll be up against. My thinking was 10x 1TB SSDs would get a theoretical speed of ~5.5+ GB/s, so the 50Gb NICs would become the bottleneck. I know to expect half the combined speed in actual application. I'd like to avoid HDDs because a couple of nights ago I had to back up 600GB of data on 3 occasions while reformatting, and the hour-long waits were annoying. I also like the idea of SSDs for speed, reliability, and noise.
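A quick bits-vs-bytes check on that estimate (assuming a typical ~550 MB/s sequential rating per SATA SSD, so these are ceiling numbers, not what SMB will actually deliver):

```python
# Theoretical sequential throughput of 10 data SSDs vs. NIC line rates.

PER_SSD_MB_S = 550          # assumed per-drive SATA sequential rating
N_DATA_DRIVES = 10

array_mb_s = PER_SSD_MB_S * N_DATA_DRIVES     # 5500 MB/s combined
array_gbit_s = array_mb_s * 8 / 1000          # megabytes -> gigabits

print(f"{array_mb_s} MB/s ≈ {array_gbit_s:.0f} Gbit/s")  # 5500 MB/s ≈ 44 Gbit/s
```

So on paper the raw array tops out around 44 Gbit/s, slightly under a 50Gb link, and protocol overhead will cut that further - the drives, not the NIC, are the likely limit.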

 

10 hours ago, Electronics Wizardy said:

I'd go NVMe here; it doesn't cost much more, and you get much higher IOPS and lower latency. It's often cheaper if you buy used drives on eBay.

I was looking into NVMe before SATA SSDs and would prefer it, but for M.2 drives it looks like I'd need special PCIe cards, not the cheaper riser cards. And the Honey Badger is way outside what I'm interested in paying lol. For U.2 NVMe, I couldn't find drives cheaper than 870 EVOs, and couldn't find the right PCIe cards for those either. Any suggestions for drives and connectivity? It seems the gear to implement NVMe would be another expense much higher than SATA SSDs.

 

10 hours ago, Electronics Wizardy said:

Can you use a rackmount case? I'd just get one with 24 2.5in hot-swap NVMe bays.

Noise and space are small factors for me, since I work right beside my computers. My understanding is that rackmount server cases require higher fan RPM for cooling, so they'd be louder than a PC case. I could be wrong, as I haven't done enough research into that detail. But also, I couldn't find an empty server chassis that would be cheaper than an XL PC case. I guess I could source one on eBay? I wouldn't mind switching all my computers to rack cases instead of having 3 towers, just for space, but if the noise goes up dramatically, I'd have to steer away from it for now. Plus there's the added expense of a cabinet.

 

10 hours ago, Electronics Wizardy said:

You're probably gonna want a faster CPU if you want to fill a 100GbE link.

Okay. I don't expect to saturate more than 60Gb/s if I'm understanding everything correctly. Do you have a recommendation?

Link to post
Share on other sites
5 minutes ago, 1045666 said:

I'm not sure what the network/protocol details are. I was thinking I could just install faster dual-port NICs in my two current PCs, connect those to each other for direct file transfers between them, then connect the open ports to the NAS so each PC has access - basically one connection away from needing a switch. I'm not sure what further settings or tweaking I'll be up against. My thinking was 10x 1TB SSDs would get a theoretical speed of ~5.5+ GB/s, so the 50Gb NICs would become the bottleneck. I know to expect half the combined speed in actual application. I'd like to avoid HDDs because a couple of nights ago I had to back up 600GB of data on 3 occasions while reformatting, and the hour-long waits were annoying. I also like the idea of SSDs for speed, reliability, and noise.

 

What applications are you using? You probably won't hit close to 50Gbit over the network, due to protocol issues and most apps not being able to use that much bandwidth.

 

A big HDD array can easily fill 10GbE, and should be plenty here.

 

What protocol are you using for sharing files over the network? What OS is running on the clients and the server?

7 minutes ago, 1045666 said:

I was looking into NVMe before SATA SSDs and would prefer it, but for M.2 drives it looks like I'd need special PCIe cards, not the cheaper riser cards. And the Honey Badger is way outside what I'm interested in paying lol. For U.2 NVMe, I couldn't find drives cheaper than 870 EVOs, and couldn't find the right PCIe cards for those either. Any suggestions for drives and connectivity? It seems the gear to implement NVMe would be another expense much higher than SATA SSDs.

 

I'd look into used drives like the P4500s and similar models on eBay; you'll have to hunt for them there, but they often go for under $100 per TB.

 

7 minutes ago, 1045666 said:

Noise and space are small factors for me, since I work right beside my computers. My understanding is that rackmount server cases require higher fan RPM for cooling, so they'd be louder than a PC case. I could be wrong, as I haven't done enough research into that detail. But also, I couldn't find an empty server chassis that would be cheaper than an XL PC case. I guess I could source one on eBay? I wouldn't mind switching all my computers to rack cases instead of having 3 towers, just for space, but if the noise goes up dramatically, I'd have to steer away from it for now. Plus there's the added expense of a cabinet.

 

Yeah, I'd get a premade server, as it's easier to work with.

 

But if it's right next to your desktop, why not just put the drives in your desktop and create a network share if needed?

 

8 minutes ago, 1045666 said:

Okay. I don't expect to saturate more than 60Gb/s if I'm understanding everything correctly. Do you have a recommendation?

I'd probably try to get a used server here, with something like a dual E5 v3 platform. The Xeon Bronze chips make almost no sense and are an awful value.

 

 

Link to post
Share on other sites
35 minutes ago, Electronics Wizardy said:

What applications are you using?

None over the network - it's strictly file transfers/storage for my business. The OS on my PCs is Windows 10 Pro, and the server will run TrueNAS, unless I learn of a reason to use something else. My understanding of protocols is nonexistent, so I don't know how to provide more information on that specific question at the moment. I don't have anything in place of the server right now, so I'd be building everything from the ground up.

 

35 minutes ago, Electronics Wizardy said:

P4500

Wouldn't that require one for every drive? And a motherboard with enough PCIe slots? It looks like the costs grow quickly with NVMe integration vs SATA SSDs. I can spring for SSDs over HDDs if the hardware is mostly the same anyway. Edit: When I looked this up, I saw adapters and not drives - sorry! Those don't look bad then. Are there any U.2 cards you'd recommend?

 

35 minutes ago, Electronics Wizardy said:

why not just put the drives in your desktop

RAID reliability. I've heard W10 isn't the most stable with RAID, and I'd like something secure for storing the last 5 years of projects. I know it's not a backup, but it can hopefully be safer than W10 on HDDs. What you suggest is essentially what I'm trying to do, just in a separate box so I can use TrueNAS with a balance between fast transfers and cost. I feel like SSDs on LSI cards would be comfortable, and a faster CPU will push that over a bit. You're saying that to make use of the SSD array's speed, it'd need a faster CPU?

 

Link to post
Share on other sites
