File Server For Scaling Capacity and Redundancy

Hi all,
I'm a complete networking newbie (Windows only so far), looking to take my first step beyond a single do-everything desktop PC.
A friend mentioned getting used enterprise gear and running Proxmox, so I wanted to check here for your recommendations.


Current Setup/Gear
Currently running an i7-6700K on an Asus Z170-A motherboard.
All SATA ports are occupied and every HDD is full.
Currently no redundancy, which is making me more and more nervous.

 

I will require additional hardware. Options I'm weighing:
Buy used enterprise hardware?
Or retire the 6700K to storage duties with add-in cards, and buy an AMD Zen 3 system in September?
Or just get storage controller cards for now (for the next 3-4 months) to add some redundancy?

 


Usage

  • Primary use is data hoarding.
  • Files I never want to lose: work files, personal documents, videos, photos.
  • Files I can afford to lose: temporary large files, videos and my Steam library.

 


Needs

  • Scalable so I can continue to data hoard in the future (easy growth and migration?),
    ideally without the up-front cost of filling a server with drives on day one.
  • Able to benefit from higher-capacity drives as they come out.
  • Redundancy.

 


Wants

  • Not having to address drives individually; everything pooled into one volume?
  • Maybe some sort of online backup? Currently using Backblaze on my Windows desktop.

 


Concerns

  • Going down a proprietary route (saw a Synology NAS fail on Gamers Nexus).
  • HDD failure: ease of detection, maintenance and replacement.

 


Connections

  • Currently the only client is one desktop PC.
  • May wish to add a 2nd PC as an HTPC.
  • Maybe streaming to mobile devices.

How much do you care about power usage? Old server gear loves to use lots of power, and it can also be loud.

 

From what you have listed, you don't need much CPU power, so your current CPU should be fine.

 

What case do you have? Server cases are nice since you can put tons of drives in them.

 

I'd probably go for some sort of RAID or parity so a disk failure isn't an issue. Look at unRAID if you have mismatched drives and want to expand easily.

 

If your case has space, I'd probably get a SAS card so you can add more drives, run unRAID for easy parity, and keep adding drives to your current case.


If you want a file server it can be done on Proxmox, but unless you also want containerization & virtualization it's probably not the best choice from an ease-of-setup perspective. It does have a very nice web UI, but setting up a network share would require configuring another OS (a VM or container) within Proxmox, which may be an extra step you aren't personally interested in.

 

If you use a solution based on ZFS you'll want HBAs, not RAID cards.

 

If you're on a budget you CAN use desktop parts, but 2nd-hand servers off eBay are always nice; they have more hardware support for the software features.

 

ZFS has inconveniences to deal with: mixing drives of different capacities within a vdev isn't really a thing. It's doable by putting each capacity in its own vdev, but that adds complexity to the pool. If you need to mix drives of different capacities freely, you may want to look into unRAID.
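For illustration, this is roughly what the separate-vdevs-per-capacity approach looks like on the command line (pool name and disk paths here are made up, just to show the shape of it):

# one pool built from two matched-capacity vdevs:
# four 4TB drives in one RAIDZ1 vdev, four 8TB drives in another
zpool create tank \
    raidz1 /dev/disk/by-id/ata-4TB_A /dev/disk/by-id/ata-4TB_B /dev/disk/by-id/ata-4TB_C /dev/disk/by-id/ata-4TB_D \
    raidz1 /dev/disk/by-id/ata-8TB_A /dev/disk/by-id/ata-8TB_B /dev/disk/by-id/ata-8TB_C /dev/disk/by-id/ata-8TB_D

zpool status tank   # shows both vdevs living in the one pool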


Proxmox is a virtualisation platform first and foremost. If you want virtual machines, then yeah, it's a great platform.

Otherwise look at ZFS (e.g. FreeNAS / TrueNAS Core) or, if you need mixed drives, unRAID, as mentioned above.

 

For data security on those really important files, you'll either want to run two-drive parity (RAID6 / RAIDZ2), or if going with ZFS you could create a RAIDZ1 (RAID5-style) pool for media storage and a two-disk mirror for your documents so you have two copies.
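As a rough sketch of that split (disk names are placeholders; in practice use /dev/disk/by-id paths):

# two-drive parity for the bulk media storage
zpool create media raidz2 sdb sdc sdd sde sdf sdg

# simple two-way mirror for the "never lose" documents
zpool create docs mirror sdh sdi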

 

Personally, there's plenty of power in that i7 for these tasks, so I'd repurpose it and build a new main rig.

 

Keep in mind that if you want to repurpose existing disks, you'll need to plan how to copy/back up the data first, because you'll have to destroy the partitions on any existing disk before adding it to an array.
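The wipe step itself is tiny; something along these lines (destructive, so triple-check the device names first):

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT   # confirm which disk is which
wipefs -a /dev/sdX                    # clear old partition/filesystem signatures on the disk being re-used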


Power usage: Not really a concern, unless it's really crazy.

 

Noise: Quieter is better, but let's say this is the lowest priority. If I have to relocate it, that can be done.
 

Case: Currently a Fractal Define R5 (8x 3.5" bays and 2x 5.25" bays; unsure if there's a drive-bay conversion for those).

 

RAID/Parity: Yes.  Some sort of redundancy like that is something I want.  Just unsure which OS and hardware will dictate the setup here.

 

Mismatched drives:  What are the options for matched vs mismatched drives?   
Ideally, I was thinking I'd be able to keep using my current drives + buy a few more?

But if that bottlenecks me in, say, 1-2 years' time because the setup is more janky, then maybe I should/need to buy 8 brand-new drives and just start off with a better foundation?

Also, Proxmox is free and open source?  And unRAID is paywalled for certain features?

 

@Windows7ge I'm not on too tight a budget, as long as it's reasonable. Like, I'm not going out and buying new enterprise gear.

Mostly, I just want to lay a good foundation to continue to scale (who knows how bad this data hoarding is going to get? haha)

I don't want to end up in some sort of hardware/software proprietary dead-end.

 

If mixing HDDs forces me into a proprietary solution or a weaker foundation for scaling, then I'm willing to invest the money.

 


@Jarsky 

Thanks for that info about Proxmox being for virtualization.  Not sure any virtualization is needed for me, not yet.  

Would it be safe to assume that file storage and redundancy are still good in Proxmox, and aren't just an afterthought?

 

So hypothetically, I transplant my i7 into a new Fractal Define 7 XL and have capacity for up to 16 HDDs.

I buy 12 new HDDs (so I'm not forced into unRAID).

I'd be doing this with a combination of the 5 free SATA ports on my Asus Z170-A plus some HBA cards?
Then file storage/redundancy is handled at the OS level?

 

What happens when those 12 HDD are full?

 

What does scaling look like after that?

[Image: Fractal Design Define 7 / Define 7 XL storage layout]


10 minutes ago, Culverin said:

I'm not on too tight a budget, as long as it's reasonable. Like, I'm not going out and buying new enterprise gear.

Mostly, I just want to lay a good foundation to continue to scale (who knows how bad this data hoarding is going to get? haha)

I don't want to end up in some sort of hardware/software proprietary dead-end.

 

If mixing HDDs forces me into a proprietary solution or a weaker foundation for scaling, then I'm willing to invest the money.

Supermicro has some nice, cheap servers on eBay. LGA2011 is a sweet spot for price/performance/power efficiency: the type of server that, if all you ever need to upgrade is data density, will last you a lifetime provided the hardware holds up. It should be noted these servers are generally noisy, though, and use a proprietary form factor.

 

You can also find 2nd-hand standard-form-factor equipment and build something yourself; that's what I like to do. There are rack-mount server chassis that can hold 12, 24 or even more disks. If you need to hoard and aren't worried about the highest speeds, you can add disk enclosures via SAS.

 

With ZFS you'd just have to build each vdev from matching drives, which isn't too big a problem for most people. If you're using seriously random drive capacities then unRAID would be your easiest option.


23 minutes ago, Culverin said:

Noise: Quieter is better, but let's say this is the lowest priority. If I have to relocate it, that can be done.

Old enterprise gear can be quite noisy; in standard cases you can use decent fans like Noctuas.

 

23 minutes ago, Culverin said:

RAID/Parity: Yes.  Some sort of redundancy like that is something I want.  Just unsure which OS and hardware will dictate the setup here.

Basically you're looking at:

ZFS - FreeNAS & basically any Linux, incl. Proxmox (but no GUI)

Hardware RAID - any OS, but needs a RAID card

Storage Spaces - Windows Pro & Server

BTRFS - Rockstor & Linux

mdadm (Linux software RAID) - any Linux (quick sketch below the list)
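For a sense of what the plain-Linux software RAID route looks like, a minimal mdadm sketch (device names are hypothetical):

# 6-disk RAID6 (two-disk parity), then a filesystem on top
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
cat /proc/mdstat    # array health / rebuild progress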

 

23 minutes ago, Culverin said:

Mismatched drives:  What are the options for matched vs mismatched drives?   

 

With mixed-capacity drives you'd be limited to unRAID, Storage Spaces, and BTRFS.

BTRFS is still kind of beta when it comes to RAID, though, so it's typically not recommended to go that route.

 

23 minutes ago, Culverin said:


Ideally, I was thinking I'd be able to keep using my current drives + buy a few more?

Then, assuming you don't have somewhere to back up your data and the drives are mixed capacities, unRAID is kind of your best option. unRAID is basically JBOD with parity disks.

You can create the array, copy your data over, then preclear your old disks and add them to the array.

Keep in mind though that an unRAID Pro license isn't cheap; it's $129.

 

23 minutes ago, Culverin said:

But if that bottlenecks me in, say, 1-2 years' time because the setup is more janky, then maybe I should/need to buy 8 brand-new drives and just start off with a better foundation?

Starting from a new foundation can be a very good idea with storage. It also gives you the most options as you can consider ZFS as well. 

Also keep in mind that with ZFS you can create one vdev now and another later. That is to say, you create one RAIDZ1/Z2 vdev with 4-5 disks, then later on you create another and add it to the pool. Each vdev has its own redundancy, but they're pooled as a single volume as far as your datasets are concerned.
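A rough sketch of that grow-by-vdev pattern (pool name and disks are placeholders):

# now: pool starts as a single 5-disk RAIDZ2 vdev
zpool create tank raidz2 sdb sdc sdd sde sdf

# later: bolt on a second RAIDZ2 vdev; the capacity is pooled automatically
zpool add tank raidz2 sdg sdh sdi sdj sdk
zpool list tank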

 

23 minutes ago, Culverin said:

Also, Proxmox is free and open source?  And unRAID is paywalled for certain features?

Yup, Proxmox is free, but keep in mind that all ZFS storage management is done via the command line; there's no GUI for it.

I have tried setting up Debian Buster with Proxmox (PVE) and installing Cockpit ZFS Manager, but it's very buggy.
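In practice the command-line side is only a handful of commands; something like this (pool/dataset names are just examples):

zpool status                      # health of all pools
zfs list                          # pools/datasets and usage
zfs create tank/documents         # carve out a dataset
zfs set compression=lz4 tank/documents
zpool scrub tank                  # periodic integrity check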

 


19 minutes ago, Culverin said:

@Jarsky 

Thanks for that info about Proxmox being for virtualization.  Not sure any virtualization is needed for me, not yet.  

Would it be safe to assume that file storage and redundancy are still good in Proxmox, and aren't just an afterthought?

It just uses ZFS on Linux, the same as any other Linux distro. Proxmox is just Debian underneath; you can even install PVE (Proxmox) on top of Debian.

The storage capabilities are really designed for holding VMs & Linux containers, not for native file storage. Think of it like VMware ESXi: yes, it has built-in storage on the host for local datastores, but it's really designed to be used with SAN storage.

 

19 minutes ago, Culverin said:

So hypothetically, I transplant my i7 into a new Fractal Define 7 XL and have capacity for up to 16 HDDs.

That'd be a nice case; my NAS is in a Fractal Define R6.

 

19 minutes ago, Culverin said:

I buy 12 new HDDs (so I'm not forced into unRAID).

You could set up a single vdev of 6 disks for now and add 6 more later as a second vdev. It's typically better to have several smaller vdevs in your pool than one very large one.

 

19 minutes ago, Culverin said:

I'd be doing this with a combination of the 5 free SATA ports of my Asus Z170-A + some HBA cards?

You could get a couple of HBAs like the LSI 9211-8i. They're cheap, and two would give you 16 ports without using your motherboard ports, which you could leave for SSDs.

 

19 minutes ago, Culverin said:

then file storage/redundancy is handled at the OS level?

All of the solutions mentioned are OS-level, i.e. software. For hardware-level RAID you'd need a controller like an LSI 9260-8i.

 

19 minutes ago, Culverin said:

What happens when those 12 HDD are full?

 

What does scaling look like after that?

 

ZFS gives you a lot of scalability. As mentioned above, you can keep adding vdevs. You can also upgrade a vdev in place: you replace each disk one at a time, letting it rebuild (resilver) after each disk. Once they're all upgraded you can grow the pool.
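Roughly, that in-place upgrade looks like this (disk names are placeholders; repeat the replace step for each disk in the vdev):

zpool set autoexpand=on tank   # let the pool grow once every disk in the vdev is bigger
zpool replace tank sdb sdx     # swap one old disk for a new, larger one
zpool status tank              # wait for the resilver to finish before doing the next disk
zpool list tank                # after the last disk, the extra capacity shows up here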

 

 

It's worth mentioning hardware RAID is still a really good option.

- Works with any OS, and you can move the entire array between systems.

- You can change RAID levels (e.g. change RAID5 to RAID6).

- You can add additional disks to an array one at a time.

- You can replace disks with larger ones and grow the entire virtual drive.

Downsides are:

- If your controller dies, you need a compatible replacement; your array is hardware-dependent.

- For maximum speed you need a cache battery, so occasional battery replacements.

- You lose the extra benefits ZFS can offer, like dedup and bit-rot protection.

- All disks need to be attached to the controller, so you need to consider how you'd expand beyond its native ports, e.g. with a SAS expander card.


  • 4 months later...
On 6/10/2020 at 5:37 PM, Jarsky said:

[full post quoted above]

@Jarsky thank you for all the advice.
I've been putting this off for a while.

 

I'm currently maxed out on storage, so I was thinking that for right now I'll get an HBA and 2 HDDs,
and run them as JBOD for the next 2-3 months.

Then in January I'll get Zen 3 and retire the i7-6700K into the file server.
I still haven't figured out which path to take; still gotta figure out what criteria to balance.

 

 

Is this the LSI 9211-8i you mentioned?

It is described as a RAID card.  I'm seeing some comments mention having to flash the firmware on them?  Is that the case?

https://www.amazon.ca/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M

 

If I'm flashing firmware... reading this thread, it recommends the IBM ServeRAID M1015

https://www.ixsystems.com/community/threads/confused-about-that-lsi-card-join-the-crowd.11901/


and that is this card? 

https://www.amazon.ca/IBM-Serveraid-M1015-Controller-46M0831/dp/B0034DMSO6

 

 

What are your thoughts on the IBM ServeRAID M1015 vs LSI 9211-8i?

Thanks!


1 hour ago, Culverin said:

Then in January I'll get Zen 3 and retire the i7-6700K into the file server.

Sounds like a good plan

 

1 hour ago, Culverin said:

Is this the LSI 9211-8i you mentioned?

It is described as a RAID card.  I'm seeing some comments mention having to flash the firmware on them?  Is that the case?

https://www.amazon.ca/SAS9211-8I-8PORT-Int-Sata-Pcie/dp/B002RL8I7M

That is it, yes. The 9211-8i is not really a RAID card: it has some limited RoC (RAID-on-Chip) functionality, but it has no cache, battery/EEPROM backup, etc.

1 hour ago, Culverin said:

If I'm flashing firmware... reading this thread, it recommends the IBM ServeRAID M1015

https://www.ixsystems.com/community/threads/confused-about-that-lsi-card-join-the-crowd.11901/


and that is this card? 

https://www.amazon.ca/IBM-Serveraid-M1015-Controller-46M0831/dp/B0034DMSO6

Yup that is an M1015

 

1 hour ago, Culverin said:

What are your thoughts on the IBM ServeRAID M1015 vs LSI 9211-8i?

They're essentially the same card. The difference is that the 9211-8i is an LSI-branded card, while the M1015 is an OEM card.

In the M1015's case the layout is a bit different because it was made for a rack-mount server, and the OEM cards ship with slightly modified firmware.

 

I'd really recommend looking at eBay for the cards though, as those prices are far too high: ~US$100 for cards that are four generations old now.

 

This is about what the pricing should be, around $30:

9211: https://www.ebay.com/itm/LSI-6Gbps-SAS-HBA-Fujitsu-D2607-A21-FW-P20-9211-8i-IT-Mode-ZFS-FreeNAS-unRAID/143299825144

M1015: https://www.ebay.com/itm/LSI-6Gbps-SAS-HBA-LSI-9200-8i-IT-Mode-ZFS-FreeNAS-unRAID-2-SFF8087-SATA/124178147022

H310: https://www.ebay.com/itm/Dell-PERC-H310-8-Port-6Gb-s-SAS-Adapter-RAID-Controller-HV52W-Replaces-Perc-H200/192120843762

 

They're all the same card. Some vendors sell the M1015 & H310 pre-flashed with the LSI IT-mode firmware, but you can always flash them yourself as well.
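If you do flash one yourself, it usually goes roughly like this (just a sketch; the firmware file names and exact steps vary by card, so follow a guide for your specific model, and note down the SAS address printed on the card first):

sas2flash -listall                          # confirm the card is detected, note its SAS address
sas2flash -o -e 6                           # erase the existing IR/RAID firmware
sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the IT-mode firmware (boot ROM optional)
sas2flash -o -sasadd 500605bxxxxxxxxx       # put the card's original SAS address back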


3 hours ago, Jarsky said:

I'd really recommend looking at eBay for the cards though, as those prices are far too high: ~US$100 for cards that are four generations old now.

Oh wow.  $30 is a fraction of what I expected to spend!
Heck yeah!
😄

You mention that it's 4 generations old?
Is there any reason to get anything newer?  And where should I be reading up so I can learn more about this?


4 minutes ago, Culverin said:

Oh wow.  $30 is a fraction of what I expected to spend!
Heck yeah!
😄

You mention that it's 4 generations old?
Is there any reason to get anything newer?  And where should I be reading up so I can learn more about this?

The latest generation from LSI is the 9500 series: 12Gb/s SAS controllers with NVMe support and PCIe Gen4. There's no point getting those latest controllers for home use with HDDs. You could maybe do the 9300 series if you want to run SSD pools as well, but anything newer than that would be a waste of money.


A suggestion: if you're in the US, look at web hosting forums like webhostingtalk.com. From time to time servers are put up for sale in their "hardware sales" section, and you can score good rackable servers with lots of drive bays, often with motherboards that have 10G NICs on them.

 

For example, here's their "hardware" section: https://www.webhostingtalk.com/forumdisplay.php?f=163

 

 


My unRAID box is built on almost the same hardware you have (mine is an i7-6700 non-K), and it's been rock solid. It serves as my NAS, Plex media server, torrent machine and my Home Assistant machine. Your hardware will work fine, and unRAID will be the cheapest part of the build!

