
Home storage/server upgrade (buy new vs old)

Razor Blade

I am planning on updating my home system fairly soon, so I am hoping to get some wisdom here. To give some background: computers are a hobby. I would consider myself between novice and intermediate when it comes to general knowledge. I know how to build a machine and troubleshoot most issues, but I am still relatively new to things beyond home PCs. Recently I built myself a new PC, so I decided to turn my old PC and the hard drives I had lying around into a NAS...which turned into a NAS + media server...which is turning into project after project...isn't that how this stuff goes?

 

Current NAS is my old gaming PC (very old, and probably the most inefficient hardware I could be running for a NAS; I do know that already):

*Case = Cooler Master HAF 932 with SilverStone RL-FS305B front-bay hot-swap enclosure

*PSU = 1kW Xion AXP

*MB = ASUS Rampage II Extreme (socket LGA 1366)

*CPU = Intel Core i7-940

*RAM = 24GB Corsair Vengeance (6x4GB) DDR3

*HDD = 5x 2TB drives in a RAIDZ2 config (for storage) + 1x 1TB drive (used for jails)

*OS = FreeNAS 11

 

What I use it for currently is data storage and a Plex media server, which it's handled just fine; it's waaaaay more than enough to transcode media or store data. What I'm looking to do is start playing around more with multiple VMs...this is where the hardware is starting to show its age. Plus the hot-swap bay is not great quality, and the rig is very power hungry for what it does...

 

What I'm aiming for... a machine that will obviously store data and function as the Plex media server as I have now, but also be capable of running 1-2 VMs in the home, with the possibility of more in the future. I would like something I can mount in the bottom of a rack. I'm not pressed for space or anything, so even something like a 4U isn't out of the question. What I really need help with right now is determining what hardware would be the best value for my needs. I've looked into old server components on eBay and feel like it's a giant can of worms that I would prefer not to learn about with my wallet. I would need a lot of help figuring out what will work together, and I'd like to avoid proprietary components when possible. At the same time, if I go with consumer-grade stuff I don't want to buy junk either; I know there is a lot of it out there... I've done a bit of searching, but computer hardware builds become outdated quickly, and availability of the mentioned hardware is generally pretty much nil.

 

As far as budget goes I have no idea what this stuff can start costing... I am hoping to aim for something sub $1k USD.

 

Thank you in advance!

There's no place like ~


The cheapest option would be to buy a used, retired Dell or HP server off eBay or the like. As you said, though, a lot of the parts are proprietary. You could buy parts new and use some older-gen hardware like socket LGA1150, LGA1151, or LGA2011, but buying new I don't think you'll get a full server built under $1,000. The Xeon (an Intel server CPU) will eat a good chunk of the budget right off the bat, but I can put together a couple of lists and see what I come up with. Do you have a case in particular that you want? These kinds of budget-limited projects are best pursued by getting the critical components and the system running, then upgrading accessories down the road when more money is available.


If you are planning on continuing to use FreeNAS, get a new processor, board, and RAM that supports ECC.


2 hours ago, combine1237 said:

If you are planning on continuing to use FreeNAS, get a new processor, board, and RAM that supports ECC.

Thanks for the input. I will be sticking with FreeNAS for the moment as that is what I know right now. Though ECC RAM would be nice to have, I will be honest and say that unless I end up with used hardware that supports ECC...I'm probably not going to bother.

2 hours ago, Windows7ge said:

The cheapest option would be to buy a used, retired Dell or HP server off eBay or the like. As you said, though, a lot of the parts are proprietary. You could buy parts new and use some older-gen hardware like socket LGA1150, LGA1151, or LGA2011, but buying new I don't think you'll get a full server built under $1,000. The Xeon (an Intel server CPU) will eat a good chunk of the budget right off the bat, but I can put together a couple of lists and see what I come up with. Do you have a case in particular that you want? These kinds of budget-limited projects are best pursued by getting the critical components and the system running, then upgrading accessories down the road when more money is available.

Thank you for the information, it is very helpful! That was kind of what I figured...though it is really difficult to sort through all the hardware available. I know any old Xeon isn't necessarily better than what I have...or that those cheap $200 2U servers you find all over the place would necessarily do what I am hoping for. I don't have a particular case in mind, but I did want something I could mount in a rack that has a hot-swap backplane for 3.5" SATA HDDs. I've heard you can possibly use SATA HDDs on a SAS backplane, but I haven't looked enough into it to know if it's true.

 

I know this is probably a loaded question, but for Xeon processors, should I look into something like a single 4- or 6-core or a dual 2-core type layout? Does it matter if it's older hardware? The reason I ask is something like this...

 

Dell R710
Dual Xeon E5530
12GB RAM
6x 3.5" drive bays
PERC controller
Dual PSU

 

I can get for under $400 all day long. But is this something even worth considering?


5 minutes ago, Razor02097 said:

Thanks for the input. I will be sticking with freenas for the moment as that is what I know right now. Though ECC ram would be nice to have, I will be honest and say unless I end up with used hardware that supports ECC... I'm probably not going to bother.

If you like FreeNAS (I love it; I've been using it for about 3-4 years now) and your data is important to you, you'll want ECC memory. The reason is that FreeNAS relies heavily on memory when loading data onto disk. When FreeNAS does a disk check, it will compare data on disk to data in memory. If a bit flips in memory (not super common, but it happens), FreeNAS will see that the information in memory is different from the information on disk. FreeNAS will then write over the data on disk, because FreeNAS always considers the data in RAM to be correct or most up to date. If a bit flips in RAM without ECC, it will corrupt files on disk the next time the disk is checked.

 

11 minutes ago, Razor02097 said:

I've heard you can possibly use SATA HDDs on a SAS backplane, but I haven't looked enough into it to know if it's true.

SATA drives WILL work on SAS backplanes/controllers, but SAS drives WON'T work on SATA backplanes/controllers, so what you've heard is correct.

 

12 minutes ago, Razor02097 said:

I know this is probably a loaded question, but for Xeon processors, should I look into something like a single 4- or 6-core or a dual 2-core type layout? Does it matter if it's older hardware? The reason I ask is something like this...

 

Dell R710
Dual Xeon E5530
12GB RAM
6x 3.5" drive bays
PERC controller
Dual PSU

 

I can get for under $400 all day long. But is this something even worth considering?

I'll see if I can find a 4U chassis with 3.5" bays on the front. My server uses the NORCO RPC-4224, but it'd eat almost half your budget, and it has 24 bays...I don't know if you want that many. As for your hardware selection, I think you'll want something a little more modern. You can only give away up to 50% of your CPU/RAM to VMs, so I'd recommend something newer. The Intel Xeon E5-2670 is a nice 8-core/16-thread LGA2011 CPU...if we can find it in budget. It'd allow up to two quad-core VMs, and we'd probably want 32GB of RAM: 16GB for FreeNAS and 8GB for each VM...I think this is over budget. I'll see what I can find.


31 minutes ago, Windows7ge said:

If you like FreeNAS (I love it; I've been using it for about 3-4 years now) and your data is important to you, you'll want ECC memory. The reason is that FreeNAS relies heavily on memory when loading data onto disk. When FreeNAS does a disk check, it will compare data on disk to data in memory. If a bit flips in memory (not super common, but it happens), FreeNAS will see that the information in memory is different from the information on disk. FreeNAS will then write over the data on disk, because FreeNAS always considers the data in RAM to be correct or most up to date. If a bit flips in RAM without ECC, it will corrupt files on disk the next time the disk is checked.

 

SATA drives WILL work on SAS backplanes/controllers, but SAS drives WON'T work on SATA backplanes/controllers, so what you've heard is correct.

 

I'll see if I can find a 4U chassis with 3.5" bays on the front. My server uses the NORCO RPC-4224, but it'd eat almost half your budget, and it has 24 bays...I don't know if you want that many. As for your hardware selection, I think you'll want something a little more modern. You can only give away up to 50% of your CPU/RAM to VMs, so I'd recommend something newer. The Intel Xeon E5-2670 is a nice 8-core/16-thread LGA2011 CPU...if we can find it in budget. It'd allow up to two quad-core VMs, and we'd probably want 32GB of RAM: 16GB for FreeNAS and 8GB for each VM...I think this is over budget. I'll see what I can find.

Thank you, I really appreciate the help, and I didn't know that about FreeNAS. Like I said, I'm still new at this...I guess ECC will be way more important if I actually start using VMs, to avoid data corruption.

 

24 bays would be way overkill for me. I have 6 HDDs right now, with 5 in the main data storage array. I was thinking a minimum of 6 to give some room for expansion of the data volume. Though if I had more than 6 bays, it would give me the option to make a new volume and move everything over without having to back it up first...which is appealing. When I built my new gaming PC, I wish I had known I was going to want to build a server...I would have had more money for this... At the time I was just going to build a NAS out of the old PC, and I never realized how useful having a server could be. The good news is I only have about $80 invested in my current FreeNAS rig for the hot-swap backplane, so I won't be shedding many tears if I can't reuse anything.
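For what it's worth, the "make a new volume and move everything over" idea maps onto ZFS's snapshot replication. A minimal sketch, assuming an existing pool named `tank`, a new six-disk pool named `newtank`, and FreeBSD-style device names (all names here are hypothetical, not from this thread):

```shell
# Hypothetical device and pool names -- adjust to your hardware.
# Create the new six-disk RAIDZ2 pool:
zpool create newtank raidz2 da0 da1 da2 da3 da4 da5

# Take a recursive snapshot of the old pool, then replicate it,
# datasets and properties included, into the new pool:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank
```

The `-R`/`-F` pair sends the whole dataset tree in one stream, which is why having spare bays for both sets of disks at once makes this so convenient.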


28 minutes ago, Razor02097 said:

Thank you I really appreciate the help and I didn't know that about freenas. Like I said I'm still new at this... I guess ECC will be way more important if I actually start using VMs to avoid data corruption.

 

24 bays would be way overkill for me. I have 6 HDDs right now with 5 in the main data storage array. I was thinking a minimum of 6 to give some room for expansion of the data volume array. Though if I had more than 6 bays, it would give the option to make the new volume and move everything over without having to back it up first...which is appealing. When I built my new gaming PC I wish I knew I was going to want to build a server...I would have had more money for this... at the time I was just going to build a NAS out of the old PC but never realized how useful having a server could be. The good news is I only have about $80 vested in my current freenas rig for the hot swap backplane. So I won't be shedding many tears if I can't reuse anything.

If you know places to buy these cheaper, then awesome, but buying them like this they will eat your budget. It's about everything you'll need electrically:

 

SUPERMICRO X9SRA - $252.99

Noctua NH-D9L - $54.95

CORSAIR RMx Series RM750X - $143.74 (ordinarily you'd want a redundant server PSU, but for a little home job this will do just fine; it won't be under full load 100% of the time. A UPS is recommended too, as FreeNAS has another quirk that I can explain if you want.)

Kingston 32GB (4 x 8GB) - $399.99 (The price of RAM went up quite a bit. This used to cost $214.99.)

Intel Xeon E5-2670 - $160.00 (This is a refurb, but CPUs almost never die. If you want new, there's one on the site for an additional $13.)

 

Total: $1,013.66 (that includes shipping, but to my general region of the world; yours will vary a little). Unfortunately this leaves you without a case. The good news is the motherboard is ATX, so you could install this into a mid-tower case until you have the budget for a proper chassis. The specific CPU heatsink I picked is designed to suit server chassis down to 3U. Here's a picture of my 4U server with two of them installed:

[Photo: a 4U server with two NH-D9L coolers installed]

They're small, don't overhang the RAM, and are quite effective for their size; by server standards they're also not very loud.

 

As for the chassis, you might consider any of these solutions down the road; they can be expanded to hold all your drives and support the motherboard and PSU:

iStarUSA D-4-B350PL-RED (could buy it with this to get 5 more bays)

iStarUSA D-400-6-Blue (could buy it with two of these to get exactly 8 bays)

Rosewill RSV-L4412 (12 bays by default. Might replace the fans inside with higher-quality ones though.)

NORCO RPC-4308 (8 bays but no option to go beyond that)

NORCO RPC-4216 (16 bays and two 5.25" bays)

There are many other options; some require SFF-8087 cables and controller cards.


Wow, thank you for all the information! I'll look around to see if I can find anything cheaper, but Newegg is usually pretty good when it comes to that. I will have to watch eBay as well. I would almost be tempted to cheap out on the RAM and use the DDR3 I have now, but I remember reading that server motherboards can be picky and often won't work with normal desktop memory. I'll have to read up more on backplanes and SAS, as well as RAID controllers. I would like to keep the RAID software-managed, so I will have to find a card that will allow that.

 

As far as the chassis goes, I guess it would be better to cough up the money, because adding hot-swap enclosures usually adds about $100 for each unit. So I guess that's something to consider too.


No point going for other LGA1366 gear. As long as you aren't pushing your CPU, your efficiency is going to be about the same. Those old Nehalems have C3/C4 stepping, so they are fairly efficient when not under load.

 

As far as VMs go, it would really depend on what you intend on doing with those VMs. My old box had just a Phenom II 550BE unlocked, and it could happily run my 2 Linux VMs and my Windows VM. With that in mind, that kind of setup could happily run on something like this:

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel - Core i3-7350K 4.2GHz Dual-Core Processor  ($157.46 @ Amazon) 
CPU Cooler: Noctua - NH-U12S 55.0 CFM CPU Cooler  ($57.99 @ Amazon) 
Motherboard: ASRock - E3V5 WS ATX LGA1151 Motherboard  ($101.98 @ Newegg) 
Memory: Crucial - 16GB (1 x 16GB) DDR4-2133 Memory  ($179.74 @ Newegg) 
Memory: Crucial - 16GB (1 x 16GB) DDR4-2133 Memory  ($179.74 @ Newegg) 
Storage: Samsung - 850 EVO-Series 250GB 2.5" Solid State Drive  ($99.89 @ OutletPC) 
Case: Phanteks - Enthoo Pro M ATX Mid Tower Case  ($69.99 @ Newegg) 
Power Supply: Corsair - RMx 750W 80+ Gold Certified Fully-Modular ATX Power Supply  ($109.99 @ Amazon) 
Total: $956.78
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2018-02-21 22:06 EST-0500

 

Add in an LSI 9311-8i and it would be an efficient system around budget.

Spoiler

Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 

Spoiler

Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


7 hours ago, Windows7ge said:

If you like FreeNAS (I love it; I've been using it for about 3-4 years now) and your data is important to you, you'll want ECC memory. The reason is that FreeNAS relies heavily on memory when loading data onto disk. When FreeNAS does a disk check, it will compare data on disk to data in memory. If a bit flips in memory (not super common, but it happens), FreeNAS will see that the information in memory is different from the information on disk. FreeNAS will then write over the data on disk, because FreeNAS always considers the data in RAM to be correct or most up to date. If a bit flips in RAM without ECC, it will corrupt files on disk the next time the disk is checked.

 

All modern OSes do this; ECC isn't any better on ZFS compared to NTFS, Btrfs, ReFS, or others. A memory error will cause data corruption if it happens at the wrong time. It's unlikely, but possible.

 

ZFS won't start writing over the disk due to incorrect memory. It makes multiple checks before overwriting, and a simple memory error won't trigger them. It checksums the block to be replaced, so it can't put back a corrupted block during a scrub.

 

So yeah, ECC is nice to have, but nothing about ZFS makes it need ECC more than any other modern storage solution.


28 minutes ago, Electronics Wizardy said:

All modern OSes do this; ECC isn't any better on ZFS compared to NTFS, Btrfs, ReFS, or others. A memory error will cause data corruption if it happens at the wrong time. It's unlikely, but possible.

 

ZFS won't start writing over the disk due to incorrect memory. It makes multiple checks before overwriting, and a simple memory error won't trigger them. It checksums the block to be replaced, so it can't put back a corrupted block during a scrub.

 

So yeah, ECC is nice to have, but nothing about ZFS makes it need ECC more than any other modern storage solution.

Good to note. I'll have to research what really makes ECC memory a standard for these types of storage solutions. There must be more to it that I'm not aware of.


1 minute ago, Windows7ge said:

Good to note. I'll have to research what really makes ECC memory a standard for these types of storage solutions. There must be more to it that I'm not aware of.

ECC is great to have. Memory errors are random and can cause issues. It's just that some people have way overhyped the problem and made it look like a ZFS thing, not an all-computers thing.

 

All modern systems have data going through RAM before it reaches the program, and they use unused RAM as a disk cache (it's in the background; you don't notice it).


21 minutes ago, Electronics Wizardy said:

ECC is great to have. Memory errors are random and can cause issues. It's just that some people have way overhyped the problem and made it look like a ZFS thing, not an all-computers thing.

 

All modern systems have data going through RAM before it reaches the program, and they use unused RAM as a disk cache (it's in the background; you don't notice it).

This reminds me of a misconception someone here on the forum cleared up for me: it's wrong when someone says that when using ZFS you have to have 1GB of RAM per TB of storage. I was even linked to a discussion group where the developers spoke about how they don't know where that misconception came from, and that you could run an exabyte of storage off just a few GB. I see why this would be true. ZFS just uses excess RAM as a cache for files; it's not a necessity. Circumstantially, more RAM means more performance, but only when reading large quantities of files from the arrays. The most frequently used files get cached in RAM for faster access, and the more RAM you have, the more files you can cache. "Faster" isn't quite the right word to describe what excess RAM does for ZFS.


4 hours ago, Razor02097 said:

I would like to keep the RAID software-managed, so I will have to find a card that will allow that.

ZFS isn't picky about what controllers it's plugged into, so long as it has direct access to the drives. If you really want dedicated HBA cards to plug the drives into instead of the miscellaneous onboard SATA, the LSI 9207-8i is pretty good. It operates the drives in JBOD mode, so ZFS is happy. A little pricey, though. Alternatively, if you want to save a buck, there are 8-port options that are refurbs, like the Dell PERC H310. If you're comfortable playing with DOS, you can flash the card's firmware to IT mode so the server sees it as just a bunch of drives once plugged in. It'll take some work, but it's only $45.00 (not including shipping). You do have to be wary, though: if the controller starts to fail it could corrupt data on disk, and I don't believe FreeNAS will be able to tell you it's the controller. Schedule periodic scrubs and check them for errors; if a lot come up, it might be the card.
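The scrub advice above can be configured in the FreeNAS web UI, but the underlying commands are simple; a sketch assuming a pool named `tank` (the pool name is hypothetical):

```shell
# Kick off a scrub manually; FreeNAS also lets you schedule these in the GUI.
zpool scrub tank

# Check progress, and look for checksum/read/write error counts per disk --
# errors concentrated across many disks at once can point at the controller.
zpool status -v tank
```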


For what I want to do with VMs... I wanna mine crypto, bruh! No, I'm kidding...I haven't played around with them enough to know what they're capable of yet. Starting out, I would like a simple Windows 7 VM for legacy programs I can't run properly on 10, plus a Windows 10 VM I can use to install programs I don't want to clog up my main computer with... Also, if the VM had enough horsepower, it would be nice if I could move the job of encoding media content onto another computer... It doesn't take that many resources from my main computer, but it is CPU intensive, especially if I'm using my SSD...HandBrake chews up those frames! I don't have room to set up tons of computers in the house, so it would be nice if I could push all of this work onto my NAS...plus I figure it would be far more power efficient.
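Offloading encodes to a server VM would boil down to running HandBrake's command-line build there instead of the desktop GUI; a rough sketch (the file paths are hypothetical, and preset names vary between HandBrake versions):

```shell
# Encode on the server so the desktop CPU stays free.
# -i/-o are the input and output files; --preset picks a built-in profile.
HandBrakeCLI -i /mnt/tank/media/input.mkv \
             -o /mnt/tank/media/output.mp4 \
             --preset "Fast 1080p30"
```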

 

Good to know that about ZFS. If I'm going for more drives than the mobo can support, I'll have to look for a controller also. Right now 6 drives at a mere 6TB is sufficient, but the disks I'm using are older and a mixed bag of WD Green, Black, Seagate, etc. That's the reason I'm running RAIDZ2 and scheduling S.M.A.R.T. tests. I'm also trying to remember to back up my data onto my main computer weekly, in case the whole NAS decides to hop off the table and roll out the door... Eventually, if I'm going to get something more legit, I'll have to invest in some drives too...hence the smaller budget.
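For the S.M.A.R.T. testing mentioned above, FreeNAS drives smartmontools under the hood; the equivalent shell commands, assuming a drive at `/dev/ada0` (device name hypothetical):

```shell
# Start a long (full-surface) self-test; it runs in the drive's firmware.
smartctl -t long /dev/ada0

# Later, check the overall health verdict and the self-test log:
smartctl -H /dev/ada0
smartctl -l selftest /dev/ada0
```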


20 hours ago, Razor02097 said:

Thanks for the input. I will be sticking with FreeNAS for the moment as that is what I know right now. Though ECC RAM would be nice to have, I will be honest and say that unless I end up with used hardware that supports ECC...I'm probably not going to bother.

Thank you for the information, it is very helpful! That was kind of what I figured...though it is really difficult to sort through all the hardware available. I know any old Xeon isn't necessarily better than what I have...or that those cheap $200 2U servers you find all over the place would necessarily do what I am hoping for. I don't have a particular case in mind, but I did want something I could mount in a rack that has a hot-swap backplane for 3.5" SATA HDDs. I've heard you can possibly use SATA HDDs on a SAS backplane, but I haven't looked enough into it to know if it's true.

 

I know this is probably a loaded question, but for Xeon processors, should I look into something like a single 4- or 6-core or a dual 2-core type layout? Does it matter if it's older hardware? The reason I ask is something like this...

 

Dell R710
Dual Xeon E5530
12GB RAM
6x 3.5" drive bays
PERC controller
Dual PSU

 

I can get for under $400 all day long. But is this something even worth considering?

It is worth it. I didn't see any large VM use planned beyond making a NAS, so 12GB of RAM is enough, but I personally would just upgrade it to 32GB for those other VMs you want. My servers all have 128+ GB of RAM, and I tend to use a good chunk of each.


It really helps to have all the input! I think I'll take this in logical stages. First I will plan on getting a quality permanent case and PSU, along with some NAS hard drives to shore up my current setup. Then in stage 2 I will plan on getting a new mobo, CPU, and RAM. This way I don't have to compromise now just to gain VM functionality, only to possibly regret it later...especially with the possibility of RAM prices going back to normal.

 

 

18 hours ago, Windows7ge said:

A UPS is recommended too, as FreeNAS has another quirk that I can explain if you want.

I do have a UPS currently, though it isn't a fancy one...I haven't been using the smart function on it yet, as I'm still learning things on FreeNAS and don't want to mess anything up. What was the quirk mentioned about the UPS?


44 minutes ago, KirbyTech said:

It is worth it. I didn't see any large VM use planned beyond making a NAS, so 12GB of RAM is enough, but I personally would just upgrade it to 32GB for those other VMs you want. My servers all have 128+ GB of RAM, and I tend to use a good chunk of each.

Depends on what I can do with it. I don't know if I would necessarily use giant chunks of RAM right now, but I agree it would be nice to divide up more than 4GB of RAM between VMs and not be limited when the NAS needs to transcode or use RAM for FreeNAS to do its thing. From everything I've read, and even the minimum requirements for FreeNAS, you need a minimum of 8GB of RAM to be safe, so 12GB would be cutting it close for VMs too.

 

I think I've decided against getting the old Dell server for now. I may end up building something myself, which is fine with me...I would prefer to mess with hardware I'm more familiar with.


6 minutes ago, Razor02097 said:

Depends on what I can do with it. I don't know if I would necessarily use giant chunks of RAM right now, but I agree it would be nice to divide up more than 4GB of RAM between VMs and not be limited when the NAS needs to transcode or use RAM for FreeNAS to do its thing. From everything I've read, and even the minimum requirements for FreeNAS, you need a minimum of 8GB of RAM to be safe, so 12GB would be cutting it close for VMs too.

 

I think I've decided against getting the old Dell server for now. I may end up building something myself, which is fine with me...I would prefer to mess with hardware I'm more familiar with.

Buy a Dell R710 or R610; both would be great for you. They are solid and much better than using an i5 or similar consumer product. If it is a NAS you want, get the dual power supplies, and always use a UPS. Consumer gear just doesn't give you 2 PSUs.

 

Another thing is that RAM for the Dell servers is cheap. I just bought a kit of 96GB for ~$150 USD; beat that with a new DDR4 build... Really though, a proper server will serve you best.


9 hours ago, Razor02097 said:

What was the quirk mentioned about the UPS?

@Electronics Wizardy can correct me if this is also false about ZFS, but I've been told the following: since this is software RAID, it lacks what hardware RAID has, where when power is lost, a battery attached to the RAID controller holds the block being written to the array in the controller's RAM until power is reapplied, at which point it finishes writing the block. That prevents further data corruption beyond the one file that didn't finish copying. Software RAID doesn't have this, so if power is lost while data is being written to the array, bad things will happen; a UPS is highly recommended.

 

FreeNAS has a service you can enable for connecting a UPS. This requires a communication cable, though, so the UPS can talk to FreeNAS and tell it how much charge the battery has and whether wall power has been lost. You can then program it to shut the server off either immediately or when the battery gets low.
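That UPS service is built on Network UPS Tools (NUT); once it's enabled and the communication cable is connected, you can query the UPS from a shell. A sketch, assuming the UPS was registered in the service under the name `ups` (the name is whatever you configure):

```shell
# List every variable the UPS reports (status, charge, runtime estimate...):
upsc ups@localhost

# Or query just the battery charge percentage:
upsc ups@localhost battery.charge
```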


18 minutes ago, Windows7ge said:

@Electronics Wizardy can correct me if this is also false about ZFS, but I've been told the following: since this is software RAID, it lacks what hardware RAID has, where when power is lost, a battery attached to the RAID controller holds the block being written to the array in the controller's RAM until power is reapplied, at which point it finishes writing the block. That prevents further data corruption beyond the one file that didn't finish copying. Software RAID doesn't have this, so if power is lost while data is being written to the array, bad things will happen; a UPS is highly recommended.

 

FreeNAS has a service you can enable for connecting a UPS. This requires a communication cable, though, so the UPS can talk to FreeNAS and tell it how much charge the battery has and whether wall power has been lost. You can then program it to shut the server off either immediately or when the battery gets low.

ZFS handles this in two ways depending on how you are writing a file.

 

If you aren't using sync, the file goes to RAM, and the operation is then reported as complete; it gets written to disk later. If a power failure happens at that point, you will lose data that has already been reported as written.

 

If you are using sync, it will force the write to the drive before confirming that the request is done. This means you will never lose data reported as written during a power failure.

 

You can force sync writes for all uses with zfs set sync=always.

 

The problem is that sync is normally very slow, so it's not used for CIFS, iSCSI, and many other uses. To get past this you have an optional SLOG, which stores the data that has been reported as written but is not yet on the main disks. This is normally a high-endurance SSD (it's only written to; the SLOG is only read from during power-failure recovery).
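The property and the SLOG described above are set with ordinary zfs/zpool commands; a sketch assuming a pool named `tank` and an SSD at `ada6` (both names hypothetical):

```shell
# Force synchronous semantics for every write to the pool, then verify:
zfs set sync=always tank
zfs get sync tank

# Attach the SSD as a separate log device (SLOG) to absorb sync writes:
zpool add tank log ada6
```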

 

Hope this helps.


The Dell R710 is a very popular home server nowadays, even among IT professionals.

 

What I do wonder, however, is what types of VMs you are wanting to run. The first thing that will stop you is the RAM capacity of the server itself.

 


22 hours ago, Windows7ge said:

This reminds me of a misconception someone here on the forum cleared up for me: the advice that ZFS needs 1GB of RAM per TB of storage is wrong.

 

I don't think it's so much wrong as an outdated concept based on the factors at the time. This was said years ago, when we were still dealing mostly in 1-2TB disks. So if you had a ZFS RAID with 8 x 1TB disks, you would have wanted 8GB to be able to handle your ARC, scrubbing, dedup, etc. But that minimum memory requirement hasn't really increased with the rate of disk size increases. Back then it was quite common for people to be using 2-4GB of RAM for storage, so arrays with more disks couldn't handle all of the ZFS functions without following this rule. Now that 8-16GB is common ground, the concept is basically defunct, but it was a bit of a reality 7-8 years ago.



6 hours ago, Electronics Wizardy said:

This is normally a high-endurance SSD (it's only written to; the SLOG is only read during a power-failure recovery).

I looked into this about a year ago. You can configure it for reads or writes to a specified array; however, my efforts to use an SSD as a log device when writing to the array failed. It turns out it only works for sync reads/writes, not async, and a file server falls under async. In theory it should work well for applications such as databases and VMs, though.
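One way to check whether sync writes are actually landing on a log device is to watch per-vdev activity while copying files (the pool name tank here is an example):

```shell
# Report per-vdev I/O once per second; the "logs" section only shows
# write activity when sync writes are actually hitting the SLOG
zpool iostat -v tank 1

# Confirm the sync setting on the dataset being written to
zfs get sync tank/share
```

If the log device stays idle during an SMB copy, the writes are async and the SLOG is (correctly) being bypassed.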


1 minute ago, Windows7ge said:

Turns out it only works for sync reads/writes not async. A file server falls under async.

You can force it to sync if you want, but normally it's not worth the speed loss to protect the last ~5 seconds of files after a power loss. ZFS is good about only losing the files in progress, not corrupting the whole volume.

 

For example, with NFS you can set sync=always on the dataset to force sync writes.

 

 

