
Planning first NAS build, lots of questions though

Solved by MrBucket101

I started using prebuilt NAS solutions a few years ago. First a small single-disk LaCie, and it evolved from there. Currently I have 2 prebuilt units running:
- LaCie 2big Network with 2x OCZ Vertex 4 128GB in RAID 0 : fast data storage (don't ask, I had the SSDs lying around after a PC upgrade and this was a good use for them)
- WD EX4 with 4x WD Green 3TB in RAID 0 : software/ISO and video storage (again, don't ask. I already had these HDDs when I bought the NAS and wanted to get it up and running on the cheap)
 
On the EX4 I have corruption in some video files, probably due to bit flips or perhaps disk errors.  Unfortunately my backup has the same problem.
I guess it's time to get serious and build something that offers me some data protection.  That would also enable me to put everything in a single enclosure.

 

I've been reading up on FreeNAS and ZFS a bit and do understand some of what I'm reading.  However I have plenty of questions left.
 
 
 

 

Currently I'm using close to 8TB of the 10.4TB I actually have on the big NAS, so upgrading to 16 TB of usable space seems like a good idea.  It's unlikely that I'll need more than that in the next decade or so. 
1 ) Am I right in thinking that for proper data protection (ability to recover from file corruption, complete HDD failure, etc.) the only really good option is to use mirrored vdevs?  In that case I guess I'll need to get 32TB (8x 4TB WD Red)
 
On the SSD side, I'm currently using around 100GB, so 250GB should give me plenty of room to grow.  Ideally I'd like the same protection, so I guess that'll be a mirrored vdev too.  (I'm thinking 2x Crucial MX200 250GB here)
 
The next question would be the RAID card and motherboard.  So far I'm thinking I'll need 8 HDDs and 2 SSDs.  In the 10TB+ club I see some people using RAID cards, others using the motherboard's own SATA ports. 
2 ) Should I use a motherboard that has 10 SATA ports (if those exist) or should I use SATA cards or RAID cards? 
3 ) If I need a card, SATA card or RAID card? And what should I look for in a card?  Transfer speed isn't that important really; all that matters to me in terms of speed is seek/access time for the data, which is why I went SSD in my current LaCie NAS to begin with. 
 
Processing power doesn't seem that important according to various FreeNAS guides, but some of those guides are quite old. 

In the 10+TB club I'm seeing low-end Celerons as well as i7s and Xeons being used. I myself was thinking in the region of a 3 - 3.5GHz dual-core. 
4 ) Is that too little, enough or overkill?
 
I read rules of thumb like "1GB of RAM per 1TB of storage", so I assume that 16GB of RAM should be enough for the intended 16.25TB of storage. 
5 ) Or am I completely wrong and do I need to count the actual amount of raw disk space rather than what will be usable?  In that case I'd need 32GB

 

 

 

 

@ the mods : I am not sure if this belongs in Storage Solutions or in New Builds And Planning.  I posted in the latter, but feel free to move it if you feel it belongs in the former instead.


1. Idk.

2. If you are going to run disks in RAID, then get a RAID card or two.

3. RAID card. (Not sure what to look for in one, though.)

4. That would be fine. I'd go for maybe a Pentium G3258 or similar if it's just going to be used as a NAS. (Though I myself would probably make a NAS/server combo :P)

5. Not too sure on the "1GB per 1TB" rule; I'd guess that 8GB of RAM would be more than enough for a NAS. --edit-- I think the 1GB per 1TB rule is only really there for ZFS.

Specs: CPU - Intel i7 8700K @ 5GHz | GPU - Gigabyte GTX 970 G1 Gaming | Motherboard - ASUS Strix Z370-G WIFI AC | RAM - XPG Gammix DDR4-3000MHz 32GB (2x16GB) | Main Drive - Samsung 850 Evo 500GB M.2 | Other Drives - 7TB/3 Drives | CPU Cooler - Corsair H100i Pro | Case - Fractal Design Define C Mini TG | Power Supply - EVGA G3 850W


I would not run a parity array (RAID 5/6) on WD Reds. They have Error Recovery Control (Time-Limited Error Recovery) so they won't drop out of an array over small errors, but their unrecoverable read error rate is the same as other consumer drives, at 1 in 10^14 bits. That URE rate could spell disaster while rebuilding a degraded array.
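
To put a rough number on that risk, here's a back-of-the-envelope sketch using the quoted 1-per-10^14-bits spec and naively treating bit errors as independent (a simplification, not a real failure model):

```python
# Back-of-the-envelope: chance of reading every surviving drive in full
# without hitting an unrecoverable read error (URE) during a rebuild.
# Assumes the spec'd 1 URE per 1e14 bits and independent bit errors,
# which is a simplification of how drives actually fail.

URE_PER_BIT = 1e-14
TB = 1e12          # drive vendors use decimal terabytes

def clean_rebuild_chance(drives_read: int, tb_per_drive: float) -> float:
    bits_read = drives_read * tb_per_drive * TB * 8
    return (1 - URE_PER_BIT) ** bits_read

# 8x 4TB in single parity (RAID 5 / RAIDZ1): one drive dies,
# the remaining 7 must be read end-to-end to rebuild.
print(f"{clean_rebuild_chance(7, 4):.1%} chance of a URE-free rebuild")   # roughly 10-11%
```

Under those crude assumptions, a single-parity rebuild across seven 4TB consumer drives only has around a 1-in-10 chance of finishing without a single URE, which is the usual argument for double parity or mirrors at this scale.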


@Captain Chaos

 

1) RAID IS NOT A BACKUP. Mirrored arrays don't prevent corruption; you simply end up with corrupted data on 2 disks. The ONLY way to truly protect your data is to back it up in multiple places. I know, I know, easier said than done. I personally just dumped 18TB of data to seven 3TB hard drives, labeled the disks, put them in ESD bags and back in their boxes, and then stored them in my basement for cold storage. The data I've accumulated since the dump won't be backed up in the event of a catastrophic failure... but losing 10% is much better than losing 100%.

 

2) Nope, don't even bother purchasing a board just for its massive number of SATA ports. You can save a lot of money using a RAID card or HBA, which uses PCIe lanes to connect the drives. Most cards support 8 drives, and if you get an expander you can run something like 32-ish drives off one card.

 

RAID card or not? Up to you. ZFS software RAID is really powerful, and if you go that route, a RAID card is actually not recommended; they recommend HBAs instead. The LSI 9211 in IT mode is a very cheap HBA, so that's an option if you want.

 

3) All decent RAID cards are pretty much the same, except for the software used to manage them. I would look into the software they use to monitor the arrays, configure them, etc., and see what you like best. I went ahead and purchased my card because of the LSI name, and I had used LSI cards at work before. The problem with this, though, is that I had been using Windows, where the LSI software works flawlessly.

 

It took me nearly 3 months to figure out how to install the LSI MegaRAID Storage Manager server on my Linux OS, and I haven't been able to replicate it since. I have survived just by cloning my drive to an SSD and using that to recover from failures. I can't upgrade the server utility either, so I'm stuck with what I have now. I can't really complain, it works perfectly fine... once I got it installed lol. Before I got MSM installed on my server, I would just reboot the machine and go into the card's BIOS to make changes. I much prefer MSM as it's all laid out in front of me with a GUI.

 

I'm currently very happy with my hardware RAID setup; I get amazing performance out of it, especially with my caching SSDs. Shortly after I got the caching SSDs set up, I ran some benchmarks on my array and nearly maxed out the card on read speed, and wasn't too far behind on write speeds. 

 

4) I would get a low-end i3 to give yourself some room if you do decide to do something else with the system. I run a quad-core Xeon in mine, but I'm running 4-5 virtual machines as well as daily transcoding for Plex. If I ever get the money, I'll definitely be getting a more powerful CPU because I can make use of it. It didn't start that way though; initially my CPU was overkill, then I found some useful things I could do with it and kept adding on. A low-end i3 would allow you to do some other tasks with the system, not just store files. And it supports ECC memory, which the G3258 does not.

 

5) The 1GB of ECC RAM per TB is a recommendation; it is not required. A lot of people don't do it, and a lot of people do. Personally, for me and my data, I would not risk it and would abide by the recommendation. However, with you rocking RAID 0 like a champ, maybe you're that kind of risk taker. 


Check out the links in my sig, where I talk about my server, some updates, and even problems I ran into; it might help you with some choices.


<snip>

 

Thanks for the reply (that goes to the other members too, of course)

 

I know RAID is not a backup, but (if I understand it correctly) mirrored vdevs in ZFS use checksums of the stored data to identify corruption and then check whether the mirror copy is intact so they can repair it.  That's why I was thinking of going down that route. 

Of course even with this redundancy I will still be making backups as I do now (a single backup for video files as I can always rip them again from the original DVDs and Blu-Rays, triple backups for my regular data including one for off-site storage). 

 

Seeing as I plan on using FreeNAS, which has ZFS software RAID, an HBA would be the way to go then.  Nice to know that.

How do you attach more than 2 drives to that LSI card?  I only see 2 ports at the front and that's it.

Also a bit nervous about flashing that LSI.  I've seen @alpenwasser 's tutorial and it really worries me (specific motherboard which I'd have to buy on eBay because no nearby stores sell it anymore, long procedure, etc), so I'll probably just get something that works out of the box.

 

I did look into SATA controller cards.  My local PC store really only has HighPoint cards and I've read far too many horror stories about those.  I'm researching the InLine 76617E right now, but I wonder if I'm even looking at the right kind of card because it costs just a fraction of what the LSI you mentioned tends to cost on eBay (45 EUR for the InLine versus 135 to 300 for the LSI). 

 

As for ECC RAM, I didn't even think of that so I was looking at regular RAM.  Makes sense to go with ECC indeed.


 

<snip>

You can buy the 9211 pre-flashed to IT mode on eBay; it just costs you a couple bucks more for "convenience".

 

SAS ports are magical: they're essentially just 4 SATA ports using one connector. I could get fancy with it, but it doesn't matter that much.

 

You can get a SAS-to-SATA breakout cable: you plug the SAS end into your card and it splits into 4 SATA cables that you plug into the drives. Or, if you get a case that has a SAS backplane, you just connect the SAS port on the card to the SAS port on the backplane and then put drives in the hot-swap bays.

 

SAS expanders are basically just a big splitter, letting you connect even more than 4 drives to one port. Most of the Supermicro 24-bay 4U cases come with built-in expanders.

 

If it were me, personally I would just stay away from SATA cards, they're the past :P


Ahh, I see it now. Weirdly enough, when looking for a pre-flashed card I get a cheaper option on eBay than if I look for just the type. Now we're talking!

So that single card could handle the 8 HDDs and I could then use the board's own SATA ports for the SSDs.

That should settle it then; the rest is just your average PC build minus the graphics card, oh, and using a motherboard and CPU that support ECC, I assume. Pretty sure I can handle that, even though it will limit my options.


1 ) Am I right in thinking that for proper data protection (ability to recover from file corruption, complete HDD failure, etc.) the only really good option is to use mirrored vdevs? In that case I guess I'll need to get 32TB (8x 4TB WD Red)

The backup thing has already been mentioned, but depending on how many drives you have in your vdev, RAIDZ1, RAIDZ2 or RAIDZ3 would also be acceptable. The advantage a RAIDZ<n> setup has is that any n drives can fail before the vdev becomes broken, whereas in a pool made up of mirrors, only drives which are not in the same mirror can break at the same time. However, rebuilding is much faster on mirrors, so the risk of failures during a rebuild is lower. Both approaches have their upsides and drawbacks.
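
To make that concrete for the 8x 4TB layout being discussed, here's a small sketch of the capacity and worst-case fault-tolerance trade-off (ignoring ZFS metadata overhead and TB-vs-TiB differences):

```python
# Usable capacity and worst-case fault tolerance for 8x 4TB drives,
# ignoring ZFS metadata overhead and decimal/binary TB conversion.

DRIVES = 8
SIZE_TB = 4

# Pool of 2-way mirrors: half the raw space; losing both drives
# of the same mirror kills the pool, so worst case is 1 failure.
mirror_usable = (DRIVES // 2) * SIZE_TB      # 16 TB
mirror_worst_case = 1

# Single 8-wide RAIDZ2 vdev: two drives' worth of parity; any 2 drives may fail.
raidz2_usable = (DRIVES - 2) * SIZE_TB       # 24 TB
raidz2_worst_case = 2

print(f"4x mirror vdevs : {mirror_usable} TB usable, survives {mirror_worst_case} failure worst-case")
print(f"8-wide RAIDZ2   : {raidz2_usable} TB usable, survives {raidz2_worst_case} failures worst-case")
```

The flip side, as noted above, is resilver behaviour: rebuilding a failed mirror member only reads its single partner drive, while rebuilding an 8-wide RAIDZ2 reads from every surviving drive.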

 

The next question would be the RAID card and motherboard. So far I'm thinking I'll need 8 HDDs and 2 SSDs. In the 10TB+ club I see some people using RAID cards, others using the motherboard's own SATA ports.

If you use ZFS, you can use a motherboard with lots of ports, but I'd evaluate closely whether a less fancy board plus a host bus adapter such as the LSI 9211-8i is financially more sensible. As @MrBucket101 has said, I wouldn't buy a board with lots of ports merely for that reason. They tend to be in the higher-end range, which means you pay for features you really don't need. However, if you find something with enough ports that is a good choice and doesn't cost a ton, it might be worth a look IMHO. I don't really have a good overview of the market at the moment though, so I'm not sure if something like that actually exists.

 

I read rules of thumb like "1GB of RAM per 1TB of storage", so I assume that 16GB of RAM should be enough for the intended 16.25TB of storage.

5 ) Or am I completely wrong and do I need to count the actual amount of disk space rather than what will be available? In that case I'd need 32GB

It's an oft-quoted rule of thumb, but it's honestly not required in many circumstances. I used to run a 17 TB server with 4 GB of RAM via ZFS on Linux and it worked just fine. The people from FreeNAS have a few words on it as well:

http://www.freenas.org/whats-new/2015/02/a-complete-guide-to-freenas-hardware-design-part-i-purpose-and-best-practices.html
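
For what it's worth, the rule-of-thumb arithmetic both ways looks like this (just the guideline applied to the numbers in this thread, not a hard requirement):

```python
# "1 GB of RAM per 1 TB of storage" applied to the planned build, both ways.
usable_tb = 8 // 2 * 4 + 0.25     # 8x 4TB as mirrors (16 TB) plus the 250 GB SSD mirror
raw_tb = 8 * 4 + 2 * 0.25         # every drive counted at its full size

print(f"counting usable space: ~{usable_tb} GB of RAM")    # ~16.25 -> 16 GB fits
print(f"counting raw space:    ~{raw_tb} GB of RAM")        # ~32.5  -> would mean 32 GB+
```

So 16GB comfortably covers the rule if you count usable space, and as noted above even that isn't a strict floor.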

 

@ the mods : I am not sure if this belongs in Storage Solutions or in New Builds And Planning. I posted in the latter, but feel free to move it if you feel it belongs in the former instead.

Fine in either, no worries.

 

I know RAID is not a backup, but (if I understand it correctly) mirrored vdevs in ZFS use checksums of the stored data to identify corruption and then check whether the mirror copy is intact so they can repair it. That's why I was thinking of going down that route.

The checksumming and parity calculations are also done in RAIDZ<n>, see above.
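
As a purely conceptual illustration of what that self-healing read looks like, here's a toy model (not how ZFS is actually implemented; ZFS checksums individual blocks/records rather than whole files and does all of this transparently in the filesystem):

```python
import hashlib

# Toy model of a self-healing mirrored read: each "block" is stored twice,
# along with the checksum recorded when it was written.

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def write_block(data: bytes) -> dict:
    return {"copies": [bytearray(data), bytearray(data)], "sum": checksum(data)}

def healing_read(block: dict) -> bytes:
    for copy in block["copies"]:
        if checksum(bytes(copy)) == block["sum"]:
            # Found a good copy: rewrite any copy that no longer matches.
            for i, other in enumerate(block["copies"]):
                if checksum(bytes(other)) != block["sum"]:
                    block["copies"][i] = bytearray(copy)
            return bytes(copy)
    raise IOError("both copies corrupted -- only a real backup helps now")

blk = write_block(b"important video frame")
blk["copies"][0][3] ^= 0xFF                            # simulate a bit flip on one "disk"
assert healing_read(blk) == b"important video frame"   # the read still returns good data
assert blk["copies"][0] == blk["copies"][1]            # and the bad copy got rewritten
```

The catch, as mentioned earlier in the thread, is that this only protects data that was checksummed correctly in the first place; anything mangled in RAM before the write lands identically corrupted on both copies, which is part of the argument for ECC memory.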

 

Seeing as I plan on using FreeNAS, which has ZFS software RAID, an HBA would be the way to go then. Nice to know that.

How do you attach more than 2 drives to that LSI card? I only see 2 ports at the front and that's it.

Also a bit nervous about flashing that LSI. I've seen @alpenwasser 's tutorial and it really worries me (specific motherboard which I'd have to buy on eBay because no nearby stores sell it anymore, long procedure, etc), so I'll probably just get something that works out of the box.

You don't necessarily need to buy that specific board; it might work perfectly fine with what you have, so I recommend trying it out first. It's just that among the boards I had, that was the only one which worked.

EDIT: Or pre-flashed, that works too. /EDIT

 

As for ECC RAM, I didn't even think of that so I was looking at regular RAM. Makes sense to go with ECC indeed.

Yes, definitely do that if you're running ZFS.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Ahh, I see it now. Weirdly enough, when looking for a pre-flashed card I get a cheaper option on eBay than if I look for just the type. Now we're talking!

So that single card could handle the 8 HDDs and I could then use the board's own SATA ports for the SSDs.

That should settle it then; the rest is just your average PC build minus the graphics card, oh, and using a motherboard and CPU that support ECC, I assume. Pretty sure I can handle that, even though it will limit my options.

I remember looking at the 9211 last year and it was around $110 USD. Maybe the prices of them have gone up recently. Either way, that's still a reasonable price to pay for the card.

 

You can plug the SSDs in wherever you like. ZFS doesn't care what interface the drive uses, so long as it's attached to the system. If it helps you stay better organized, then by all means do it.


That should settle it then; the rest is just your average PC build minus the graphics card, oh, and using a motherboard and CPU that support ECC, I assume. Pretty sure I can handle that, even though it will limit my options.

Intel has a pretty nice filter on their ARK which allows you to filter out CPUs which support ECC: http://ark.intel.com/search/advanced?s=t&ECCMemory=true

You'll probably still need to refine the search a bit to narrow down the results, but it should provide a starting point.

Not that you need to get an Intel CPU, but I do find that filter rather practical.

 

I remember looking at the 9211 last year and it was around $110 USD. Maybe the prices of them have gone up recently. Either way, that's still a reasonable price to pay for the card.

I recently looked at the 9211-8i again, and found far fewer offerings than I did last year when I bought the parts for my server; I think the market might have dried up a bit. They're still there, but seem less numerous to me.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


After some digging and searching I'm currently considering the following setup :

Case : Fractal Define R4 (due to noise dampening and having enough room for all the drives) : 99.9 EUR

Case Fans : 2x BeQuiet Silent Wings 2 140mm : 51.98 EUR (I will use 3 fans, but I happen to have a SW2 laying around so I only need to buy 2)

MoBo : Intel Server S1200V3RPS (gonna have to source this on eBay I'm afraid, not available locally) : 217.9 EUR

CPU : Intel Core i3-4170 (Intel says that's ECC compatible) : 134.9 EUR

CPU Cooler : BeQuiet Shadow Rock Slim : 49.99 EUR

RAM : 4x Kingston ValueRAM 4GB ECC DDR3-1600 : 207.6 EUR

RAID card : LSI 9211-8i flashed to IT mode : 144 EUR (paying extra for courier as other option is regular untracked mail)

Card's cables : SAS to 4x SATA 10Gbps : 23.96 EUR

USB cable for stick : Startech USB A to motherboard USB 4-pin : 5.22 EUR

SSDs : 2x Crucial MX200 250GB : 219.8 EUR

HDDS : 8x WD Red 4TB : 1439.2 EUR

HDD power cables : 2x BeQuiet CS-6940 : 42 EUR

Power-wise, I still have my BeQuiet Dark Power Pro P9 650W PSU that worked fine when I removed it from my PC (had to upgrade due to going SLI), so I'll use that.

That would bring the grand total to 2636.45 EUR.

I could go with the Intel cooler, which would keep it below 2600, but the aftermarket one is probably quieter and should be good for 30+ years if the 300k-hour claim is somewhat accurate.

I've checked for compatibility. All is ECC, everything supports or is DDR3-1600, the cooler fits in the case with 10mm to spare, it doesn't interfere with the RAM, the motherboard can be mounted in the case, the power supply is a regular ATX size so it should fit.

Am I missing something? (I have SATA cables for the SSDs, a collection of sticks to choose from and various other bits and bobs, so you can ignore the small stuff)

Any further suggestions?


@Captain Chaos

 

Take a look at this stuff on eBay and see if it interests you. It's EOL enterprise hardware, perfectly acceptable for home use.

 

http://www.ebay.com/itm/4U-Supermicro-846TQ-R900B-24-Bay-Storage-Server-X8DTE-2x-Xeon-X5570-3x-SASLP-MV8-/141680123025?pt=LH_DefaultDomain_0&hash=item20fccb1c91

http://www.ebay.com/itm/4U-Supermicro-24-Bay-Internal-JBOD-Server-X8DTE-F-2x-E5520-2-26Ghz-LSI-SAS3081E-/141679999689?pt=LH_DefaultDomain_0&hash=item20fcc93ac9

 

$561+1439 = 2k EUR. Save a bit of money and get some more powerful hardware, and access to 24 hot swap bays. 

 

You don't need that kind of cooler on your CPU. The stock HSF will be fine, maybe a 212 EVO if you must. But that is insanely overkill.

 

Mobo looks good. You could also look at the workstation boards by ASUS, or some of the boards by Supermicro. Just my personal prefs on the brands.

 

I would get 2x 8GB sticks of RAM instead of 4x 4GB sticks. It gives you room to grow and shouldn't cost much extra.

 

How much is the 850 EVO in your area? PCPartPicker has it listed at 48 EUR for the 120GB version and 80 EUR for the 250GB. 250GB might be a bit much; are you planning on running them in RAID 1? I've got a 120GB SSD for my OS drive on my server, and after 4 years I've only filled up 75GB, and that's because I store some personal files on the drive. If you think you need 250GB though, then by all means get the 250GB version.

 

Compatibility-wise, you're golden.

EDIT: I noticed I missed some stuff about the RAID configuration. I would recommend RAIDZ2: 2 drives can fail, and you can easily expand if need be. To each their own though.


The rack is tempting, but I just don't have room for it in my apartment unless I put it in the living room or bedroom, which is not an option due to noise.

On its first day I moved the EX4 to an unused corner in the kitchen because its 4 HDDs were too noisy. That corner is also where I plan to put the new NAS.

Seeing as that space is 14.5" wide and a rack is 19", the rack just wouldn't fit. I'm not going to put it on its side and I can't start moving kitchen cabinets around either.

The 850 EVO is 109.90 EUR at my local PC store, making it exactly as expensive as the MX200. Considering the enterprise-like data protection features on the MX200, it's just the better tool for the job.

2x8GB of RAM instead of 4x4GB sounds like a good idea indeed. I'll look some up and adjust my spreadsheet accordingly.

The CPU cooler (and the cooling in general) is complete overkill indeed, but I know for a fact that these parts will be dead silent even when the CPU is working hard.

It's also nowhere near as overkill as going full mental and cramming a Dark Rock Pro 3 in there, so ... err ... actually, come to think of it ... with low-profile memory the Pro 3 should fit in there just fine. Hmmm

To clear up the confusion about the HDD/SSD setup :

The OS (FreeNAS) won't be going on the SSDs; I plan on using a USB stick. That seems to be the most common way, because FreeNAS needs its own dedicated physical drive that can't be used for shares.

I did consider putting it on a 32GB Adata SSD though. It'd only boost startup speed really, so not sure if that's worth the extra 25 EUR.

The SSDs are for my regular data and work stuff, things that I need to access frequently and where seek/access time is my only real priority. (that's why I'm running SSDs in the Lacie NAS right now)

Transfer speed doesn't matter as my network and the PC's NIC can only handle 1Gb anyway, not to mention that the laptop and netbook won't get anywhere near those speeds on WiFi.

I'll be putting the SSDs in ZFS' RAID 1 indeed, giving me 250GB with full redundancy. Right now I'm using 101GB so 250GB should give me plenty of extra room. The 128GB ones I currently have would have been big enough, but only just. If I were to use those, I'd have to upgrade soon anyway.

The HDDs are intended for storage of videos, with a TB or so set aside for software installers, ISO files and a primary backup of the SSDs.

Well, I guess it's time to order the parts then.


@Captain Chaos

 

Yeah, the rack cases can take up quite a bit of room. I don't actually have a rack for mine, so I have it resting on an old coffee table in my basement. The fans are surprisingly quiet compared to my furnace :P

 

I see you've done your research on FreeNAS; I didn't know you could install it to a jump drive. Good to know :) In that regard, yeah, the drive setup you've chosen sounds great.

 

Can't argue with you if you want silence. I would see what kind of temps you get under load without the CPU fan. Can't get any quieter than passive cooling. 

 

If it were me, I would just use the USB stick for the boot disk. No need to spend another 25 EUR and take up one of your precious SATA ports; worse comes to worst, you can use those to expand. A lot of server/workstation-class mobos actually have full-blown USB ports located internally for this very purpose. My mobo has 2 of them in fact; didn't know what they were for at first lol

 

Yes, that is the correct cable, and the price seems really good. Is it used? I remember I ended up having to pay $20 USD for mine when I first got my card, though I bought them brand new off Newegg. Probably my mistake.

 

I'm just going to throw this out there because I went with this drive for my server and I'm very happy with them. Have you looked at the Hitachi HGST NAS drives? IMO they're better than the WD Reds without too much of a cost difference: 7200rpm and a vibration sensor vs 5900rpm and no sensor. The warranty is the same as well. I've got 9 of them in my current server: eight 4TBs and a 3TB I use for temporary storage. I previously had a 2TB Hitachi Ultrastar as my temporary drive; I finally replaced it because I needed more space (couldn't say the same for my Seagate drives), but I had around 30k powered-on hours on the drive, with who knows how much data written to and removed from it.


I completely forgot about the HGST ones, but I did have a bad experience with their desktop versions from ages ago. They went wrong so fast that we used to call them the Hitachi "Deathstar" instead of Deskstar.

I assume they got their act together by now, otherwise they wouldn't have such good failure ratings.

Price-wise they're exactly the same as the WD Reds here. It's worth looking into. Whichever one they have enough of in stock will be what I get then.

Regardless of what I get, I'll be doing a triple pass with CCleaner's disk wipe tool on each individual HDD before I run a full HDTune error scan to verify drive health. That should suffice as a burn-in.
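
If any of that burn-in ends up happening from a Linux box rather than the Windows PC, a rough equivalent is to kick off SMART long self-tests with smartmontools. Just a sketch; the /dev/sdb-/dev/sdh device names are placeholders, so check lsblk first and run as root:

```python
# Sketch: start a SMART long self-test on each new drive, then check results later.
# Assumes smartmontools is installed; the device names below are placeholders.
import subprocess

DRIVES = [f"/dev/sd{letter}" for letter in "bcdefgh"]   # adjust to your system

def start_long_test(dev: str) -> None:
    subprocess.run(["smartctl", "-t", "long", dev], check=True)

def health_summary(dev: str) -> str:
    # -H prints the drive's overall SMART health assessment (PASSED/FAILED).
    out = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    return out.stdout.strip().splitlines()[-1]

if __name__ == "__main__":
    for dev in DRIVES:
        start_long_test(dev)
        print(f"started long self-test on {dev}")
    # The tests run inside the drives; come back hours later and check:
    # for dev in DRIVES: print(dev, health_summary(dev))
```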

It's good that you mentioned USB ports on the motherboard. I completely forgot to check if that Intel board had one and almost bought that "header->female USB" cable. You just saved me 5 EUR

EDIT : Motherboard, RAM, RAID card, SAS->SATA cables and the PSU cables are ordered. The rest I'll get at my regular PC shop (Alternate).


Always fun to walk into a store and keep a straight face when you order one piece of this, 2 of that, one of this, and then tell them you need 32TB worth of hard drives.

They had 27 4TB Reds so I went with those. Of course when they were collecting everything they found out that 20 of those were not in the warehouse yet, so I'll have to go back on Monday to pick up #8.

When I heard that they didn't have enough Reds, I almost cancelled them to go for HGST. However then I remembered that with the WD ones at least I can shout at captain_WD if one goes wrong (just kidding).

I guess I'll be attaching these 7 to the PC (I have 2 free SATA ports and a USB3.0 dock, so I can do 3 at a time if needed) and give them the burn-in already. The rest of the parts will take a week or so to arrive anyway.

At the last minute I decided to spend the extra 20 EUR and get the Define R5 instead of the R4.

Oh, and it turned out that someone bought the last of those slim BeQuiet CPU coolers before I got there, so no insane overkill cooler for me.

Instead I ended up getting a regular Shadow Rock 2. A 180W TDP cooler for a 54W TDP processor ... I guess that means it won't overheat too easily.

Obligatory shopping pron shot

 


  • 2 weeks later...

I'm currently in the middle of the NAS build, but now I'm wondering where to put the RAID card.

Right now it's in the PCI-E x16 slot, but that's less than a cm from the CPU cooler. I'd like to avoid heat radiating from the card to the CPU or vice versa.

Seeing as the card is x8, I assume I can use one of the x8 slots. Does it matter which one? If not, I think I'll take the bottom one.

 

 

<pic>

<pic>

Excuse the uncut tie wraps and the crappy cellphone pics. The M8's camera really sucks indoors.

@Eniqmatic : seeing as you have helped me a lot with the drives already, I'll tag you to draw you in here.


If it's an x8 card then there's no problem putting it in that slot; it might help with temps a bit.

System/Server Administrator - Networking - Storage - Virtualization - Scripting - Applications


Sorry if you've mentioned it - I don't have time just now to read the whole thread, but what board and CPU have you gone for?

System/Server Administrator - Networking - Storage - Virtualization - Scripting - Applications


Sorry if you've mentioned it - I don't have time just now to read the whole thread, but what board and CPU have you gone for?

Intel Server S1200V3RPS

Intel i3-4170 (chose it because it supports ECC)


Intel Server S1200V3RPS

Intel i3-4170 (chose it because it supports ECC)

Interesting, where did you manage to source the board from, out of interest?

System/Server Administrator - Networking - Storage - Virtualization - Scripting - Applications


Interesting, where did you manage to source the board from, out of interest?

Sorry for the late reply, I was out shopping.

I sourced it on eBay.

http://www.ebay.co.uk/itm/301650596689?ru=http%3A%2F%2Fwww.ebay.co.uk%2Fsch%2Fi.html%3F_from%3DR40%26_sacat%3D0%26_nkw%3D301650596689%26_rdc%3D1

Haven't tested it yet, it's not that much more work to put it in the case so I figured I'd skip the test run on the cardboard box.


- snip -

Just FYI, those RAID cards have a tendency to get a bit toasty in normal PC environments since they're usually made to rely on a server with lots of airflow for cooling. Not necessarily an issue, but in case you run into any stability problems and such, it might be worth giving the card some extra air. No reason to panic, just thought I'd mention it.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


On 6/15/2015 at 8:47 PM, alpenwasser said:

Just FYI, those RAID cards have a tendency to get a bit toasty in normal PC environments since they're usually made to rely on a server with lots of airflow for cooling. Not necessarily an issue, but in case you run into any stability problems and such, it might be worth giving the card some extra air. No reason to panic, just thought I'd mention it.

Thanks for the info.

I did the following regarding airflow and cooling in general :

- dual 140mm intake fans

- a 140mm exhaust at the rear

- 2x 120mm behind the hard drive cages (to assist the intake fans, the cages do restrict airflow so I'm going push-pull on them)

- Then there's the CPU cooler which also displaces a fair bit of air.

- I swapped the original Fractal PCI covers for Cooler Master ones from my late HAF X, which have much bigger openings in them.

I haven't checked the temps yet, and I have no idea how to do that or what the safe limit is.

Right now I have the fans running at roughly half speed (I never noticed more than 2°C difference between half and full in my PCs), but I'll increase it if there is any sign of trouble.

I'll look into getting another intake on the side. I think I can squeeze another 140mm in there. The hole is in the perfect position for blowing straight onto the card.

 

