
LekroNAS - 16TB raw (8TB usable with two-drive redundancy) expandable NAS build (NOT good looking) using FreeNAS

lekro

Hello everyone.

 

This is my first build log, and my third build. It's just as the title says - 16TB (raw) of storage hooked up to the network. I'm using FreeNAS for the OS.

 

I'll break this up into a few phases so it's easier to see...

 

The name probably isn't going to change.

Lekro = Lekro

NAS = network-attached storage

LekroNAS = Lekro's network-attached storage device that can store large amounts of data reliably.

 

I started using up a lot of my disk space once I had my whole Steam/Origin library downloaded and started ripping CDs. Then another person in the house wanted to listen to my music, so I used Windows file sharing. The result was foobar2000 on the client side hanging whenever my wireless card got overwhelmed or I was doing anything else network-heavy at the same time. I stopped doing that. I ripped more of the CDs in my house, my drive kept filling up... and then TekSyndicate made the NASferatu video! I had never heard of a NAS before, and it sounded interesting. I could store the games I didn't play as often, and my entire music library (140GB and growing), on a computer accessible 24/7, from anywhere? Nice!

 

I decided I needed to have a NAS to store some games, media, and backups. I was quite busy at the time, so I put it off until June. And here we are.

So I spent a couple of hours every day for three weeks researching what goes into a NAS... hardware... software... RAID configuration... and I came up with this.

 

CPU: Intel Core i3-4130T. It's low power, relatively cheap, supports ECC RAM, and is powerful enough for my needs. (obtained)

RAM: Crucial 16GB DDR3 ECC unbuffered RAM (2x8GB). (Relatively) cheap for ECC support. (obtained)

Motherboard: Supermicro X10SLL-F. Does everything I need it to. Not beautiful, but does the job. Expandable with 2 extra DIMM slots. (obtained)

PSU: Seasonic SSR-450RM. Efficient, quiet, way more than enough wattage. (obtained)

Case: Fractal Design Arc Midi R2. Nice looks, 8 drive bays for expandability. (obtained)

HDD: Seagate NAS drive 4TB (VN series) (x4) in RAID-Z2 (which gives two drives of redundancy and ~8TB of usable space) (obtained)

OS: FreeNAS. (free online, flash to USB stick)

USB stick: HP v100w 8GB. Good enough for my use, found it lying around. (8GB+ recommended) (obtained)

 

http://forums.freenas.org/index.php?threads/first-build-need-critique-and-suggestions.21481/

The CPU arrives first. The CPU can be seen on top, with Intel's kinda boring plastic packaging. There's also a CPU cooler underneath.

PDLsXa3.jpg

Next comes the motherboard. It comes in this cheap, generic motherboard packaging. I didn't expect amazing packaging.

k5DlBp1.jpg

Here's the motherboard itself. Green PCB. Again, not beautiful, but it does the job. No overclocking necessary, we're going for stability here.

uXRAuLm.jpg

Six RED SATA cables and the I/O shield.

1y8AeH5.jpg

Next comes the PSU. I haven't opened this one because it was really hot in the mailbox when we picked it up. Hopefully it's still good, but the testing that follows should make sure that it is. (Heat is unavoidable with shipping here xD)

zku9sOc.jpg

Then I got my hard disks in one big box, which had some "packaged air" padding and four little boxes...

q4yWzDb.jpg

I decided to open up one of these to see what was inside. A retail box? No, there's some more packaging and it looks well made. These drives should still be alive.

grnsKxS.jpg

And a closer look at the packaging without the box. Notice that the drive is packaged upside down. I'm not sure if that's normal, but the drive inside the static protection bag looks fine. (I didn't remove any more packaging and put the drive back in its box, awaiting the day that I have all the parts)

DGw0sH8.jpg

On Thursday, I got the RAM. There's not much to say here. ECC, heatsinks really don't matter...

qXM2An8.jpg

And on Friday, I got the case. I thought I'd document the unboxing of this one as well, because packaging (might) matter. But I didn't like how the previous ones looked without a spoiler, so I'll put the unboxing in a spoiler and the pictures of the actual case after that. (Sorry for the lighting; I didn't want to bring the dirty shipping box onto the carpeted floor, where the lighting is better.)

Here we see the dirty, but still intact and in good condition box. There's tape along all seams, so there's no chance that that nasty brown liquid could have come in. (and it didn't)

Ra966Ey.jpg

And here's the case sandwiched between the two styrofoam blocks that protect it.

ERzUP7t.jpg

The case's nice, clean exterior. I won't really be using the front I/O or the external drive bays.

1sxmgDs.jpg

And the inside. The eight drive cages on the right are important for this build.

nRcmwpQ.jpg

 

Those are all the parts we need! I'm thinking of using the stock CPU cooler, or maybe something cheap. I was going to go with the Noctua NH-L9i (not caring about looks again) but then... cost-cutting.

The NAS is now built. I haven't been able to take pictures yet, but I'll try to do so as soon as I can ;)

It was an EXTREMELY quick build, and the Arc Midi R2 made it a breeze. Even with the CPU cooler, it took under thirty minutes.

 

Here are some pictures of the finished build.

I ran memtest86+ for some time. It reported the correct 16GB ECC RAM and ran without errors.

 

I installed Debian Live to a USB stick and booted off of that to run some tests.

I checked to see if all the drives were there. They were.

Then I ran some SMART tests. First, conveyance, then short, then long. The long test failed on the fourth drive, reporting 8 pending sectors.
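For anyone following along, these are smartmontools tests; drive letters depend on how your system enumerates the disks, but the commands look roughly like this (one test at a time per drive, waiting for each to finish):

smartctl -t conveyance /dev/sdx
smartctl -t short /dev/sdx
smartctl -t long /dev/sdx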

I wrote zeroes to each disk with dd:

dd if=/dev/zero of=/dev/sd[a,b,c,d] bs=512K

This didn't reallocate the bad sectors on the fourth disk, so I ran shred on it to write random data instead. The 8 pending sectors were then marked as reallocated.
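You can watch those counters with smartctl too. Assuming the fourth disk shows up as /dev/sdd, something like

smartctl -A /dev/sdd | grep -E 'Current_Pending_Sector|Reallocated_Sector_Ct'

shows the pending and reallocated sector counts.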

 

After doing more research, though, I decided to RMA the drive. So I did.

 

It took a couple of weeks to RMA the drive and get a replacement, but SMART didn't throw any errors upon testing the replacement disk.

 

I then decided to write zeroes to the disks and read them back off with dd.

I first did these operations on each disk sequentially: the test on the first disk, then the second, and so on. Then I did them on all disks at the same time. I used

dd if=/dev/zero of=/dev/sdx bs=512K

for write testing and

dd if=/dev/sdx of=/dev/null bs=512K

for read testing.
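Doing all four disks at once was just a matter of backgrounding the processes, something along the lines of

for x in a b c d; do dd if=/dev/zero of=/dev/sd$x bs=512K & done; wait

(and the same loop with the read command for the read pass).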

 

Open the following spoiler if you want to read what the dd options mean. They are really quite simple and intuitive.

If you really want to read this for some reason, here are the options that I used for dd and why I used them.

Wikipedia has a great article on dd if you want to read more.

 

dd is one of those odd commands that doesn't use flags like -t or -f, but rather makes you set values with an = sign.

 

if is the input file.

of is the output file.

bs is the block size, basically how much to copy at once before going back to grab more data.

 

/dev/zero is a place to get an infinite source of zeroes. (or "null characters")

/dev/sdx is one of my four drives, where x is a, b, c, or d.

/dev/null is a trashcan. (Wikipedia calls it a "black hole")

 

 

Then I flashed FreeNAS to a flash drive as I wished to run badblocks on there and do some miscellaneous setup.
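The flashing itself is nothing fancy; from a Linux machine it's basically a dd of the downloaded image onto the stick. The filename and target device here are placeholders, so double-check yours before running anything like this:

dd if=FreeNAS-x.x-RELEASE-x64.img of=/dev/sdX bs=4M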

 

I set up SSH and a user for myself, and ran four badblocks tests (one per drive, where # is the drive number):

badblocks -ws /dev/ada#

using tmux.
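The tmux part is nothing special either; essentially one detached session per drive, roughly:

tmux new-session -d -s bb0 'badblocks -ws /dev/ada0'
tmux new-session -d -s bb1 'badblocks -ws /dev/ada1'
tmux new-session -d -s bb2 'badblocks -ws /dev/ada2'
tmux new-session -d -s bb3 'badblocks -ws /dev/ada3'

-w is the destructive write-mode test (it wipes the drives, which is fine at this stage), and -s shows progress. Reattach with tmux attach -t bb0 to check on a run.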

 

Meanwhile, I set up emails from FreeNAS. I used the following options to use Google's SMTP server:

From email: root@freenas.local
SMTP server: smtp.google.com
Port: 587
TLS authentication
User/pass required: Yes
Username: [email address]
Password: [password]

CpgdDIh.png

 

Then I ran one last SMART long test.

Installing everything was a breeze through FreeNAS.

 

I set up the four disks in a RAID-Z2 array, giving me ~8TB of space.
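This all happens through the FreeNAS volume manager GUI, but the zpool-level equivalent is something like the following (pool and device names are just for illustration; FreeNAS actually builds the pool on labeled GPT partitions rather than raw disks):

zpool create tank raidz2 ada0 ada1 ada2 ada3
zpool status tank

Four 4TB drives in RAID-Z2 means two drives' worth of parity, so roughly 2 x 4TB = 8TB usable before ZFS overhead.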

 

Users and groups were simple and straightforward, as was the creation of shares.

 

But it looks like I'm going to need a new WLAN card to actually see what the NAS is capable of. Right now, I'm getting ~700KB/s.

I'm looking at the ASUS PCE-AC68. Although my router isn't that fast, it'll certainly come in handy when Google Fiber comes around to this part of town...

Phase 6: Critical review

 

Current status: I'm running some tests on the disks to make sure EVERYTHING is perfectly perfect.

 

I've also entered this build here. It's not aesthetically pleasing or gaming-oriented, so I'm not sure how it'll turn out. But it's worth a try.

 

 

@lekro - for doing all the research and building. (hehe)

@_ASSASSIN_ - for insight on the Arc Midi R2, general building stuff... and cable management!

FreeNAS documentation and forums - a great source for information if using FreeNAS. Helped me make hardware choices.

 

(... more will come when it comes...)

 

Camera: Canon t2i

Tripod: Velbon 7000 (not like anyone cares)

Images hosted by imgur


@lekro If I were you, I would invest in an LSI 9211-8i. You can get them on eBay for cheap and flash them to IT mode, which turns the card into a relatively cheap (but still good) HBA.

Then you can use some SAS breakout cables and pass an additional 8 disks to the host. More if you get an SAS expander.

 

This would give you some additional flexibility among other things.


Also, when you're done, don't forget to post in the 10TB+ storage thread


@MrBucket101 Thanks for the recommendation! I've been looking at HBAs for if/when I decide I need more storage. The IBM M1015 seems like a good, cheap option, as does the LSI 9211-8i. For now, though, I'm running with 4 disks.

 

In the end, I won't really get 16TB of storage, I'll be getting 8TB because I'm running it in RAID-Z2. So maybe I should make the thread title have 8TB instead. But it is capable of more than that if more drives and RAM are added.


Oh, this looks like something I might be interested in (maybe @MG2R, @looney, @Vitalius, @wpirobotbuilder and @IdeaStormer as well? Just in case ;)).

 

@lekro If I were you, I would invest in an LSI 9211-8i. You can get them on eBay for cheap and flash them to IT mode, which turns the card into a relatively cheap (but still good) HBA.

Then you can use some SAS breakout cables and pass an additional 8 disks to the host. More if you get an SAS expander.

 

This would give you some additional flexibility among other things.

If you ever need more storage, Fractal Design sells their drive cages as spare parts, so you could buy an additional drive cage and fit a few extra drives into the case if you really needed to. Just as a long-term FYI (I think one of the builds in the 10TB+ topic has this feature).

 

Also, when you're done, don't forget to post in the 10TB+ storage thread

 

Yes, definitely.

 

In the end, I won't really get 16TB of storage, I'll be getting 8TB because I'm running it in RAID-Z2. So maybe I should make the thread title have 8TB instead. But it is capable of more than that if more drives and RAM are added.

How you advertise it is up to you, we don't really have rules about this as far as I know (for the 10TB+ topic we only count raw space though).


I had no idea there was an i3 with ECC support. Definitely going to remember that.

 

Here's the motherboard itself. Green PCB. Again, not beautiful, but it does the job. No overclocking necessary, we're going for stability here.

Always a good way to go.

 

In case you add more storage, I recommend adding another 16GB of memory for stability and performance.

 

Subbed.


I had no idea there was an i3 with ECC support. Definitely going to remember that.

 

Always a good way to go.

 

In case you add more storage, I recommend adding another 16GB of memory for stability and performance.

 

Subbed.

 

I thought all the E3 Xeon boards were i3-compatible and supported ECC?


Update: I got my drives. Pictures of one (well, of its packaging) are above. It's not much, but it's an update!

 

If you ever need more storage, Fractal Design sells their drive cages as spare parts, so you could buy an additional drive cage and fit a few extra drives into the case if you really needed to. Just as a long-term FYI (I think one of the builds in the 10TB+ topic has this feature).

How you advertise it is up to you, we don't really have rules about this as far as I know (for the 10TB+ topic we only count raw space though).

That's good to know. If I ever get there, I'll look at that option. But I'm not sure I want more than two sets of 4x4TB. (RAM limitations of MB and CPU)

Ah- so I do qualify for >10TB. Nice. I just changed the thread title to include both (might be clunky but works)

 

I had no idea there was an i3 with ECC support. Definitely going to remember that.

 

Always a good way to go.

 

In case you add more storage, I recommend adding another 16GB of memory for stability and performance.

 

Subbed.

Yeah, some i3's support ECC, but not all. The one I found for cheap happened to support it. 

http://ark.intel.com/products/77481/Intel-Core-i3-4130T-Processor-3M-Cache-2_90-GHz

 

Yep, that's what I'm going to do. Unfortunately, with that, I don't really feel like adding another 16TB set. (Performance/stability would suffer if I added a third set?)

 

I thought all the E3 Xeon boards were i3-compatible and supported ECC?

They should be. But some i3's don't support ECC. Quoting someone from the FreeNAS forums:

 

 


For an Intel-based system to be ECC compatible, three things must be true.

 

1. The CPU must support ECC.

2. The motherboard [chipset] must support ECC.

3. The RAM must support ECC.


 

They should be. But some i3's don't support ECC. Quoting someone from the FreeNAS forums:

 

Just verified from the Intel site, Ivy Bridge i3's and newer all support ECC. Ref: (lowest end i3's selected for this comparison) http://ark.intel.com/compare/46472,53423,65694,77487

 

Still no sign of the new E5-FPGA though... want to see its specs.


Just verified from the Intel site, Ivy Bridge i3's and newer all support ECC. Ref: (lowest end i3's selected for this comparison) http://ark.intel.com/compare/46472,53423,65694,77487

 

Still no sign of the new E5-FPGA though... want to see its specs.

That makes it easier. The i3's ECC support is nice because for this sort of application, I don't really think we need a Xeon E3.


I had no idea there was an i3 with ECC support. Definitely going to remember that.

 

Always a good way to go.

 

In case you add more storage, I recommend adding another 16GB of memory for stability and performance.

 

Subbed.

 

I believe ZFS needs 1GB of RAM for every TB of storage you use. Is that correct?


I believe ZFS needs 1GB of RAM for every TB of storage you use. Is that correct?

That is the widely accepted rule of thumb.

 

I suggested adding another 16GB because his only option is to add another 16GB. Can't add 2x4GB, only 2x8GB. He could add a single 8GB stick but it's slightly better for compatibility to add a kit that was tested in dual-channel mode.


I believe ZFS needs 1GB of RAM for every TB of storage you use. Is that correct?

That is the widely accepted rule of thumb.

 

I suggested adding another 16GB because his only option is to add another 16GB. Can't add 2x4GB, only 2x8GB. He could add a single 8GB stick but it's slightly better for compatibility to add a kit that was tested in dual-channel mode.

What he said^.

For ZFS, more RAM is better and almost always preferred.

The general warnings on the FreeNAS forums regarding ZFS and RAM, as I understand them, are as follows: ZFS can work with less RAM (as Alpen below mentions), but you can lose your entire storage array (all the data) if you run your system with too little RAM for too long on FreeNAS using ZFS.

It's actually something that's well documented as an issue, but is not very well understood. However, people kept losing their arrays and coming to the forums. It happened so much that they basically made a sticky saying not to bother posting if you were using less than 8GB of RAM because you didn't follow the minimum requirements and that adds a complex layer of troubleshooting that just isn't worth hashing through because it's easily avoidable by having more than 8GB of RAM.

Now, Less than 8GB of RAM =/= Less than 1GB per TB of RAM. They are two different things, but their issue is similar.

The reason this problem isn't very well understood is because some people go years with very little RAM and never have an issue (which usually comes up as a kernel panic and quickly devolves to a corrupted pool) while others can't run their system with less than X GB of RAM without having kernel panics every couple of minutes. Yet some people can have constant kernel panics and never lose their pool, while others can have a single one after years of a stable and healthy system and lose everything.

That's why. It's like a light switch. It either happens or doesn't. There's no gradual corruption. One day it's there, the next it's gone. And there's little to warn you about it. So it's not worth the risk.

Then there's the performance hit you get from not enough RAM (again, as Alpen mentions), but that's not nearly as vital though it is very nice.


Ah- so I do qualify for >10TB. Nice. I just changed the thread title to include both (might be clunky but works)

Yes, you do indeed. :D

Yep, that's what I'm going to do. Unfortunately, with that, I don't really feel like adding another 16TB set. (Performance/stability would suffer if I added a third set?)

Since you're running ZFS on FreeNAS, I'll let Vitalius speak on that one. Personally I've been running ZFS on Linux for about a year now, and from my experiences with that I would be rather surprised if adding a third 16 TB set would lead to instabilities and such. Depending on your amount of RAM, speeds might suffer a bit, but that's about it (again, FreeNAS might be a bit different on this).

 

I believe ZFS needs 1GB of RAM for every TB of storage you use. Is that correct?

That's the usual recommendation. FreeNAS folks seem to be very adamant about it. While I have indeed come across the 1 GB / TB recommendation with ZOL as well, I haven't really seen anyone be truly strict about it. I ran a ZOL setup with 4 GB of RAM and 17 TB of raw storage space for about nine months (so, way below the recommended amount of RAM). The only negative effect I could notice was that when I was doing something on that system which started eating up lots of RAM, ZFS speeds would noticeably drop (ZOL's memory management will take up as much RAM as it can up until a defined limit, but free it again when other processes request RAM). I never encountered any reliability or stability issues though, it only seemed to affect speed (and for the most part, the pool was still fast enough to saturate a gigabit link, so I didn't really care too much because that was a bottleneck I couldn't get around anyway).

I'm not sure why FreeNAS seems to be stricter about this. Maybe there are actual differences in memory management between ZOL and ZFS on FreeNAS, maybe ZOL just isn't popular enough yet for people to really have noticed this issue, maybe it's something completely different, not sure.

Oh, forgot, @Ahnzh might be interested in this as well, sorry for forgetting you earlier. ;)


For ZFS, more RAM is better and almost always preferred.

The general warnings on the FreeNAS forums regarding ZFS and RAM, as I understand them, are as follows: ZFS can work with less RAM (as Alpen below mentions), but you can lose your entire storage array (all the data) if you run your system with too little RAM for too long on FreeNAS using ZFS.

It's actually something that's well documented as an issue, but is not very well understood. However, people kept losing their arrays and coming to the forums. It happened so much that they basically made a sticky saying not to bother posting if you were using less than 8GB of RAM because you didn't follow the minimum requirements and that adds a complex layer of troubleshooting that just isn't worth hashing through because it's easily avoidable by having more than 8GB of RAM.

Now, Less than 8GB of RAM =/= Less than 1GB per TB of RAM. They are two different things, but their issue is similar.

The reason this problem isn't very well understood is because some people go years with very little RAM and never have an issue (which usually comes up as a kernel panic and quickly devolves to a corrupted pool) while others can't run their system with less than X GB of RAM without having kernel panics every couple of minutes. Yet some people can have constant kernel panics and never lose their pool, while others can have a single one after years of a stable and healthy system and lose everything.

That's why. It's like a light switch. It either happens or doesn't. There's no gradual corruption. One day it's there, the next it's gone. And there's little to warn you about it. So it's not worth the risk.

Then there's the performance hit you get from not enough RAM (again, as Alpen mentions), but that's not nearly as vital though it is very nice.

This is exactly where my research ended up, too. If I were going to expand, I'd add an HBA (M1015 or 9211-8i), another one of those 16GB kits, and then four more 4TB drives. That fills up all eight drive cages in my Arc Midi R2. If I were to add yet another 16TB set, ending up at 48TB, I'm not sure how that would work with the 32GB RAM cap of my CPU/motherboard. So I'm going to go with the rule of thumb and not expand beyond 32TB. I don't even think I'll need more than 8TB of storage, but if I do, the expansion option is there without replacing the CPU, motherboard, or case.

 

I could, I guess, add a set of 4 500GB drives or something to the 32GB RAM... but I don't really feel like that's necessary.

 

That's the usual recommendation. FreeNAS folks seem to be very adamant about it. While I have indeed come across the 1 GB / TB recommendation with ZOL as well, I haven't really seen anyone be truly strict about it. I ran a ZOL setup with 4 GB of RAM and 17 TB of raw storage space for about nine months (so, way below the recommended amount of RAM). The only negative effect I could notice was that when I was doing something on that system which started eating up lots of RAM, ZFS speeds would noticeably drop (ZOL's memory management will take up as much RAM as it can up until a defined limit, but free it again when other processes request RAM). I never encountered any reliability or stability issues though, it only seemed to affect speed (and for the most part, the pool was still fast enough to saturate a gigabit link, so I didn't really care too much because that was a bottleneck I couldn't get around anyway).

I'm not sure why FreeNAS seems to be stricter about this. Maybe there are actual differences in memory management between ZOL and ZFS on FreeNAS, maybe ZOL just isn't popular enough yet for people to really have noticed this issue, maybe it's something completely different, not sure.

I really am not sure about ZOL... never used it, never had enough drives. I've only had max. 2 drives in a system before. Maybe the different implementation has different hardware requirements or does things in a different way?

 

 

 

I have another question... hard drive placement in the case. I'm guessing it's better to put them in every other cage, like this:

 

Cage 0: 4TB

Cage 1: (empty)

Cage 2: 4TB

Cage 3: (empty)

Cage 4: 4TB

Cage 5: (empty)

Cage 6: 4TB

Cage 7: (empty)

 

... to reduce vibration of drives affecting each other. Or does it really not matter? If I put them right next to each other, when/if I need to add more drives, it'll probably be neater.


I really am not sure about ZOL... never used it, never had enough drives. I've only had max. 2 drives in a system before. Maybe the different implementation has different hardware requirements or does things in a different way?

From what I've read, the implementation of ZFS itself between the BSD and the Linux branch actually seems to be quite similar underneath in many areas. ZFS on Linux even needs the "Solaris Porting Layer", which, according to what I've read, basically makes ZFS think it's running on a Solaris system (OK, I'm sure that's a bit of an oversimplification, but for us mere mortals who don't understand every single last detail of the ZFS code, that seems to be an acceptable explanation from what I've read).

If somebody has proper knowledge of this and I'm wrong, feel free to correct me of course, but from what I've been able to determine, at least for the time being, this seems to be the situation. AFAIK, things won't stay like this forever, the plan is to eventually properly optimize the ZOL branch for Linux, but since it's still quite far off from a 1.0 release, that will need to wait for quite a while yet I think.

Anyway, what was my point? Right, hardware requirements. For the time being (i.e. until there's more data available) I'll just go with "I don't have a reliable answer." for FreeNAS' stricter hardware requirements (well, at least stricter recommendations).

It does indeed seem like it's not just made up out of thin air though, Vitalius has pointed me to a few threads on the FreeBSD forums where the trouble seems to have come from not enough RAM. I haven't been able to find similar horror stories with ZOL (yet), except those involving deduplication, but that's a very different beast. But as said, that could just be because ZOL has a much smaller userbase than FreeNAS (I don't have any actual numbers though, just a guess).

I have another question... hard drive placement in the case. I'm guessing it's better to put them in every other cage, like this:

 

Cage 0: 4TB

Cage 1: (empty)

Cage 2: 4TB

Cage 3: (empty)

Cage 4: 4TB

Cage 5: (empty)

Cage 6: 4TB

Cage 7: (empty)

 

... to reduce vibration of drives affecting each other. Or does it really not matter? If I put them right next to each other, when/if I need to add more drives, it'll probably be neater.

The HDDs are pretty well dampened in that case IIRC, so I don't really see this as an issue TBH. Even if not, NAS drives should have some decent resilience against vibrations (although probably not quite as good as proper server drives). I'm running my server without any dampening at all with WD Red drives. It hasn't been up and running very long yet, but so far at least no issues.

And in any case, before a few years ago, we rarely used vibration dampening for HDDs, and I don't recall vibrations being that huge of an issue back then (I know they can be, or at least I've read this from datacenter blogs and such, but personally I've never really had multiple premature HDD failures which I could have attributed to vibrations).


-snip-

So I guess I'll just follow that rule of thumb. Better than finding out my pool is gone later.

And hard drives all next to each other it is. It makes cable routing easier, especially after upgrading.


  • 2 weeks later...

A bit more of an update after a long time. We've built the NAS (big thanks to @_ASSASSIN_ for excellent cable management skills), and I proceeded to software installation and memory/disk testing. (Sorry, I don't have any pictures!)

 

memtest86+ ran fine, detected ECC and 16GB of RAM. I ran it for a couple of hours. The BIOS showed everything properly.

 

I installed Debian Live on a USB stick to test the disks. I started with SMART tests on all the drives: conveyance, short, long.

 

All the tests passed except for the long test on the fourth disk, which failed with 8 bad sectors. These showed up as pending for remapping. Immediately I wondered if I should return the drive, but I did some searching and apparently it's correctable (although I'm still not sure whether I should return the disk and get a new one or not).

 

So I used dd to write zeroes to the bad sectors, but that didn't change anything.

I then used dd to write zeroes to the entire disk. That didn't change anything either.

Then I used shred to write random data over the entire disk. This, finally, made the SMART pending-for-remapping attribute go to 0 and the remapped sector count go to 8. But I'm STILL not sure whether or not to trust the disk.
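For reference, the targeted version of this looks something like the following. The LBA comes out of the SMART self-test log, and the values here are placeholders rather than the actual numbers from my drive (this assumes 512-byte logical sectors, and it obviously destroys whatever was in that sector):

smartctl -l selftest /dev/sdd
dd if=/dev/zero of=/dev/sdd bs=512 count=1 seek=<LBA_of_first_error>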

 

But anyway, that's what all this testing is for - to make sure problems like this don't show up later, after I've made my zpool and put all my data on it...

 

I also benchmarked all the disks. They were within +- 10 MB/s of each other, with an average read speed of 140 MB/s and an average write speed of 100 MB/s. Latency to each disk was ~12ms.
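For a quick sequential read number on Linux, hdparm is an easy way to get something comparable (not necessarily the tool behind the figures above, just a convenient sanity check; needs root):

hdparm -t /dev/sdx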

 

Next steps: possibly replace the fourth disk, then stress the disks with dd and badblocks... I still need to do research on badblocks, but dd seems simple enough.


Update.

 

I used shred to fill the fourth disk up with random data (although I guess I could have just done dd if=/dev/urandom of=/dev/sdd).

 

 

The fourth disk's 8 bad sectors now show up as remapped, but I decided not to risk it and went for an RMA/return instead. So that probably means another one or two weeks' delay.


  • 4 weeks later...

Ok! Finally, an update! I got a replacement fourth disk from newegg. I ran SMART conveyance, short, long tests and found no issues. Now I am proceeding to use dd and badblocks to stress (?) or benchmark (?) the disks.

 

I started with the dd write tests. I filled the disks with zeroes using the following command:

dd if=/dev/zero of=/dev/sdx bs=512K

Here are the speeds that they ran at.

 

Drive 1: 142 MB/s

Drive 2: 140 MB/s

Drive 3: 140 MB/s

Drive 4: 143 MB/s

 

I'll easily saturate a gigabit link with these kinds of speeds. But NOTE that these are sequential write/read speeds - ideal-case numbers with no seeking involved. I'll use something else for seek tests.

 

Then I ran some read tests. I just read all the zeroes back off.

dd if=/dev/sdx of=/dev/null bs=512K

Drive 1: 143 MB/s

Drive 2: 140 MB/s

Drive 3: 141 MB/s

Drive 4: 143 MB/s (EDIT show results of final sequential dd test)

 

Each of the drives takes about 8 hours to complete each test.

 

That's really all. Not much, but I thought I'd update this thread after a couple of weeks of nothing.


yo you need to take some photos of the NAS and show off my nice cable management I did ;)


yo you need to take some photos of the NAS and show off my nice cable management I did ;)

hehe ;)


So here's a picture of what the dd tests look like.

Just commands and output: 8 hours of the HDD indicator LED staying on, and when it turns off, the test is done.

FqIbFtR.png

And you can check the progress of any one of these tests at any time by sending a USR1 signal to the dd process. Just grab the process ID of dd with top and give it a USR1 signal using

sudo kill -USR1 pid

where pid is the process ID of dd.

YPixr9t.png
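If you'd rather not poke it by hand, you can fire that signal on a timer; GNU dd prints its progress to its own terminal each time it gets USR1. Something like:

watch -n 60 'sudo kill -USR1 $(pgrep -x dd)'

This assumes the only dd processes running are the tests themselves.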

Today, I ran some dd tests simultaneously. I just wrote some zeroes and read them all back off on all four drives at the same time.

Speeds are similar to those of the sequential tests and everything looks good. I'll need to do some seek testing - perhaps with iozone - after this, so time to research!

 

4mh1Vmy.png
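I haven't settled on an exact iozone invocation yet, but something in this direction should cover sequential writes plus random read/write. The file location and sizes are placeholders and would need tuning (large enough that the test isn't just hitting the ZFS cache):

iozone -i 0 -i 2 -s 4g -r 128k -f /mnt/tank/iozone.tmp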

 

Pictures of the NAS will be up soon! (I need to move around some things to get a good angle on the machine's internals, too lazy to do that right now)


yo you need to take some photos of the NAS and show off my nice cable management I did ;)

Done!

 

Here are some pictures of the NAS. I didn't want to move it around much, so the light isn't great, but everything (necessary) can be seen. I also need to improve my photography skills ;P

 

(Note the clean cable management by _ASSASSIN_ xD)

dgZWJsp.jpg

I blurred out the side cable mess. Need to clean that up sometime ;P

Basically everything (except CPU) can be seen in here. 

CMYRanH.jpg

Here's that internal USB port that I'm going to try to boot off of so I won't have a flash drive sticking out the back.

Y18I6NB.jpg

Top of CPU cooler and DIMMs.

1UmaBKI.jpg

PSU.

5QHWBUU.jpg

A view with the window on.

rNQPf4D.jpg

I'll be using this USB stick. It's an old one, but still works, and that's what matters.

