
FreeNAS Solution with 10GbE

Ahnzh

So today I ordered the parts for a NAS solution based on FreeNAS and 10GBASE-T (copper-based 10 Gigabit Ethernet).

 

While it may seem a bit much for the average user, I decided to go with it for several reasons. To understand them, though, I have to start at the very beginning.

 

I've got a Mac Pro, one of the new ones: pretty expensive, with a very small SSD. Since I work with large media files, an upgrade was inevitable in some form, and constantly stressing that SSD isn't the optimal solution, I would say. I also still have to transfer lots of data to other devices.

 

I've got a Windows workstation as well, since some programs just don't run on Macs. That isn't much of a problem, though: when I'm rendering on one machine I work on the other, and as long as I don't have any render nodes I actually like that arrangement.

 

My thought process was: I need more storage space, it needs to give me at least 300 MB/s transfer speeds (I would prefer something around 400+ MB/s), so a potent RAID solution is needed. There were a few options (a rough cost comparison sketch follows the list):

  1. Putting HDDs in the Windows workstation and getting an external Thunderbolt RAID enclosure (Promise Pegasus2 R4).
    That would be around $2,500, and I would still have to get data from one workstation to the other, so I'd need a NAS as well. Transfer speeds would be approximately 120 MB/s; that's not incredibly fast, but it would do the job. Add another $2,000, for $4,500 in total. Ouch.
  2. A round-robin LAG (aggregating LAN ports) would be an alternative that could work with a performance-oriented NAS. Round-robin can lead to starved connections, though, and in general causes lots of problems in your network. Additionally, Macs do not support round-robin LAGs. Bummer.
  3. 10GbE with iSCSI. New 10GBASE-T switches from Netgear are getting cheaper and cheaper; the XS708E is at around $850. Network cards are getting cheaper as well, starting at around $200 for PCIe cards, though the Thunderbolt adapter costs $1,000, which is quite hefty. Building a NAS, or buying one with the option of adding a 10GbE card, would complete the solution. With such an extreme network, not using iSCSI would be a waste, but I don't trust Celeron or Core i3 powered NAS appliances to handle block-level I/O, so in that case I would have to build the NAS myself.
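To put rough numbers next to those options (a quick back-of-the-envelope sketch, not a quote; the prices and speeds are just the estimates above, and the round-robin NAS price is a placeholder):

```python
# Rough comparison of the three options, using the ballpark figures quoted above.
# All prices in USD, speeds in MB/s. These are my estimates, not vendor numbers.

options = {
    "1: Thunderbolt DAS + separate NAS": {"cost": 2500 + 2000, "speed": 120},      # box-to-box traffic still goes over GbE
    "2: Round-robin LAG (4x 1GbE)":      {"cost": 2000,        "speed": 4 * 120},  # placeholder NAS price; unsupported on Macs anyway
    "3: 10GbE + iSCSI, self-built NAS":  {"cost": 2200 + 2300, "speed": 550},      # network gear + NAS parts (lists below)
}

TARGET = 300  # minimum required transfer speed in MB/s

for name, opt in options.items():
    verdict = "meets" if opt["speed"] >= TARGET else "misses"
    print(f"{name}: ~${opt['cost']}, ~{opt['speed']} MB/s, {verdict} the {TARGET} MB/s target")
```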

So I started planning.

 

Network

  • Netgear XS708E: $850
  • 2x Intel X540-T1: $250 (got these really cheap)
  • Sonnet 10GBASE-T Thunderbolt adapter: $920

That makes about $2,200 for the network.

 

 

NAS

  • Intel Xeon E3-1265L v3: $270
  • iStarUSA S-917 case: $180 (awesome case; no built-in 3.5" or 2.5" drive bays, just 7x 5.25" bays across the whole front)
  • 2x Chenbro SAS/SATA HDD cage, 3x 5.25" to 5x 3.5": $120 each
  • ASRock Rack E3C224D4I-14S motherboard: $300 (onboard 8-port SAS in a mini-ITX form factor, plus 4 RAM slots and a broad range of OS support; I am amazed!)
  • Icy Dock ToughArmor 5.25" to ODD + 2x 2.5" SAS/SATA bay: $100
  • be quiet! Straight Power E9 480W with cable management (server/workstation certified): $100
  • 8x Seagate Constellation ES.3 2TB: $150 each
  • 4x WD XE 300GB SAS 10,000 RPM HDD: $180 each
  • 4x 8GB ECC RAM: $85 each. ZFS wants roughly 1 GB of RAM per TB of storage, plus RAM for the system and for any additional services, and the more the better (see the sizing sketch below).

Sums up to $2,300, or $980 without the HDDs.
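The RAM line above follows the usual rule of thumb; here is the sizing math spelled out (a minimal sketch, and the base/service allowances are my own rough assumptions, not official numbers):

```python
# Rough ZFS RAM sizing: ~1 GB of RAM per TB of raw pool capacity, plus a base
# allowance for the OS and some headroom for extra services (assumed values).

raw_tb = 8 * 2.0 + 4 * 0.3     # 8x 2 TB Seagate + 4x 300 GB WD XE = 17.2 TB raw
base_os_gb = 4                 # assumed allowance for FreeNAS itself
services_gb = 4                # assumed headroom for iSCSI and other services

recommended_gb = raw_tb * 1.0 + base_os_gb + services_gb
installed_gb = 4 * 8           # the 4x 8 GB ECC sticks from the list

print(f"raw capacity: {raw_tb:.1f} TB")
print(f"rule-of-thumb RAM: ~{recommended_gb:.0f} GB, installed: {installed_gb} GB")
```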

 

FreeNAS will be the OS. The four 10K RPM HDDs will run as a RAID 5 iSCSI target, while the eight Seagate HDDs will run in RAID-Z2. All in all this gives me good flexibility: 12 TB of storage will be enough, and if really needed the NAS will be expandable.
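The usable capacity works out roughly like this (a small sketch counting only parity overhead, ignoring filesystem metadata and TB-vs-TiB differences):

```python
# Usable space of the two planned arrays, counting only parity overhead.

def raidz2_usable(n_disks: int, disk_tb: float) -> float:
    # RAID-Z2 reserves two disks' worth of capacity for parity
    return (n_disks - 2) * disk_tb

def raid5_usable(n_disks: int, disk_tb: float) -> float:
    # RAID 5 reserves one disk's worth of capacity for parity
    return (n_disks - 1) * disk_tb

bulk_tb = raidz2_usable(8, 2.0)    # 8x 2 TB Constellation ES.3
iscsi_tb = raid5_usable(4, 0.3)    # 4x 300 GB WD XE

print(f"RAID-Z2 bulk pool: ~{bulk_tb:.0f} TB usable")
print(f"RAID 5 iSCSI target: ~{iscsi_tb:.1f} TB usable")
```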

 

I expect the iSCSI throughput to be around 550 MB/s with high IOPS, while the RAID-Z2 throughput should be around 700-800 MB/s with lower IOPS. I already had the Xeon at home, so I tested a bit with 1GbE and saw CPU usage of roughly 25-35% with four clients each pushing a full 130 MB/s to the server over an LACP bond (with 8x 1TB Seagate Constellation HDDs in RAID-Z2), so the Xeon shouldn't be the bottleneck and will do just fine. It all comes down to the HDDs.
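Extrapolating from that 1GbE test, very naively (a back-of-the-envelope sketch; it assumes CPU load scales linearly with throughput, which it won't do exactly):

```python
# Linear extrapolation of CPU load from the 1GbE LACP test to the 10GbE targets.

test_throughput_mbs = 4 * 130   # four clients each pushing ~130 MB/s over the LACP bond
test_cpu_fraction = 0.35        # worst case observed in the test (25-35%)

cpu_per_mbs = test_cpu_fraction / test_throughput_mbs

for target_mbs in (550, 800):   # expected iSCSI and RAID-Z2 throughput
    estimate = target_mbs * cpu_per_mbs
    print(f"{target_mbs} MB/s -> roughly {estimate:.0%} CPU (linear estimate)")
```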

 

For me that's really good performance at a reasonable price.

 

Once everything is here I will post some pics and keep you up to date with my experiences, plus some testing, since it's my first step into 10GbE networking.

 

I would love to hear some opinions and what I could have done better.

 

 

Ahnzh

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Welcome to the forums! :)

I'm a bit pressed for time at the moment, so I don't have time to delve into details for now, but it does look interesting, so bring on the pics!

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


You may find that the 10 GbE was a waste. It's always best to build the system, then do a performance analysis to see if your storage can match the network throughput.

 

Your specs look like they'll be more than a match for GbE, but depending on your workload you might not utilize much of the 10GbE. Streaming workloads will definitely use a good chunk of it.

 

Out of curiosity, why the WD XE series drives? What do you need that they offer? Also, where are you going to hook them up? Your SAS ports are full and that motherboard only has two onboard SATA ports.

 

Let me know how that switch works out, I've been eyeing it myself.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking



I believe he went with the XE series because he wants performance. They have TLER too, so they are fine for parity RAID.

I agree about the 10 GbE. His Mac can't handle that anyway unless he uses Thunderbolt, and how he will connect to 10GbE with it I don't know (I'm not a Mac guy).


RAID-Z1 is RAID 5 for FreeNAS; RAID-Z2 is RAID 6. Just so you know.

 

I personally really doubt those speeds. I don't know why, I just do. Perhaps it's the disbelief of it (it is epic fast).

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and it's relation to OS X by Builder.

 

 



 

The ASRock website states it has 4 SATA ports: LINK

 

And the WD XE has a third of the access latency. If you factor in that 10GbE has slightly higher latency than 1GbE when pushing a lot of data, and that the iSCSI device will also be used for smaller files, I see an advantage in using the WD XE HDDs for the iSCSI target. I want to replace a DAS with it, so performance should be comparable to a DAS. Of course I will post detailed test results for the switch.

 

 

And as I said, I tested the storage ... sort of. I already have 1TB HDDs and the Xeon at home, and I got a combined throughput of around 500 MB/s through 4 NICs in an LACP LAG with 4 clients. I see no reason why bigger HDDs would deliver less than that to a single or dual client over 10GbE; if anything I'd say I will get more out of it. The bottleneck wasn't the I/O, nor the CPU, but the 4x 1GbE.

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Wow, that's quite a build. ES.3s, 10K RPM drives, nice...

 

One thing though: I've been looking at that mobo and searching for info on the SAS controller, and so far I have found people complaining it's not up to par for storage, or at least big storage. I will have to find the links again, but I'm sure you can search that controller number and find the same info. It's holding me back from buying that mobo; I might get a cheaper one without SAS and add a separate SAS card.

 

For the network you might be better off budget-wise sticking to 1G ports and a better switch, with some type of NIC teaming in place as well. 10G is da bomb, but you need a 10G switch that can handle it properly and not treat ports as 1G, so look over reviews of the switch if you do go 10G. I have also found some cheaper 10G cards; PM me if you're seriously interested. They are off-brand, but they work.

I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.



 

For now, thanks for the offer, but from experience I can tell that OpenBSD works best with Intel NICs, and I really don't want unexpected problems to arise. From what I read, there used to be problems when the controller was still new. It has been on the open market since mid-2012, if I remember correctly, and since Asus builds it onto expansion cards and Gigabyte, Supermicro and ASRock Rack put it on their motherboards, I really think that was merely a driver problem. And I haven't heard anything bad about it for at least half a year. The main reason to pick this board is that it's the only mini-ITX board with 4 RAM slots (ZFS).

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


The ASRock website states it has 4 SATA ports: LINK

I see that, but look at the motherboard; there are only two on there.

 

 

For now, thanks for the offer, but from experience I can tell that OpenBSD works best with Intel NICs

Yup.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking


I see that, but look at the motherboard; there are only two on there.

 

 

Yup.

@Ahnzh

From the manual for that motherboard:

 

SATA Controller

Intel® C224: 4 x SATA3 6.0 Gb/s (from the mini-SAS connector), 2 x SATA2 3.0 Gb/s; supports RAID 0, 1, 5, 10 and Intel® Rapid Storage

LSI 2308: 8 x SATA 6.0 Gb/s (from the mini-SAS connectors; supports RAID 0, 1, 5, 10)

The Constellation drives have TLER as well, as far as I know. And I did state that I am buying a Thunderbolt 10GBASE-T Ethernet adapter for the Mac Pro: LINK.

 

And as long as the CPU is powerful enough to keep up with the parity calculations, there shouldn't be a problem. I was testing with an Adaptec HBA card; the onboard LSI chip brings 500+ MB/s to the table, and since I wanted 400+ MB/s either way, it turns out that will be enough.

Ah, right. I'm dumb. Ignore me. And yes, I know they do. :P

True.

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and it's relation to OS X by Builder.

 

 



 

 

The Constellation drives have TLER as well, as far as I know. And I did state that I am buying a Thunderbolt 10GBASE-T Ethernet adapter for the Mac Pro: LINK.

 

And as long as the CPU is powerful enough to keep up with the parity calculations, there shouldn't be a problem. I was testing with an Adaptec HBA card; the onboard LSI chip brings 500+ MB/s to the table, and since I wanted 400+ MB/s either way, it turns out that will be enough.

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Your SAS ports are full and that motherboard only has two onboard SATA ports.

 

 

The ASRock website states it has 4 SATA ports: LINK

I count 2 × SATA ports and 3 × Mini-SAS on that picture (four SATA ports each), giving me a total of 14 ports (unless I'm missing something?).

I think the confusion might arise from the fact that one of the mini-SAS connectors is hooked up to the C224 chipset, along with the two normal SATA ports? At least it sounds like that to me.
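Spelled out, the tally looks like this (trivial, but it makes the source of the 14 ports explicit; the drive count is simply the parts list from the OP):

```python
# Tallying the on-board ports from the manual excerpt quoted above.

sata2_onboard = 2        # the two visible SATA2 ports on the Intel C224
c224_minisas = 1 * 4     # one mini-SAS off the C224 -> 4x SATA3
lsi_minisas = 2 * 4      # two mini-SAS off the LSI 2308 -> 8x SAS/SATA

total_ports = sata2_onboard + c224_minisas + lsi_minisas
drives_planned = 8 + 4   # 8x Seagate Constellation + 4x WD XE from the OP's list

print(f"total on-board ports: {total_ports}, drives planned: {drives_planned}")
```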

EDIT:

Sigh, too slow for @Vitalius. :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Intel® C224: 4 x SATA3 6.0 Gb/s (from the mini-SAS connector), 2 x SATA2 3.0 Gb/s; supports RAID 0, 1, 5, 10 and Intel® Rapid Storage
LSI 2308: 8 x SATA 6.0 Gb/s (from the mini-SAS connectors; supports RAID 0, 1, 5, 10)

 

 

The visible SATA ports are just the 3 Gb/s ones.

 

 

/edit: too late as well, my bad

 

//edit: I forgot to mention that it's a gamble whether the mobo will fit into the case. It's extended mini-ITX, and I had to look for a long time to find a side-mounted mini-ITX case with flexible drive bays. The board is a bit longer than standard and I hope the space will be enough; time will tell. If it doesn't fit, I'll have to get the case's bigger brother.

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Once you're done, don't forget to stop by our storage showoff topic; it sounds like you'll qualify for the list (and even if you don't, we still like to have special stuff in there). :)

Side note: since you seem to be going for high performance, having lots of RAM makes sense as far as I can tell, but ZFS doesn't really need that much as such. I've run a 12 TB pool (17 TB of raw disk space) on a machine with 4 GB of RAM without issues, and performance was pretty good (although I never really benchmarked it, and by now the setup has changed).

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic



.... Wait wait wait.

ZFS or UFS?

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and it's relation to OS X by Builder.

 

 


.... Wait wait wait.

ZFS or UFS?

 

8 SATA drives in ZFS, 4 SAS drives in UFS. From what I read, ZFS can be a pain in the ass to configure for iSCSI, and for smaller storage setups it's generally recommended to use UFS for iSCSI.
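For the ZFS half, the layout boils down to a single pool-creation command (a sketch only; in practice FreeNAS builds the pool from its web GUI, the pool name and the da0..da7 device names below are placeholders, and the UFS/iSCSI side isn't shown):

```python
# Sketch of the planned bulk pool: one 8-disk RAID-Z2 vdev.
# FreeNAS normally does this from the web GUI; names below are placeholders.
import subprocess

seagate_disks = [f"da{i}" for i in range(8)]   # placeholder device names for the 8x 2 TB drives

# create the RAID-Z2 pool for bulk storage
subprocess.run(["zpool", "create", "tank", "raidz2", *seagate_disks], check=True)

# sanity-check the resulting layout
subprocess.run(["zpool", "status", "tank"], check=True)
```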

 

http://doc.freenas.org/index.php/Hardware_Recommendations <-- states that you should have 1 GB of RAM per TB when using ZFS; less might work, but performance is drastically reduced in that case. NAS4Free's recommendations say it needs less RAM, but only when using UFS instead of ZFS (2 GB or so, if I remember correctly; I didn't pay much attention to that part, since whether I run FreeNAS or NAS4Free, I'm definitely going to use ZFS).

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


.... Wait wait wait.

ZFS or UFS?

ZFS on Linux (the machine is ZEUS, to be precise). I seem to recall that FreeNAS mentions at least 8 GB in their hardware requirements; not sure if there's a difference there between ZFS on Linux and FreeNAS, but I thought I'd mention it. ZFS loves RAM, I'm aware of that, but it can run on more modest setups. Just thought I'd share my experience (not that it matters much in this case).

EDIT:

Ah, you meant OP? lol, silly me. :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Well, the RAM situation shouldn't be any different for ZFS on Linux. The thing is that ZFS needs RAM to cache the data tables and additional metadata, and to pre-cache data before it's even requested. If you had 12 TB and used just 4 TB of it, that might be a different story, I'm not so sure, but in general the recommendations stay the same.
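If you want to see how much of that caching is actually going on, the ARC counters tell you directly (a small sketch for a ZFS-on-Linux box; the /proc path below is the usual ZFS-on-Linux location, while FreeNAS/FreeBSD exposes the same counters via sysctl instead):

```python
# Peek at the ZFS ARC (the RAM read cache) statistics on a ZFS-on-Linux machine.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"   # usual ZFS-on-Linux location

stats = {}
with open(ARCSTATS) as f:
    for line in f.readlines()[2:]:          # the first two lines are kstat headers
        name, _type, value = line.split()
        stats[name] = int(value)

hits, misses = stats["hits"], stats["misses"]
print(f"ARC size: {stats['size'] / 2**30:.1f} GiB (max {stats['c_max'] / 2**30:.1f} GiB)")
print(f"ARC hit rate: {hits / (hits + misses):.1%}")
```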

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


I count 2 × SATA ports and 3 × Mini-SAS on that picture (four SATA ports each), giving me a total of 14 ports (unless I'm missing something?).

EDIT:

Sigh, too slow for @Vitalius. :D

My bad, I assumed each mini-SAS port was providing four SATA ports. @Ahnzh you should be fine as far as drives go.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking


Well, it left me shell-shocked; I didn't even pay attention to the ports, to be honest, since the chipset usually provides 6 ports and even the smallest mini-ITX boards have at least 4. I really appreciate that you paid attention to even such a small detail, though. I'm new here, but it makes me feel safe and at home in this community.

 

 

Thanks a lot

 

/edit: typo

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Well, the RAM situation shouldn't be any different for ZFS on Linux. The thing is that ZFS needs RAM to cache the data tables and additional metadata, and to pre-cache data before it's even requested. If you had 12 TB and used just 4 TB of it, that might be a different story, I'm not so sure, but in general the recommendations stay the same.

Nah, I actually had it pretty much filled up. Towards the end (before upgrading to a bigger server), I had about 20 GB free. :D

Performance did drop significantly by then, of course, though I'm assuming not due to RAM. Read speeds were still >50 MB/s, but write speeds dropped below 20 MB/s and would dip down to 5 MB/s on occasion.

But for the most part (i.e. as long as there were still a few hundred gigabytes of free space) it ran very well (though of course, it wasn't as performance-oriented as your build, just a normal fileserver for personal data and media files, no need for extra-low latencies and/or very high bandwidth).

EDIT:

Well, it left me shell-shocked; I didn't even pay attention to the ports, to be honest, since the chipset usually provides 6 ports and even the smallest mini-ITX boards have at least 4. I really appreciate that you paid attention to even such a small detail, though. I'm new here, but it makes me feel safe and at home in this community.

Thanks a lot

Well, the storage section is pretty easy-going and we have a few competent people here. Also, we don't busy ourselves with the petty quasi-religious debates which are all too common on the CPU and GPU subforums; it's pretty peaceful here for the most part (well, on occasion some people get a bit passionate about SSDs, but it's not too bad usually). ;)

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Performance did drop significantly by then, of course, though I'm assuming not due to RAM. Read speeds were still >50 MB/s, but write speeds dropped below 20 MB/s and would dip down to 5 MB/s on occasion.

 

Actually, that wouldn't have happened with more RAM. RAM acts as a write/read cache in ZFS. Of course HDDs get slower bit by bit as they fill up, because free sectors become harder to reach, but ZFS's caching usually keeps that from showing. The downside is that blackouts can be hazardous to ZFS storage (when only half a file has been written), so a UPS is recommended for ZFS-based devices. The same goes for ECC RAM, since memory errors get written out to the hard drives really easily. When using ZFS you generally want to make sure everything that happens to the file system is error-free, and you still want to scrub the pool every 2 weeks or so.
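For the scrubbing part, kicking one off and checking on it is a one-liner each (a minimal sketch; "tank" is a placeholder pool name, and the every-two-weeks schedule itself would come from cron or the FreeNAS scheduler, not from this script):

```python
# Start a scrub on the pool and print its status.
import subprocess

POOL = "tank"   # placeholder pool name

subprocess.run(["zpool", "scrub", POOL], check=True)      # starts the scrub in the background
result = subprocess.run(["zpool", "status", POOL],
                        capture_output=True, text=True, check=True)
print(result.stdout)                                      # shows scrub progress / last scrub result
```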

 

And to be honest, GPU/CPU stuff was important to me when I was younger... I played games, went to LANs, had school and LOADS of free time. Now I'm working, I usually get maybe 3 hours of free time a day at most, and when I do have free time there's more important stuff to do than worrying about CPU and GPU power. I just feel like I'm with people who are the same, and that makes me feel at home.

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


Actually, that wouldn't have happened with more RAM. RAM acts as a write/read cache in ZFS. Of course HDDs get slower bit by bit as they fill up, because free sectors become harder to reach, but ZFS's caching usually keeps that from showing.

Oh, right, forgot about that, makes sense. :)

 

And yeah, in my new server I do actually have ECC RAM; it was one of the reasons I upgraded (the original machine was not put together with ZFS in mind, because at the planning stage ZFS on Linux wasn't yet production-ready).

And to be honest, GPU/CPU stuff was important to me when I was younger... I played games, went to LANs, had school and LOADS of free time. Now I'm working, I usually get maybe 3 hours of free time a day at most, and when I do have free time there's more important stuff to do than worrying about CPU and GPU power. I just feel like I'm with people who are the same, and that makes me feel at home.

It's not that I don't care about CPU/GPU stuff (I have two dual-CPU machines and a Titan after all, and would love to have a quad-socket :D), but I can't really say I care enough to get into heated arguments about that kind of stuff anymore.

Then again, my perspective on the subject might be a bit skewed, because as a mod I mostly come into contact with the debates on these subjects that turn sour. ;)

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


makes sense in some way :D

 

I'm more into silent builds nowadays: a slightly underclocked 4770K with a huge Prolimatech cooler, an Inno3D GeForce GTX 780 Ti iChill HerculeZ X3 Ultra, SSDs in RAID 0 and a be quiet! PSU, all in a Corsair 350D with Blacksilent Pro fans at low voltage (not my Windows workstation, my private one). Just silent: enough power, no flashing lights or loud fans, whisper quiet. Not a bad rig, but I mostly have it because I wanted to have it, no real reason; I'm not even using it every day.

 

Storage became a far bigger problem for me, and since I need a potent expansion and a DAS + NAS + internal storage upgrade for my workstations would cost about as much as a NAS/SAN with 10GbE, I went for the latter, since it's less complicated and more future-oriented.

 

 

And to get a truly production-ready ZFS you need to go with Solaris or OpenBSD, in my opinion

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


makes sense in some way :D

 

I'm more into silent builds nowadays: a slightly underclocked 4770K with a huge Prolimatech cooler, an Inno3D GeForce GTX 780 Ti iChill HerculeZ X3 Ultra, SSDs in RAID 0 and a be quiet! PSU, all in a Corsair 350D with Blacksilent Pro fans at low voltage (not my Windows workstation, my private one). Just silent: enough power, no flashing lights or loud fans, whisper quiet. Not a bad rig, but I mostly have it because I wanted to have it, no real reason; I'm not even using it every day.

Yes, silence is nice. Very high on my list of priorities for my current main build (HELIOS).

And to get a truly production-ready ZFS you need to go with Solaris or OpenBSD, in my opinion

I was a bit skeptical about ZFS on Linux at first, so I did some reading on it before deploying it.

Since the Lawrence Livermore National Laboratory is actively using it in their supercomputer clusters (and is also a major part of its development, as far as I understand), I'm confident it's stable enough not to have to worry (well, at least not more than with any other branch). The only thing I really miss is integrated encryption (which would require going with Oracle's proprietary branch).

The primary source of issues seems to be buggy integration with your distro of choice, so if you wish to use it, make sure to pick the right one. Arch has been very good for me (yeah, I know, "bleeding edge, unstable, blablabla", but it's actually been rock solid for the three years I've been using it), and Gentoo too from what I've read. Debian-based distros seem to be a bit more hinky in that regard according to some posts I've come across, though I don't have any personal experience with ZFS + Debian.

I did look into and experiment with FreeBSD (not OpenBSD yet) at some point, but I don't really have the time to get to know it well enough to really feel comfortable with it (which I'd say is important before deploying it on my fileserver), and when I toyed around with networking between Linux and FreeBSD I encountered a few... idiosyncrasies that would take quite a bit of time to resolve (time I don't really have, being busy with college and all that).

I do intend to delve deeper into the BSD side of things again at some point, but it's going to have to wait until I have lots of spare time. :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Nah, don't do that. BSD is a great system, but Red Hat-related systems are far better to spend time on. CentOS would be my choice: Red Hat started collaborating really actively with the CentOS community, so I expect the Red Hat pilot to be adapted to CentOS in some way as well (that's a better Webmin that's deeply integrated into the system). If you're looking for bleeding-edge tech, Fedora is your choice, especially since a server edition of Fedora is being developed at the moment. BSD is just a great base for developing things because of its licensing.

 

 

But that ZFS thing was new to me; I appreciate the information.

 

FreeNAS/NAS4Free supports encryption though, so a BSD solution would work in that case. I dislike encryption, though. It makes no sense to me as long as you're not keeping life-and-death data on your disks, or highly valuable cutting-edge technology schematics or something like that. For home use it's like trying to hit a fly with an axe.

 

/edit: sorry, I tend to edit my posts regularly during the first 10 minutes after posting

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log

