
Regarding the backup: you're using automatic snapshots and then replication tasks to back the data up to a different volume? I couldn't find many guides on this, and this was the approach I took; I wasn't sure if there was a better method. I don't have very large changes frequently, so it's working just fine; the most I might have in a day is 30 GB.

 

Anything in this setup you wish you'd done differently? If I may ask, how much do you have invested in this setup? I feel the burn and I'm only $2k into mine.

 

Yup. Automatic snapshots are just so convenient and simple once you get them set up. I really love the features that come with ZFS out of the box. It really is the filesystem the world should have had 20 years ago!
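For anyone curious what the snapshot-plus-replication cycle boils down to under the hood, it is essentially two ZFS commands: take a new snapshot, then incrementally send the delta since the previous snapshot to the backup pool. A minimal dry-run sketch (dataset names `tank/data` and `backup/data` are hypothetical; FreeNAS's periodic snapshot and replication tasks run the equivalent for you):

```python
from datetime import datetime

def replication_commands(dataset, backup_dataset, prev_snap, now):
    """Build the two ZFS commands one snapshot-plus-replication cycle runs.

    Dry-run sketch: returns the command strings instead of executing them.
    """
    new_snap = f"{dataset}@auto-{now:%Y%m%d-%H%M}"
    return [
        # 1. periodic snapshot task: cheap, atomic point-in-time copy
        f"zfs snapshot {new_snap}",
        # 2. replication task: incremental send of changes since prev_snap
        f"zfs send -i {prev_snap} {new_snap} | zfs recv -F {backup_dataset}",
    ]

cmds = replication_commands("tank/data", "backup/data",
                            "tank/data@auto-20160201-0300",
                            datetime(2016, 2, 2, 3, 0))
for c in cmds:
    print(c)
```

Because the send is incremental, only the day's changed blocks (the 30 GB mentioned above, at most) cross to the backup volume.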

 

I've reused a number of components I already had (mostly disks and a few other bits here and there), so I'm not really sure of the total cost with the recycled old servers. At a guess, I'd say around £3,000.

 

Ideally I would have built two identical machines and performed complete snapshot replication between the two, but money doesn't really permit.

 

The case is a bit of a squeeze to connect all 9x SAS SFF-8087 cables up to the backplanes; cable routing isn't especially easy with that many cables! I think Supermicro envisioned most people using the backplanes with a single SFF-8087 with contention, but I didn't want to compromise, as I had already sourced the HBAs at a really good price.

 

I can see that I may want to put a 10 Gb network card in later down the line, so I'm kicking myself for not buying the slightly more expensive server motherboard with dual 10 Gb NICs rather than dual 1 Gb NICs. Still, I was trying to be frugal with the initial components, and I always have the option to spend the money later when I decide I really need to.

 

I'm very happy with it so far. It's leaps and bounds better than the hotchpotch setup I had before (a QNAP TS-859 with 8x 2TB disks in RAID6, and two 4x 4TB RAID5 DASes connected via eSATA-to-USB3 to a TranquilPC NFS head).

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6Gb/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB HGST HDN724030ALE640, 4x 4TB HGST HDS724040ALE640, 3x 3TB Western Digital Red WD30EFRX-68AX9N0


Long time lurker, first time poster. I actually created an account to post in here.

 

Hardware

 

HP MicroServer N40L

CASE: Stock
PSU: Stock
MB: Stock
CPU: AMD Turion
HS: Stock AMD heatsink
RAM: 8GB ECC DDR3
RAID CARD: OEM
HDD: 4x 3TB WD Red

NIC: Quad-port Intel NIC


Software and Configuration:
My server is running FreeNAS. Advertised storage is 12TB. My main array consists of 4x 3TB drives in RAIDZ2, so I end up with 5.17TB of actual storage space on that array.
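The gap between the advertised 12TB and the ~5.17TB reported comes from two things: RAIDZ2 reserves two drives' worth of capacity for parity, and drives are marketed in decimal TB while ZFS reports binary TiB. A rough back-of-the-envelope check (illustrative only; the exact FreeNAS figure is a bit lower still because of swap partitions and ZFS reservations):

```python
# 4x "3TB" drives in RAIDZ2: two drives' capacity goes to parity,
# and marketing TB (10^12 bytes) shrink when reported as TiB (2^40 bytes).
disk_bytes = 3 * 10**12      # one "3TB" drive as marketed
disks, parity = 4, 2         # RAIDZ2 tolerates two drive failures

raw_tb = disks * disk_bytes / 10**12
data_tib = (disks - parity) * disk_bytes / 2**40

print(f"advertised raw: {raw_tb:.0f} TB")
print(f"data space before ZFS overhead: {data_tib:.2f} TiB")
```

Subtracting FreeNAS's per-disk swap partitions and ZFS's metadata reservations brings that ~5.46 TiB down toward the reported 5.17.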


Usage:
I use the storage for movies and music, and for backups of PCs and phones. I use about a TB for VM storage. I run Plex Media Server on one machine and Kodi on another, as well as two Fire Sticks, all pulling from the NAS.

Backup:

In the land of hopes and dreams.

Additional info:
I'm currently piecing together a build to help relieve my hardware obsession, a.k.a. consolidate machines. I'm also working on another FreeNAS machine that is capable of holding more drives.

Photos:

 

media storage

IMG 20160113 042159


  • 2 weeks later...

Today I received the upgraded components for my server. I'll still be running FreeNAS 9.3 and I'll probably still use the same install. 

 

The updated specs:

  • Xeon E3 1230v2 
  • Supermicro X9SCM-F 
  • 32GB ECC DDR3 
  • SYBA SI-PEX40064 Sata Controller 
  • NZXT Source 210 
  • Corsair CX500m 
  • 3x 4TB WD Red (JBOD)
  • 1TB Seagate Barracuda (~2007) + 1TB WD Green (~2009) (mirrored)
  • 1TB WD Blue

The cable management is atrocious, I know. Unfortunately, the Source 210 doesn't have a large enough cutout to get the 24-pin cable up at the top (and right now it's a few inches too short to reach through the side cutout -- so I'll need to buy an extension at some point). I also haven't connected any of the drives yet, as I'm running a stress test in Windows and didn't want to mess around with the other drives.

 

I would love to pick up a Define R4/R5 and replace the cheap Source 210, as well as replace the USB with an SSD, but I don't foresee either happening any time soon as I don't see either worth the money.

 

The new server:

BbCkJhf.jpg

PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.



What about a node 804 instead?

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


What about a node 804 instead?

I think I'd rather have the R4/R5 and keep ATX support, as well as easier cable management and what I think is a better drive mounting system. (Plus support for an extra two drives through the 5.25" bays -- though that's one thing I actually prefer about the Source 210: since it has three 5.25" bays, I can add five more drives.) The bigger issue is the cost... the Source 210 is equally (even more, tbh) functional and doesn't require me to spend $80.




The R3 was great for drives :P

 

11x3.5" drives and 1x2.5" drive

 

(pic from 2011)

2011-10-15%2019.32.25.jpg

 

EDIT:

Pro tip.

Don't drop it :P

(sorry for super bad pics)

 

2012-05-28%2012.50.20.jpg2012-05-28%2012.50.04.jpg


Hardware
CASE: Norcotek 4224
PSU: Silverstone StriderPlus S70F-PB 
MB: Supermicro X10SRA C612
CPU: E5-2620 2.4GHz
HS: Noctua NH-U9DX i4 
RAM: 16GB DDR4 ECC
RAID CARD 1: IBM M1015, Flashed with "IT" firmware
RAID CARD 2: HP 36-port SAS expander card
SSD: Kingston SSDnow V300 120GB
HDD 1: 10 x 3TB WD Greens (WDIdle)

FANS: 3x Noctua 120mm NF-S12B Redux Edition 700RPM 


Software and Configuration:
Server is running ESXi 5.5. I have a few Debian VMs for other stuff like Plex (single-purpose VMs), but FreeNAS for ZFS storage.

Usage:
It's for general storage, mostly videos.

Backup:
N/A 

Photos:

IMAG0309.jpg

J4SMy56.png


So a terrible thing happened.... 

 

I ran out of space on my NAS (22TB). Unfortunately my case is already full of drives at the moment, so my temporary solution was/is to put the 3x 8TB Seagate Archive drives in an external HDD dock. Now my storage is 46TB raw.

 

I will do a proper update post once I have bought a rackmount case. Speaking of which, any recommendations? Looking for something not too expensive, something like the Silverstone or Norco rack cases, preferably with 20+ drive bays.


25 minutes ago, unknownkwita said:

So a terrible thing happened.... 

 

I ran out of space on my NAS (22TB). Unfortunately my case is already full of drives at the moment, so my temporary solution was/is to put the 3x 8TB Seagate Archive drives in an external HDD dock. Now my storage is 46TB raw.

 

I will do a proper update post once I have bought a rackmount case. Speaking of which, any recommendations? Looking for something not too expensive, something like the Silverstone or Norco rack cases, preferably with 20+ drive bays.

You might consider shopping for used server chassis on eBay. I picked up a used 24-bay 4U Supermicro chassis for $200 myself. The eBay seller's store was close to my house, so I was able to just drive over and pick it up.

 

Otherwise, I would look at Norco (new). Supermicro (new) is really expensive.

Link to comment
Share on other sites

Link to post
Share on other sites

Alright, we have a new stats script for IPS4, with new plots and a new ranking table! We've also added a new rule: a system must now have both more than 10 TB of storage and more than 5 storage drives (due to 8 TB and 10 TB drives now being a thing). So some systems which were previously ranked have been dropped to the noteworthy list.

 

We now also have some fancier plots for various things.

 

See here:

 

 

 

On 9/28/2015 at 2:46 PM, PhantomWarz said:

Photos:

To be attached soon....

 

If you can add some pics, I can add your MacBook USB creation to the noteworthy list, since we don't count USB drives. :)

 

On 9/28/2015 at 4:44 PM, AnonymousGuy said:

Well GG Seagate, game over for me:

Damn, that's a bummer. :(

Updated your config accordingly though.

On 9/29/2015 at 1:09 PM, Alexdaman said:

- update -

Updated. 

On 10/4/2015 at 1:01 AM, thebennyboy said:
Storage drives: 4 x 3TB Seagate Barracuda (ST3000DM001)

Added to noteworthy systems (due to the new rule being minimum 5 drives for the main list).

On 10/29/2015 at 3:06 AM, Patrick3D said:

- update -

Updated.

On 10/30/2015 at 8:41 PM, ElfFriend said:

Actual OP related stuff:

If you upload some pics of that, I can bump you up to the main list. 

On 11/2/2015 at 1:14 PM, nik96 said:

- snip -

 

Not too shabby at all, welcome to the list!

 

On 11/15/2015 at 8:16 PM, alex75871 said:

Here's my home server I built about a year ago :)

Added, welcome to the list!

On 11/20/2015 at 10:00 AM, MyNameIsNicola said:

 

An update on the disks in this, along with a little Perl script to export my FreeNAS disk information to a CSV file for my own records. It exports things like device name/path, disk model, capacity, form factor, speed, firmware, serial, ZFS GPTID, etc.

https://nicolaw.uk/#freenas_disk_info.pl

 

Funny, now that I've switched from Perl to Python for the ranking script, somebody bumps in here mentioning Perl! I'll have to give it a look sometime, thanks!

 

On 12/1/2015 at 2:20 PM, weeandykidd said:

- update -

Updated.

On 12/2/2015 at 10:56 PM, Ramaddil said:

Update #7 Added 2TB and changed PSU to HX1000i from TX850

Updated. 

On 12/6/2015 at 9:57 PM, Rudel said:

- snip -

Very nice, welcome to the rankings!

On 12/7/2015 at 4:12 AM, Cvet76 said:

03:12 am:
"UUU! This topic looks interesting! I'll just look at a few solutions people have come up with, then I'm going to bed."
05:10 am:
Well... We've all been here.

 

You're lucky, you're just reading the thread, maintaining it is even worse! :D

(okay, it's actually pretty cool, I don't mind)

 

On 12/30/2015 at 7:38 PM, unijab said:

- snip -

Hardware, sweet hardware, isn't it nice when you get those boxes?

On 1/5/2016 at 0:33 AM, MrBucket101 said:

- snip -

Those are some nice numbers, I wouldn't mind getting those at all (although I'm bottlenecked by my network anyway)!

Updated.

 

On 1/5/2016 at 2:10 PM, maxtch said:

- snip -

 

Excellent, welcome to the list!

 

On 1/10/2016 at 10:27 PM, MyNameIsNicola said:

Cat tax

Well, well, well, what have we here? Looks like if you'd just sneezed once or twice during filling your shopping cart, you might have accidentally bumped looney from the #1 spot. :D

 

Really digging the rack case on wheels.

 

And I shall relay your cat tax to the overlords of the interwebz.

 

 

On 1/13/2016 at 10:14 AM, adamsir2 said:

 

IMG 20160113 042159

 

Always good to see a nice and compact system. Added to the noteworthy list (due to the new 5-drive minimum rule).

 

On 1/27/2016 at 0:17 AM, djdwosk97 said:

- snip -

Updated.

 

 

 

On 1/29/2016 at 11:45 AM, RandomNOOB said:

- droppage -

Ouch!

 

 

On 1/30/2016 at 5:18 AM, Morgo_mpx said:

- snip -

 

Splendid, added to list!

 

2 hours ago, unknownkwita said:

So a terrible thing happened.... 

 

I ran out of space on my NAS (22TB). Unfortunately my case is already full of drives at the moment, so my temporary solution was/is to put the 3x 8TB Seagate Archive drives in an external HDD dock. Now my storage is 46TB raw.

 

I will do a proper update post once I have bought a rackmount case. Speaking of which, any recommendations? Looking for something not too expensive, something like the Silverstone or Norco rack cases, preferably with 20+ drive bays.

 

Oh, the tragedy! I only saw your post as I was typing out this response, so it's not yet updated, but ah well, such is life.

 

 

If I have forgotten somebody or there's an error somewhere, please notify me, thanks!

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


The new script and graphics are quite lovely. The new ranking criteria is really neat as well.


14 hours ago, scottyseng said:

The new script and graphics are quite lovely. The new ranking criteria is really neat as well.

Thanks! Now I just need to refactor the horrifying mess that is the ranking script. I was a bit lazy and just wanted to get it done, so I didn't pay much attention to good coding practices; plus I'm not very experienced in Python. :D

 

But yes, the topic of huge HDDs starting to mess up the rankings had been raised a few times, so we thought it was time to take that into account and mitigate the issue. At first I just took the product of drive count and total capacity, but the results were a bit extreme, hence my switch to using the logarithm of the drive count. An example of how it looked without the log:

Rankings w/ product
Rank Username Ranking Points Capacity Drives Case
1 looney 6232 152 41 Supermicro SuperServer 6027TR-DTRF
2 STUdog 3486 83 42 Coolermaster Cosmos 2 + Norco
3 dangerous1 3399 103 33 Lian Li PC-A70F
4 Ramaddil 1917 71 27 Lian Li 343B
5 Alexdaman 1863 69 27 Ri-Vier 24bay 4U Case
6 AnonymousGuy 1728 72 24 Fractal Node 804
7 Whaler_99 1610 46 35 Antec 1200 v3
8 raphaex 1440 60 24 Norco RPC-4224 4U
9 RandomNOOB 1152 48 24 NorcoTek RPC-4224
10 madcow 1116 62 18 Supermicro SC846BE26-R920B
11 Ssoele 1024 64 16 Norcotek RPC4216
12 madcow 1024 64 16 Supermicro SC846BE26-R920B

So yeah... results got a bit too extreme for our tastes, with some systems surpassing others despite having significantly less storage capacity.
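The difference between the two schemes is easy to reproduce from three rows of the table above; a quick sketch matching the script's switch from a plain product to capacity × ln(drive count):

```python
import math

# (capacity in TB, storage drive count) for three systems from the table.
systems = {
    "looney":     (152, 41),
    "STUdog":     (83, 42),
    "dangerous1": (103, 33),
}

# Plain product: drive count weighs in linearly, so STUdog's 42 drives
# outrank dangerous1 despite 20 TB less capacity.
product = {u: cap * n for u, (cap, n) in systems.items()}

# Log-damped points: capacity dominates, drive count only nudges the score.
points = {u: round(cap * math.log(n), 2) for u, (cap, n) in systems.items()}

print(product)  # {'looney': 6232, 'STUdog': 3486, 'dangerous1': 3399}
print(points)   # {'looney': 564.46, 'STUdog': 310.23, 'dangerous1': 360.14}
```

With the logarithm, dangerous1 moves back ahead of STUdog, matching the final ranking table.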

 

Figuring out the table design was also quite interesting. At first I was just going to replicate the old table, but then I realized I could do significantly more with the new forum. Plus, spoilers actually look nice now, so I could use them without screwing everything up aesthetically, which made it possible to add more plots without the post becoming a mile long.

 

This is another example of the table, the first version, where I added bars. Not very readable, I think, hence the change:

Spoiler

 

Rank Username Ranking Points Capacity Drives Case
1 looney 564.46 152 41 Supermicro SuperServer 6027TR-DTRF
2 dangerous1 360.14 103 33 Lian Li PC-A70F
3 STUdog 310.23 83 42 Coolermaster Cosmos 2 + Norco
4 Ramaddil 234.00 71 27 Lian Li 343B

 

Also, credit to @MG2R for helping me figure out a sane way to structure the table. :D



1 hour ago, d33g33 said:

I tried to follow the point system but I can't hahahaha

I show up as rank 40 under capacities but then nowhere else...

 

Not that it really matters, because this happened.

 

There's only one ranking, which goes by points. And on that, you're rank 40. ;)

The capacity and drive count are just for additional info, but the actual rankings go by points. That's why the table and the three ranking plots are all in the same order, they're all sorted by ranking points (which is the product of the capacity in terabytes and the natural logarithm of the number of storage drives).

 

And my sympathies, that sucks. Did you lose anything critical?



2 minutes ago, alpenwasser said:

 

There's only one ranking, which goes by points. And on that, you're rank 40. ;)

The capacity and drive count are just for additional info, but the actual rankings go by points. That's why the table and the three ranking plots are all in the same order, they're all sorted by ranking points (which is the product of the capacity in terabytes and the natural logarithm of the number of storage drives).

 

And my sympathies, that sucks. Did you lose anything critical?

Ahhhhhhh so I am, must be blind,


Nah just some raw video stuff and my gopro footage from when I drove across the country. The rest is recoverable.


1 hour ago, d33g33 said:

Ahhhhhhh so I am, must be blind,


Nah just some raw video stuff and my gopro footage from when I drove across the country. The rest is recoverable.

Well then, now that you have an excuse to upgrade, I look forward to seeing your 100 drives, 600 TB machine, right? :P



1 hour ago, d33g33 said:

I tried to follow the point system but I can't hahahaha

I show up as rank 40 under capacities but then nowhere else...

 

Not that it really matters, because this happened.

T4skoAE.png

That really sucks, but I don't understand. The top shows 2 failed drives, but the bottom only shows an issue with one?

Looking to buy GTX690, other multi-GPU cards, or single-slot graphics cards: 

 


On 2/8/2016 at 9:47 PM, alpenwasser said:

Well then, now that you have an excuse to upgrade, I look forward to seeing your 100 drives, 600 TB machine, right? :P

Well, in all seriousness, I need to move to a 10GbE-capable device. What that will look like, I have no idea at the moment.

23 hours ago, brwainer said:

That really sucks, but I don't understand. The top shows 2 failed drives, but the bottom only shows an issue with one?

So there's a bit of a story, so I'll explain; otherwise I'll get told how stupid I am.
So I bought the DS1511+, a 5-bay NAS, but one capable of taking another 5x drives, and then another 5x on top of that, by way of external expansion caddies.
I just had the main unit's 5 drive bays at first, so I set it up as SHR(1), which is Synology's software-expandable RAID with 1 redundant disk. Fine.
I filled the 4 disks + redundant one, so I bought an expansion unit. At this stage I should have moved to a 2nd redundant disk, but that would have meant moving the data off and rebuilding the volume; I couldn't transition from 1- to 2-disk redundancy in place.
Probably right at that point it became a matter of when, not if, I'd lose the whole volume. 1 redundant disk over 5+ disks (let alone the 10 that I was running) was just asking for trouble.

 

So to explain what happened above: I had a bad disk, and I tried to hold off as long as I could because of funds. I tried to be cheeky, and instead of replacing a disk that had had a couple of re-initializes, I pulled the disk, zeroed it, and popped it back in. It was rechecking and being added back to the volume in a healthy state (disk 3 on the DX510), and then boom, I lost disk 2 in the main unit.

All over.
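The "when, not if" intuition above can be put into rough numbers: with single redundancy, once one drive dies, the loss of any one of the remaining drives during the rebuild kills the whole volume. A toy model (the 2% per-drive failure chance during a rebuild window is purely illustrative; real risk depends on drive age, rebuild duration, and correlated failures):

```python
# Risk of total volume loss with single parity: after the first failure,
# the array survives only if all remaining drives make it through the
# rebuild. p is an assumed per-drive failure probability for that window.

def p_volume_loss(n_drives, p_fail_during_rebuild):
    survivors = n_drives - 1  # drives that must all hold on during rebuild
    return 1 - (1 - p_fail_during_rebuild) ** survivors

for n in (5, 10):
    print(f"{n} drives, single parity: {p_volume_loss(n, 0.02):.1%} loss risk")
```

Even with these made-up numbers, doubling the drive count roughly doubles the chance of a second failure finishing off the volume, which is why SHR-2 (or RAIDZ2) is the usual advice past a handful of disks.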


On 11/20/2015 at 5:00 AM, MyNameIsNicola said:

 

An update on the disks in this, along with a little Perl script to export my FreeNAS disk information to a CSV file for my own records. It exports things like device name/path, disk model, capacity, form factor, speed, firmware, serial, ZFS GPTID, etc.

https://nicolaw.uk/#freenas_disk_info.pl

An example of the output put into Google Docs (minus some hidden information like the serials and GPTIDs): http://i.imgur.com/Ob9P81V.png

 

Hope the script is useful to someone other than me.

 

I'll be posting some half decent pictures soon as I'm racking it all up properly with PDUs etc this weekend.

I had missed this and it was quoted just now - I found it very useful ^_^. Being a bit of a noob, I'm not sure what two of the columns are. A sample from one is "5 000cca 24ce6fcba" and from the other "75e23725-cf3e-11e3-9840-00221991a1db".


5 hours ago, Mikensan said:

I had missed this and it was quoted just now - I found it very useful ^_^. Being a bit of a noob, I'm not sure what two of the columns are. A sample from one is "5 000cca 24ce6fcba" and from the other "75e23725-cf3e-11e3-9840-00221991a1db".

I've just updated the script on my homepage/wiki so that the first row printed gives you the column names: https://nicolaw.uk/freenas_disk_info.pl

 

The first column that you mention is a WWN. (See https://en.wikipedia.org/wiki/World_Wide_Name)

 

The second column is the FreeBSD GEOM GPTID of the partitions on that device that are assigned to your ZFS VDEVs. (See the output of glabel list and zpool status -v commands).
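For reference, the WWN sample from the question ("5 000cca 24ce6fcba") splits into fixed-width fields once the spaces are removed: an NAA-5 address is 16 hex digits, made up of a 4-bit NAA type, a 24-bit IEEE OUI identifying the vendor, and a 36-bit vendor-assigned part. A small sketch (the OUI 000cca belongs to Hitachi/HGST, consistent with the drives in question):

```python
# Split an NAA-5 World Wide Name into its fixed-width fields:
# 16 hex digits = 4-bit NAA type + 24-bit IEEE OUI + 36-bit vendor part.

def parse_wwn(wwn):
    digits = wwn.replace(" ", "").lower()
    if len(digits) != 16:
        raise ValueError("expected a 64-bit NAA WWN (16 hex digits)")
    return {"naa": digits[0], "oui": digits[1:7], "vendor": digits[7:]}

print(parse_wwn("5 000cca 24ce6fcba"))
# {'naa': '5', 'oui': '000cca', 'vendor': '24ce6fcba'}
```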

 

I find this information handy to have a copy of in an emergency because it means that you will have all the information about any given disk to hand in case you need to replace one. (I've seen some disks that annoyingly only have a barcode of the WWN on the exterior of the drive and not the serial, or perhaps the other way around). I just like belt-and-braces record keeping when you're dealing with hoarding of precious data pr0n.

 

I hope this helps!



23 minutes ago, MyNameIsNicola said:

I've just updated the script on my homepage/wiki so that the first row printed gives you the column names: https://nicolaw.uk/freenas_disk_info.pl

 

The first column that you mention is a WWN. (See https://en.wikipedia.org/wiki/World_Wide_Name)

 

The second column is the FreeBSD GEOM GPTID of the partitions on that device that are assigned to your ZFS VDEVs. (See the output of glabel list and zpool status -v commands).

 

I find this information handy to have a copy of in an emergency because it means that you will have all the information about any given disk to hand in case you need to replace one. (I've seen some disks that annoyingly only have a barcode of the WWN on the exterior of the drive and not the serial, or perhaps the other way around). I just like belt-and-braces record keeping when you're dealing with hoarding of precious data pr0n.

 

I hope this helps!

Ah OK, very cool and well done. I find it useful as well; I like having my network/servers documented, and this is right up my alley. The GPTID explains why I had an empty row with nothing but a GPTID: I partitioned out 2GB on my SSD for ZIL/SLOG (seeing if it helps my iSCSI connections), and I guess that's the ID for that 2GB partition. Neat :-)


1 minute ago, Mikensan said:

Ah OK, very cool and well done. I find it useful as well; I like having my network/servers documented, and this is right up my alley. The GPTID explains why I had an empty row with nothing but a GPTID: I partitioned out 2GB on my SSD for ZIL/SLOG (seeing if it helps my iSCSI connections), and I guess that's the ID for that 2GB partition. Neat :-)

Mmm, interesting. Would you mind messaging me the output you get for that line and the lines above and below it? (Anonymise anything as you wish, of course.) I'll see if I can fix the script, as it shouldn't be doing that. I've obviously made a silly assumption or missed an edge case in my parsing logic.



Hardware

CASE: Norcotek RPC2304 (plus ICY DOCK MB324SP-B 4x2.5" hotswap)

PSU: Corsair RM450

MB: Supermicro X10SLM-F

CPU: Intel Xeon E3-1230v3

HS: Stock Intel heatsink

RAM: 16GB ECC DDR3L 1600

HBA: LSI 9207-4i4e

OS SSD: Samsung 850 Evo 250GB

HDD 1: 1x 3TB HGST HDN724030AL

HDD 2: 2x 3TB Seagate ST3000DM001-1E61

HDD 3: 1x 2TB WD WD20EFRX-68E

SSD 1: 2x Samsung 850 Evo 250GB

SSD 2: 2x Crucial MX100 512GB

 

Software and Configuration:

OS is Windows Server 2012 R2 and the storage system is Storage Spaces. I have the boot SSD and two pool SSDs directly attached to the motherboard, and the RPC2304's 4x 3.5" drive bays attached to the LSI HBA. I picked this HBA because it comes in IT mode, and I can later add a SAS JBOD. I was basically limited to this chassis because the rack it has to fit into only has a 26" internal depth. Adding the Icy Dock 4x 2.5" bay was a snap: I took the front bezel off the case and Dremeled out a hole. As you can see in the images, it fits well with room to spare for cables. The only reason I can think of for Norco not to make a chassis with this type of capability is that you might run into issues with redundant ATX power supplies, which can be longer than normal power supplies. If I were doing this again, I might have gone for an SFX power supply for even more clearance.

Usage:

Bulk data storage, Plex server, VM host for Linux VMs (Hyper-V), backup target for computers in the house.

Backup:

The server backs itself up nightly to a virtual disk on the pool. All data (except the backups) is synced by BitTorrent Sync to another server with the same basic setup but in a Silverstone DS380 case, which as of right now only has 8TB of capacity. That server also runs nightly backups to an internal virtual drive, and is located about 300 miles away.

Additional info:

My usage of the two pool SSDs is mainly for Storage Spaces tiering, which means they are actually data drives, not cache drives (142GB used on each, 136 of which is allocated as the hot tier of the various virtual disks, and 16GB as writeback cache). So hopefully this means I can slide around the 5-drive requirement, with 6 drives actually being used to hold data (Storage Spaces does not keep a copy on the cold tier of data in the hot tier: a virtual disk created with 100GB on HDD and 10GB on SSD shows as 110GB capacity).

Photos:

Ignore the Adata SP600 and the 2x WD Red 1TB drives in the 2.5" bays; they aren't part of the final build. Those bays are where the 3 SSDs are sitting right now. (Neither is the PCIe mSATA card on the left of the first image; that was an experiment gone wrong, but thankfully I didn't lose any data because of it.)

(The second and third images should be vertical; if you open them directly in another tab your browser should be able to flip them right side up - at least Firefox does.)

Spoiler

IMG_0587.JPG

Spoiler

IMG_0588.JPG

Spoiler

IMG_0589.JPG

Spoiler

IMG_0590.JPG

Spoiler

IMG_0591.JPG

Spoiler

Storage%20Spaces.PNG

P.S. Two things: first, this whole thread should be in the new "Servers and NAS" subforum, but I can understand leaving it in Storage. Second, the DS380 server I mentioned above will soon reach the 10TB mark, with 5 HDDs and 2 SSDs (plus boot SSD); should I make a separate post for it when that happens?

 

Update 2/16/16: Added 2x 512GB SSDs. @alpenwasser


My NASter rig, which is my daily-driver router and NAS. This won't make the main list, but I don't know if it can count as an honorable mention. This is one of my über-cheap NAS-platform-based systems, with an emphasis on networking and some slightly better hardware.

 

Hardware
CASE: (I forgot)
PSU: Lenovo 250W OEM PSU (with all low-voltage caps replaced)
MB: Asus P5BV-C
CPU: Intel Core 2 Quad Q9300 (same silicon as Xeon X3320, and maybe I will get a X3330 for this rig)
HS: Lenovo OEM cooler
RAM: 4x Micron 2GB DDR2-800 ECC = 8GB (maxed out)
RAID CARD: Adaptec 6805
HDD Enclosure: Winsonic MRA501 4-bay SATA hot swapping enclosure (occupies 3 ODD slots, hacked to use the LED output from the RAID card)

HDD 1: 2x 3TB WD Red WD30EFRX

HDD 2: 1x 3TB HGST Ultrastar 5K3000 HUA5C3030ALA640

NIC 0: 2x Marvell 88E8052 PCIe GbE NIC (onboard) 

NIC 1: Intel i82546EB dual-port PCI GbE NIC (not PCIe or PCI-X)

NIC 2: Intel i82571EB dual-port PCIe GbE NIC

NIC 3: Ralink RT5360 PCI 150M 802.11b/g/n NIC

NIC 4: Broadcom BCM4322 PCI 300M 802.11a/b/g/n dual-band NIC

 

I bought the MB and HDD enclosure used, rescued the case from being sent to the recycling center, transplanted the PSU, CPU and HS from my dismantled Lenovo OEM workstation, and transplanted the RAID card from Battleship (since the Adaptec 6805 can work with the enclosure and case LEDs but the 3ware 9750-8i can't).

 

The multitude of NICs will become significant in the software part.

Software and Configuration:
The RAID card is configured as a 3-drive RAID-5 with 6TB effective storage space. This machine runs vanilla Ubuntu Server 15.10.
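As a quick sanity check on the array math (my sketch, not from the original post): RAID-5 stripes one drive's worth of parity across all members, so usable space is (n − 1) × drive size, which is how three 3TB drives yield 6TB.

```python
# RAID-5 usable capacity: one drive's worth of space is consumed by
# parity, which is striped across every member of the array.
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    if num_drives < 3:
        raise ValueError("RAID-5 requires at least 3 drives")
    return (num_drives - 1) * drive_tb

print(raid5_usable_tb(3, 3))  # 3x 3TB drives -> 6 TB usable
```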

Usage:
This server is my wireless router / NAS combo. My previous unit was an Apple Time Capsule, but its built-in single hard drive is very slow (probably because the internal drive, despite having onboard SATA connectors, is connected internally over USB); backing up my laptop to it was tedious, and restoring was a nightmare. I decided to build a new unit providing the same level of service (a wireless router and NAS combo with dual-band Wi-Fi, Time Machine capability, and lots of internal storage) but using proper server-grade hardware (ECC memory, redundant storage instead of a single drive) with enough capacity to saturate a dual-GbE aggregation.

 

System setup:

  • System:
    • Ubuntu Server 15.10 amd64
    • qemu-kvm capabilities enabled
  • Network:
    • Uplink: 1x dual-GbE aggregation (802.3ad) on the i82546EB NIC
    • Downlink: 5 logical interfaces bridged together:
      • 1x dual-GbE aggregation (802.3ad) on the i82571EB NIC
      • 2x single GbE links on the two 88E8052 NICs
      • 1x RT5360 as a 2.4GHz 802.11b/g/n mixed access point
      • 1x BCM4322 as a 5GHz 802.11a/n mixed access point
  • Services:
    • PPPoE for FTTH uplink with IPv4 NAT and IPv6 tunnelling
    • UPnP and NAT-PMP port mapping for IPv4 (IPv6 not firewalled)
    • AFP, NFS, SMB and WebDAV NAS access (WebDAV gave me the best throughput, with NFS a close second)
    • Time Machine backup over AFP
    • DDNS updates
    • DHCPv4 server
    • IPv6 stateless autoconfiguration via NDP, with DHCPv6 for DNS configuration (Windows-friendly IPv6)
    • Caching DNS with DNSSEC (optional DNSCrypt)
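For anyone replicating the bond-plus-bridge layout above, here is a rough ifupdown sketch of the downlink side (interface names are hypothetical, and this is my illustration rather than the original config; Ubuntu 15.10 used /etc/network/interfaces with the ifenslave and bridge-utils packages):

```
# /etc/network/interfaces (fragment) - interface names are hypothetical
auto bond0
iface bond0 inet manual
    bond-mode 802.3ad          # LACP aggregation on the dual-port PCIe NIC
    bond-slaves eth2 eth3
    bond-miimon 100

auto br0
iface br0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    bridge_ports bond0 eth4 eth5   # bond plus the two onboard GbE ports
```

The two wireless access points would join the same bridge via hostapd (bridge=br0 in hostapd.conf), giving the five logical downlink interfaces described above.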


Backup:
It runs a nightly incremental backup to the Time Capsule's upgraded internal drive.
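A nightly incremental like this is classically done with rsync's --link-dest option. As a rough illustration of the same idea (hypothetical paths and not the author's actual script), each snapshot hard-links files unchanged since the previous run, so every snapshot reads like a full backup while only storing the deltas:

```python
import filecmp
import os
import shutil

def snapshot(src: str, dest: str, name: str) -> None:
    """Create dest/name as an incremental snapshot of src.

    Files unchanged since dest/latest are hard-linked (near-zero extra
    space); new or modified files are copied. "latest" is then repointed.
    """
    prev = os.path.join(dest, "latest")
    new = os.path.join(dest, name)
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(new, rel), exist_ok=True)
        for fname in files:
            s = os.path.join(root, fname)
            p = os.path.join(prev, rel, fname)
            n = os.path.join(new, rel, fname)
            if os.path.isfile(p) and filecmp.cmp(s, p, shallow=False):
                os.link(p, n)        # unchanged: hard-link into new snapshot
            else:
                shutil.copy2(s, n)   # new or changed: real copy
    tmp = prev + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(name, tmp)
    os.replace(tmp, prev)            # atomically repoint "latest"
```

Run from cron each night with a date-stamped name (e.g. `snapshot("/srv/data", "/mnt/backup", "2016-02-16")`), old snapshots can simply be deleted when space runs low.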

Additional info:
If you are replicating this build, you can get away without the drive enclosure and the Adaptec 6805, both of which are fairly expensive. A used LSI 9260-8i or HP P410 will work just fine as the RAID solution; just get a case with enough internal bays. Used ECC RAM can be fairly reliable too and a lot cheaper (as little as 1/20 the price of new), as DDR2 is fairly difficult to find new now.

Photos:

 

IMG_0357.JPG

IMG_0359.JPG
